Commit Graph

3735 Commits

Author SHA1 Message Date
Nico Weber 846bf1d43f fix doc grammar-o to cycle bots 2020-01-02 12:11:59 -05:00
Nicolas Vasilache cd17c06989 [mlir][Linalg] NFC - Make consistent use of op.emitOpError
Summary: This is part of an ongoing cleanup and uniformization work.

Reviewers: ftynse

Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72084
2020-01-02 10:12:14 -05:00
Nicolas Vasilache a9d9aadcdf [mlir][Linalg] NFC - Cleanup Linalg Declarative Transformations
Summary:
This is part of an ongoing cleanup and uniformization work.

This diff performs 3 types of cleanups:
1. Uniformize transformation names.
2. Replace all pattern operands that need not be captured with `$_`.
3. Replace all usages of the pattern-captured op with the normalized `op` name (instead of positional parameters such as `$0`).

Reviewers: ftynse

Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72081
2020-01-02 10:11:37 -05:00
Nicolas Vasilache 324fd5902a [mlir][Linalg] NFC - Rename ViewTraits -> StructuredOpTraits
Summary: This is part of an ongoing cleanup and uniformization work.

Reviewers: ftynse

Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72079
2020-01-02 09:40:25 -05:00
Nicolas Vasilache afc25a43dc [mlir][Linalg] NFC - Rename LinalgGeneric -> GenericLinalg
Summary: This is part of an ongoing cleanup and uniformization work.

Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72078
2020-01-02 09:30:34 -05:00
Lei Zhang 0359e1d6be [mlir][spirv] NFC: Move shader ABI attributes to a new file
This allows us to include the definitions of these attributes in
other files without pulling in all dependencies for lowering.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D72054
2020-01-01 22:43:09 -05:00
Lei Zhang 5d38b2610f [mlir][spirv] Fix links in docs and update dialect docs
Summary:
This commit fixes links to code directories and uses doc links on
mlir.llvm.org where possible. The docs in TableGen dialect definition
is also updated to reflect recent developments.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D72051
2020-01-01 22:39:51 -05:00
Fangrui Song eeef50b1fe [mlir] Fix -Wrange-loop-analysis warnings
for (const auto &x : llvm::zip(..., ...))

->

for (auto x : llvm::zip(..., ...))

The return type of zip() is a wrapper that wraps a tuple of references.

> warning: loop variable 'p' is always a copy because the range of type 'detail::zippy<detail::zip_shortest, ArrayRef<long> &, ArrayRef<long> &>' does not return a reference [-Wrange-loop-analysis]
2020-01-01 16:06:04 -08:00
Alexandre Ganea 6656e961c0 [mlir] Fix compilation warnings
Fixes:
- (MSVC) F:\llvm-project\mlir\lib\Dialect\Linalg\Analysis\DependenceAnalysis.cpp(103): warning C4551: function call missing argument list
- (Clang) tools\mlir\lib\Dialect\SPIRV\SPIRVCanonicalization.inc(232,1): warning: unused function 'populateWithGenerated' [-Wunused-function]
2020-01-01 17:29:04 -05:00
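For illustration, a minimal standalone sketch of the MSVC C4551 pattern mentioned above; the function name and the shape of the fix are hypothetical, not the actual MLIR code:

```
void dumpState() {}

int main() {
  // MSVC emits C4551 when a function name appears as a statement without the
  // call parentheses: such a statement evaluates (and discards) the function's
  // address instead of calling it.
  // dumpState;   // warning C4551: function call missing argument list
  dumpState();    // intended call
  return 0;
}
```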
Alexandre Ganea 316f6003ef [mlir] Fix linking with LLD
The issue is that /WHOLEARCHIVE is interpreted differently in LLD, which needs the exact same path as the .lib; whereas link.exe can take the library name, without a path or extension, if that was already supplied on the cmd-line. I'll write a follow-up patch to fix the issue in LLD.
2020-01-01 17:29:04 -05:00
Alexandre Ganea 2b223bd1c7 [mlir] Fix warnings when compiling with Clang 9.0
Fixes: warning: comparison of integers of different signs: 'const unsigned int' and '(anonymous namespace)::OperationPrinter::(anonymous enum at F:\llvm-project\mlir\lib\IR\AsmPrinter.cpp:1444:3)' [-Wsign-compare]
2020-01-01 17:29:04 -05:00
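A small self-contained sketch of the -Wsign-compare situation described above, with hypothetical names standing in for the AsmPrinter enum; the usual fix is an explicit cast to a common type:

```
namespace {
// Stand-in for the anonymous-namespace enum from the warning text; its
// enumerators have a signed underlying type by default.
enum ElementKind { kRegion = 0, kBlock = 1 };
} // namespace

bool isRegion(unsigned index) {
  // `index == kRegion` compares unsigned against signed and trips
  // -Wsign-compare; casting one side makes the comparison well-defined.
  return index == static_cast<unsigned>(kRegion);
}

int main() { return isRegion(0) ? 0 : 1; }
```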
Jacques Pienaar 7544cb8807 [mlir][docs] Remove redundant path prefix
./ is not needed.
2019-12-31 11:03:40 -08:00
Jacques Pienaar 430bba2a0f [mlir] Make code blocks more consistent
Use the same form specification for the same type of code.
2019-12-31 09:54:16 -08:00
Nicolas Vasilache f5b7dd3c9e [mlir][Linalg] Delete unused LinalgLibraryOps.td
Summary: This has been previously renamed to LinalgStructuredOps.td

Reviewers: ftynse

Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, llvm-commits, ftynse

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72013
2019-12-31 09:58:33 -05:00
River Riddle 0d6ebb4f0d [mlir] Refactor operation results to use a single use list for all results of the operation.
Summary: A new class is added, IRMultiObjectWithUseList, that allows for representing an IR use list that holds multiple sub-values (used in this case for OpResults). This class provides all of the same functionality as the base IRObjectWithUseList, but for specific sub-values. This saves a word per operation result and is a necessary step in optimizing the layout of operation results. For now the use list is placed on the operation itself, so zero-result operations grow by a word. When the work for optimizing layout is finished, this can be moved back to being a trailing object based on memory/runtime benchmarking.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D71955
2019-12-30 20:50:07 -08:00
Kern Handa cde071c4bf [mlir] Update mlir/CMakeLists.txt to install *.td files
Currently when you build the `install` target, TableGen files don't get
installed.

TableGen files are needed when authoring new MLIR dialects, but right
now they're missing when using the pre-built binaries.

Differential Revision: https://reviews.llvm.org/D71958
2019-12-29 18:05:11 +01:00
Tung Le Duc e5957ac3d7 [mlir] Fix the wrong computation of dynamic strides for lowering AllocOp to LLVM
Leftover change from before the MLIR merge, reviewed and accepted at
https://github.com/tensorflow/mlir/pull/338.
2019-12-28 23:33:28 +01:00
River Riddle f83a8efe87 [mlir] Merge the successor operand count into BlockOperand.
Summary: The successor operand counts are directly tied to block operands anyway, and this simplifies the trailing objects of Operation (i.e., one less computation to perform).

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D71949
2019-12-27 20:35:31 -08:00
Lei Zhang b30d87a90b [mlir][spirv] Add basic definitions for supporting availability
SPIR-V has a few mechanisms to control op availability: version,
extension, and capabilities. These mechanisms are considered as
different availability classes.

This commit introduces basic definitions for modelling SPIR-V
availability classes. Specifically, an `Availability` class is
added to SPIRVBase.td, along with two subclasses: MinVersion
and MaxVersion for versioning. SPV_Op is extended to take a
list of `Availability`. Each `Availability` instance carries
information for generating op interfaces for the corresponding
availability class and also the concrete availability
requirements.

With the availability spec on ops, we can now auto-generate the
op interfaces of all SPIR-V availability classes and also
synthesize the op's implementations of these interfaces. The
interface generation is done via new TableGen backends
-gen-avail-interface-{decls|defs}. The op's implementation is
done via -gen-spirv-avail-impls.

Differential Revision: https://reviews.llvm.org/D71930
2019-12-27 16:25:09 -05:00
Lei Zhang 596012b256 [mlir][spirv] Update docs regarding how to define new ops and types
This commit expands on the steps of defining a new SPIR-V op and
also provides pointers on how to define a new SPIR-V specific type.

Differential Revision: https://reviews.llvm.org/D71928
2019-12-27 15:33:09 -05:00
MaheshRavishankar c3d3569d4c [mlir] Convert std.and/std.or ops to spv.LogicalAnd/spv.LogicalOr
The conversion from std.and/std.or to spv.LogicalAnd/spv.LogicalOr is
only valid for boolean (i1) types. Modify BinaryOpPattern in
StandardToSPIRV.td to allow limiting the type of the operands for
which the pattern is applied.

Differential Revision: https://reviews.llvm.org/D71881
2019-12-27 11:33:17 -08:00
Lei Zhang 69d85f805a [MLIR][spirv] Fix links in docs after repo migration
Summary:
This commit updates links to SPIR-V dialect code to LLVM monorepo
on GitHub. It also points to the operation doc on mlir.llvm.org.

Reviewers: mravishankar, denis13, ftynse

Reviewed By: ftynse

Subscribers: merge_guards_bot, mehdi_amini, rriddle, jpienaar, burmako, shauheen, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71926
2019-12-27 09:25:38 -05:00
wyzhao 2e5a75581c [mlir] fix typo in a comment
Trivial patch, reviewed and accepted on
https://github.com/tensorflow/mlir/pull/336 before MLIR merge.
2019-12-27 12:15:26 +01:00
Uday Bondhugula be775a0038 [MLIR] [NFC] fix unused var warning
Summary:
Fix this warning:
`
[69/106] Building CXX object tools/mlir/lib/Dialect/StandardOps/CMakeFiles/MLIRStandardOps.dir/Ops.cpp.o
/home/uday/llvm-project/mlir/lib/Dialect/StandardOps/Ops.cpp: In member function ‘virtual mlir::PatternMatchResult {anonymous}::ViewOpShapeFolder::matchAndRewrite(mlir::ViewOp, mlir::PatternRewriter&) const’:
/home/uday/llvm-project/mlir/lib/Dialect/StandardOps/Ops.cpp:2575:14: warning: variable ‘dynamicOffsetOperandCount’ set but not used [-Wunused-but-set-variable]
 2575 |     unsigned dynamicOffsetOperandCount = 0;
`

Reviewers: rriddle, mehdi_amini, ftynse

Reviewed By: ftynse

Subscribers: jpienaar, burmako, shauheen, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71922
2019-12-27 12:04:46 +01:00
Alex Zinenko cda94d3e8a [mlir] Floating constants for import-llvm
Summary:
`mlir-translate -import-llvm test.ll` was causing a segmentation fault if `test.ll` had `float` or `double` constants.
For example,
```
%3 = fadd double 3.030000e+01, %0
```
Now, it is handled in `Importer::getConstantAsAttr` (similar behaviour to normal integers).
Added tests for FP arithmetic

Reviewers: ftynse, mehdi_amini

Reviewed By: ftynse, mehdi_amini

Subscribers: shauheen, mehdi_amini, rriddle, jpienaar, burmako, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71912
2019-12-27 11:48:24 +01:00
Eric Christopher 371038e3ff Add an __attribute__((unused)) to populateWithGenerated since it might
not be used where defined and is autogenerated.
2019-12-26 18:48:59 -08:00
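A minimal sketch of the annotation described above, with a deliberately simplified signature (the real populateWithGenerated takes MLIR-specific parameters):

```
namespace {
// The attribute keeps -Wunused-function quiet in translation units that
// include the autogenerated definition but never call it.
__attribute__((unused)) static void populateWithGenerated() {}
} // namespace

int main() { return 0; }
```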
Eric Christopher e1838a1789 Fix a -Wcovered-switch-default warning by moving the unreachable out of the
covered switch.
2019-12-26 18:34:41 -08:00
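A hedged sketch of the fix pattern described above: the unreachable marker sits after the fully covered switch rather than in a default: case, so Clang's -Wcovered-switch-default stays quiet. The real code would use llvm_unreachable; __builtin_unreachable stands in here to keep the example self-contained:

```
enum class Kind { A, B };

int handle(Kind k) {
  switch (k) {
  case Kind::A:
    return 1;
  case Kind::B:
    return 2;
  }
  // Placed after the switch instead of in a `default:` case; every enumerator
  // is already covered above.
  __builtin_unreachable();
}

int main() { return handle(Kind::A) == 1 ? 0 : 1; }
```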
Eric Christopher 3d18ce7154 Remove an unused static function. 2019-12-26 18:34:14 -08:00
Eric Christopher 3009cee75f Fix a -Wcovered-switch-default warning by moving the unreachable out of the
covered switch.
2019-12-26 18:29:54 -08:00
Eric Christopher 30617e4b9c Remove unused static function. 2019-12-26 18:20:15 -08:00
Mehdi Amini b52cb5688b Add a clang-tidy configuration file for MLIR, it is using camelBack for naming at the moment 2019-12-26 21:42:01 +00:00
chelxom d2a8e14177 Fix the MLIR Vim syntax file: the keyword group was missing 2019-12-26 04:50:38 +00:00
Hideto Ueno 1497a4350e [MLIR][NFC] Insert const_cast to avoid warning
Reviewers: rriddle

Reviewed By: rriddle

Subscribers: mehdi_amini

Differential Revision: https://reviews.llvm.org/D71853
2019-12-25 15:51:26 +09:00
Fangrui Song 020ca0cf2f [mlir] Fix -Wunneeded-internal-declaration 2019-12-24 10:33:30 -08:00
Sylvestre Ledru 95b69a7082 mlir README.md: Fix the syntax 2019-12-24 13:31:07 +01:00
Mehdi Amini 34766da067 Add the Apache2 with LLVM exceptions license to MLIR
It seems that every subproject has a license file instead of having a top-level one.
2019-12-24 00:58:06 -08:00
Mehdi Amini c6a5534ea4 Remove static MLIR doc ; they are already on the website 2019-12-24 00:53:35 -08:00
River Riddle 1399281d58 NFC: Rename printOptionValue to printValue to fix MSVC build.
MSVC has trouble resolving the static 'printOptionValue' from the method on llvm::cl::opt/list. This change renames the static method to avoid this conflict.
2019-12-23 19:35:25 -08:00
Mehdi Amini 5b4a01d4a6 Adjust some MLIR paths and docs 2019-12-24 02:23:01 +00:00
Mehdi Amini ac6dce12e0 Remove pybind11-based bindings
These bindings were added as an experiment, and never had a CMake configuration.
We will bring back python bindings after picking carefully our dependency and the kind
of layering we expect to expose for these bindings.

PiperOrigin-RevId: 286963717
2019-12-23 17:44:06 -08:00
River Riddle 21610e6651 Refactor the way that pass options are specified.
This change refactors pass options to be more similar to how statistics are modeled. More specifically, the options are specified directly on the pass instead of in a separate options class. (Note that the behavior and specification for pass pipelines remains the same.) This brings about several benefits:
* The specification of options is much simpler
* The round-trip format of a pass can be generated automatically
* This gives a somewhat deeper integration with "configuring" a pass, which we could potentially expose to users in the future.

PiperOrigin-RevId: 286953824
2019-12-23 16:48:22 -08:00
River Riddle e62a69561f NFC: Replace ValuePtr with Value and remove it now that Value is value-typed.
ValuePtr was a temporary typedef during the transition to a value-typed Value.

PiperOrigin-RevId: 286945714
2019-12-23 16:36:53 -08:00
River Riddle 5d5bd2e1da Change the `notifyRootUpdated` API to be transaction based.
This means that in-place, or root, updates need to use explicit calls to `startRootUpdate`, `finalizeRootUpdate`, and `cancelRootUpdate`. The major benefit of this change is that it enables in-place updates in DialectConversion, which simplifies the FuncOp pattern for example. The major downside to this is that the cases that *may* modify an operation in-place will need an explicit cancel on the failure branches(assuming that they started an update before attempting the transformation).

PiperOrigin-RevId: 286933674
2019-12-23 16:26:15 -08:00
Lei Zhang a5d5d29125 Update SPIR-V.md
This CL updates SPIR-V.md to reflect recent developments
in the SPIR-V dialect and its conversions.

Along the way, also updates the doc for define_inst.sh.

PiperOrigin-RevId: 286933546
2019-12-23 16:15:52 -08:00
River Riddle ab46543ceb Resubmit: ReImplement the Value classes as value-typed objects wrapping an internal pointer storage.
This will enable future commits to reimplement the internal implementation of OpResult without needing to change all of the existing users. This is part of a chain of commits optimizing the size of operation results.

PiperOrigin-RevId: 286930047
2019-12-23 16:05:05 -08:00
MLIR Team 268365ab01 Automated rollback of commit f603a50109
PiperOrigin-RevId: 286924059
2019-12-23 15:54:44 -08:00
River Riddle f603a50109 ReImplement the Value classes as value-typed objects wrapping an internal pointer storage.
This will enable future commits to reimplement the internal implementation of OpResult without needing to change all of the existing users. This is part of a chain of commits optimizing the size of operation results.

PiperOrigin-RevId: 286919966
2019-12-23 15:44:00 -08:00
Mehdi Amini 56222a0694 Adjust License.txt file to use the LLVM license
PiperOrigin-RevId: 286906740
2019-12-23 15:33:37 -08:00
River Riddle 35807bc4c5 NFC: Introduce new ValuePtr/ValueRef typedefs to simplify the transition to Value being value-typed.
This is an initial step to refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ

This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.

PiperOrigin-RevId: 286844725
2019-12-22 22:00:23 -08:00
Manuel Freiberger 22954a0e40 Add integer bit-shift operations to the standard dialect.
Rename the 'shlis' operation in the standard dialect to 'shift_left'. Add tests
for this operation (these have been missing so far) and add a lowering to the
'shl' operation in the LLVM dialect.

Add also 'shift_right_signed' (lowered to LLVM's 'ashr') and 'shift_right_unsigned'
(lowered to 'lshr').

The original plan was to name these operations 'shift.left', 'shift.right.signed'
and 'shift.right.unsigned'. This works if the operations are prefixed with 'std.'
in MLIR assembly. Unfortunately, during import the short form is ambiguous with
operations from a hypothetical 'shift' dialect. The best solution seems to be to
omit dots in standard operations for now.

Closes tensorflow/mlir#226

PiperOrigin-RevId: 286803388
2019-12-22 10:02:13 -08:00
Alex Zinenko dcc14f0865 Make Type and Attribute classes trivially copyable
This requires using explicitly default copy constructor and copy assignment
operator instead of hand-rolled ones. These classes are indeed cheap to copy
since they are wrappers around a pointer to the implementation. This change
makes sure templated code can use standard type traits to understand that
copying such objects is cheap and appeases analysis tools such as clang-tidy.

PiperOrigin-RevId: 286725565
2019-12-21 09:45:24 -08:00
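A minimal sketch of the pattern described above, simplified to a bare pointer wrapper with hypothetical names: explicitly defaulted copy operations keep the wrapper trivially copyable, which is what standard type traits and tools like clang-tidy check for:

```
#include <type_traits>

class Type {
public:
  explicit Type(void *impl) : impl(impl) {}
  // Explicitly defaulted instead of hand-rolled, so the class stays trivially
  // copyable even though copying is just a pointer copy either way.
  Type(const Type &) = default;
  Type &operator=(const Type &) = default;

private:
  void *impl;
};

static_assert(std::is_trivially_copyable<Type>::value,
              "wrapper around an implementation pointer is cheap to copy");

int main() { return 0; }
```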
River Riddle ee71ca1d5c NFC: Move the classes related to Pass options into a new header file PassOptions.h
This will make refactoring and adding additional features to the pass options infrastructure simpler in followup commits.

PiperOrigin-RevId: 286687564
2019-12-20 22:45:52 -08:00
Aart Bik 1d47564a53 [VectorOps] unify vector dialect "subscripts"
PiperOrigin-RevId: 286650682
2019-12-20 15:33:04 -08:00
Aart Bik 67c019ddac [VectorOps] remove redundant returns from invalid ops test
PiperOrigin-RevId: 286640660
2019-12-20 14:27:42 -08:00
Uday Bondhugula e5691c512f fix isValidDim for block arg case
- a block argument associated with an arbitrary op can't be a valid
  dimensional identifier; it has to be the block argument of either
  a function op or an affine.for.

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#331

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/331 from bondhugula:valid_dim 3273b4fcbaa31fb7b6671d93c9e42a6b2a6a4e4c
PiperOrigin-RevId: 286593693
2019-12-20 09:44:03 -08:00
Christian Sigg 42d46b4efa Add gpu.shuffle op.
This will allow us to lower most of gpu.all_reduce (when all_reduce
doesn't exist in the target dialect) within the GPU dialect, and only do
target-specific lowering for the shuffle op.

PiperOrigin-RevId: 286548256
2019-12-20 02:52:52 -08:00
Frank Laub 7811ad3c2b Allow dialect to create friendly names for region arguments
This is the block argument equivalent of the existing `getAsmResultNames` hook.

Closes tensorflow/mlir#329

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/329 from plaidml:flaub-region-arg-names fc7876f2d1335024e441083cd25263fd6247eb7d
PiperOrigin-RevId: 286523299
2019-12-19 22:16:07 -08:00
Jacques Pienaar b6d54a1ba3 Unique trait list during ODS Operator trait construction
Concatting lists in TableGen is easy, creating unique lists less so. There is no reason for duplicated op traits so we could throw an error instead, but duplicates could occur due to concatting different lists of traits in ODS (e.g., for convenience reasons), so just dedup them during Operator trait construction instead.

PiperOrigin-RevId: 286488423
2019-12-19 16:44:56 -08:00
Andy Davis 8020ad3e39 [VectorOps] Update vector transfer_read/write ops to operate on memrefs with vector element type.
Update vector transfer_read/write ops to operate on memrefs with vector element type.
This handles cases where the memref vector element type represents the minimal memory transfer unit (or a multiple of the minimal memory transfer unit).

PiperOrigin-RevId: 286482115
2019-12-19 16:05:32 -08:00
Nicolas Vasilache 6685282253 Restructure and update Linalg ODS and documentation - NFC
This CL allows specifying an additional name for the .td file that is used to generate the doc for a dialect. This is necessary for a dialect like Linalg which has different "types" of ops that are used in different contexts.

This CL also restructures the Linalg documentation and renames LinalgLibraryOps -> LinalgStructuredOps but is otherwise NFC.

PiperOrigin-RevId: 286450414
2019-12-19 13:17:35 -08:00
Andy Davis 1d798b1d27 [VectorOps] Add vector ReshapeOp to the VectorOps dialect.
Adds vector ReshapeOp to the VectorOps dialect. An aggregate vector reshape operation, which aggregates multiple hardware vectors, can enable optimizations during decomposition (e.g. loading one input hardware vector and performing multiple rotate and scatter store operations to the vector output).

PiperOrigin-RevId: 286440658
2019-12-19 12:27:59 -08:00
Alex Zinenko 1bcd8ef32f LLVMFuncOp: implement addEntryBlock
This function has been declared as a part of the LLVMFuncOp interface but never
implemented.

Closes tensorflow/mlir#325.

PiperOrigin-RevId: 286439619
2019-12-19 12:16:51 -08:00
Aart Bik 15f800f4bc [VectorOps] minor cleanup: vector dialect "subscripts" are i32
Introduces some centralized methods to move towards
consistent use of i32 as vector subscripts.

Note: sizes/strides/offsets attributes are still i64
PiperOrigin-RevId: 286434133
2019-12-19 11:51:08 -08:00
Alex Zinenko efadb6b838 Detemplatize ModuleTranslation::lookupValues
This function template was introduced in the early days of MLIR to work
around the absence of a common type for ranges of values (operands, block
arguments, vectors, etc). Core IR now provides ValueRange for exactly this
purpose. Use it instead of the template parameter.

PiperOrigin-RevId: 286431338
2019-12-19 11:35:57 -08:00
Nicolas Vasilache 50f9be6d2d Add runtime utils support for print_memref_i8
This CL adds print_memref_i8 along with a unit test.

PiperOrigin-RevId: 286299237
2019-12-18 17:32:35 -08:00
Aart Bik a1e84db66e [VectorOps] Replace iostream with stdio in support lib for vector.print
PiperOrigin-RevId: 286252829
2019-12-18 13:24:30 -08:00
Sean Silva 553f794b6f Add a couple useful LLVM_DEBUG's to the inliner.
This makes it easier to narrow down on ops that are preventing inlining.

PiperOrigin-RevId: 286243868
2019-12-18 12:33:30 -08:00
River Riddle 7b3adda8f4 Move the specializations of VectorTransferRewriter::matchAndRewrite back into the anonymous namespace.
This appeases the GCC bug related to specializations in a different namespace.

PiperOrigin-RevId: 286234667
2019-12-18 11:53:57 -08:00
Marcel Koester 6054610bbe Added LLVM ops and lowering phases from standard dialect for FAbs, FCeil, Cos, FNeg, CopySign.
Added test cases for the newly added LLVM operations and lowering features.

Closes tensorflow/mlir#300

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/300 from dfki-jugr:std_to_llvm da6168bbc1a369ae2e99ad3881fdddd82f075dd4
PiperOrigin-RevId: 286231169
2019-12-18 11:42:43 -08:00
Aart Bik d9b500d3bb [VectorOps] Add vector.print definition, with lowering support
Examples:

  vector.print %f : f32
  vector.print %x : vector<4xf32>
  vector.print %y : vector<3x4xf32>
  vector.print %z : vector<2x3x4xf32>

LLVM lowering replaces these with fully unrolled calls
into a small runtime support library that provides some
basic printing operations (single value, opening closing
bracket, comma, newline).

PiperOrigin-RevId: 286230325
2019-12-18 11:31:34 -08:00
River Riddle c169852fc5 NFC: Remove forbidden include of <iostream>
See: https://llvm.org/docs/CodingStandards.html#include-iostream-is-forbidden
PiperOrigin-RevId: 286226467
2019-12-18 11:20:31 -08:00
River Riddle 29807ff5e4 Add support for providing a default implementation for an interface method.
This enables providing a default implementation of an interface method. This method is defined on the Trait that is attached to the operation, and thus has all of the same constraints and properties as any other interface method. This allows for interface authors to provide a conservative default implementation for certain methods, without requiring that all users explicitly define it. The default implementation can be specified via the argument directly after the interface method body:

  StaticInterfaceMethod<
    /*desc=*/"Returns whether two array of types are compatible result types for an op.",
    /*retTy=*/"bool",
    /*methodName=*/"isCompatibleReturnTypes",
    /*args=*/(ins "ArrayRef<Type>":$lhs, "ArrayRef<Type>":$rhs),
    /*methodBody=*/[{
      return ConcreteOp::isCompatibleReturnTypes(lhs, rhs);
    }],
    /*defaultImplementation=*/[{
      /// Returns whether two arrays are equal as strongest check for
      /// compatibility by default.
      return lhs == rhs;
    }]

PiperOrigin-RevId: 286226054
2019-12-18 11:09:11 -08:00
Jacques Pienaar d7e2cc9bd1 Update code block designations
'```mlir' is used to indicate the code block is MLIR code and should use MLIR syntax
highlighting, while '{.mlir}' was a markdown extension that used a style file
to color the background of the code block differently. The background color
extension was a custom one that we can retire given we have syntax
highlighting.

Also change '```td' to '```tablegen' to match chroma syntax highlighting
designation.

PiperOrigin-RevId: 286222976
2019-12-18 10:57:59 -08:00
River Riddle 2666b97314 NFC: Cleanup non-conforming usages of namespaces.
* Fixes use of anonymous namespace for static methods.
* Uses explicit qualifiers(mlir::) instead of wrapping the definition with the namespace.

PiperOrigin-RevId: 286222654
2019-12-18 10:46:48 -08:00
Uday Bondhugula 47034c4bc5 Introduce prefetch op: affine -> std -> llvm intrinsic
Introduce affine.prefetch: an op to prefetch using a multi-dimensional
subscript on a memref; similar to affine.load but with no effect on
semantics, only on performance.

Provide lowering through std.prefetch, llvm.prefetch and map to llvm's
prefetch intrinsic. All attributes are reflected through the lowering -
locality hint, rw, and instr/data cache.

  affine.prefetch %0[%i, %j + 5], false, 3, true : memref<400x400xi32>

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#225

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/225 from bondhugula:prefetch 4c3b4e93bc64d9a5719504e6d6e1657818a2ead0
PiperOrigin-RevId: 286212997
2019-12-18 10:00:04 -08:00
River Riddle 4562e389a4 NFC: Remove unnecessary 'llvm::' prefix from uses of llvm symbols declared in `mlir` namespace.
Aside from being cleaner, this also makes the codebase more consistent.

PiperOrigin-RevId: 286206974
2019-12-18 09:29:20 -08:00
Alex Zinenko 24ab8362f2 Move function template definition to the header file. NFC
The definition of the function template LLVM::ModuleTranslation::lookupValues
has been located in a source file. As long as it has been the only file that
actually called into the function, this did not cause any problem. However, it
creates linking issues if the function is used from other translation units.

PiperOrigin-RevId: 286203078
2019-12-18 09:10:23 -08:00
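A small sketch of the linkage issue described above, using a generic counting template with hypothetical names: a template's body must be visible in every translation unit that instantiates it, so it belongs in the header rather than a single .cpp file:

```
#include <vector>

// If this definition lived only in one .cpp file, another translation unit
// calling countValues<std::vector<int>> would see just a declaration, get no
// instantiation, and fail at link time.
template <typename Range>
int countValues(const Range &range) {
  int n = 0;
  for (const auto &value : range) {
    (void)value;
    ++n;
  }
  return n;
}

int main() { return countValues(std::vector<int>{1, 2, 3}) == 3 ? 0 : 1; }
```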
Jacques Pienaar abcf5ff0cc Fix line break in LangRef
This was munging up the example with the text.

PiperOrigin-RevId: 286201762
2019-12-18 08:59:10 -08:00
Alex Zinenko 40ef46fba4 Harden the requirements to memory attribution types in gpu.func
When memory attributions are present in `gpu.func`, require that they are of
memref type and live in memory spaces 3 and 5 for workgroup and private memory
attributions, respectively. Adapt the conversion from the GPU dialect to the
NVVM dialect to drop the private memory space from attributions as NVVM is able
to model them as local `llvm.alloca`s in the default memory space.

PiperOrigin-RevId: 286161763
2019-12-18 03:38:55 -08:00
MLIR Team c6c6a74d55 Add support for float and string attributes to the C API and python bindings
PiperOrigin-RevId: 286115042
2019-12-17 20:19:16 -08:00
River Riddle 5a0d4803f7 NFC: Use this-> to appease GCC bug related to template lambda.
GCC is unable to properly implicitly capture 'this' in generic lambdas. This bug is not fixed until 7.1.0:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67274

PiperOrigin-RevId: 286083427
2019-12-17 16:19:47 -08:00
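A minimal sketch of the workaround, assuming a generic lambda that reads a member (names hypothetical): spelling out this-> avoids the implicit-capture bug in GCC releases before 7.1:

```
struct Counter {
  int value = 42;

  int read() {
    // Old GCC mis-handles the implicit use of `this` inside a generic lambda;
    // writing `this->value` explicitly sidesteps the bug.
    auto get = [this](auto /*unused*/) { return this->value; };
    return get(0);
  }
};

int main() {
  Counter c;
  return c.read() == 42 ? 0 : 1;
}
```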
River Riddle 74278dd01e NFC: Use TypeSwitch to simplify existing code.
PiperOrigin-RevId: 286066371
2019-12-17 14:57:41 -08:00
Andy Davis 6fa3bd5b3e Add pattern rewrite which splits a vector TransferWriteOp into slices according to the unrolling/slicing scheme of its InsertSlicesOp operand.
PiperOrigin-RevId: 286042578
2019-12-17 13:17:10 -08:00
Mahesh Ravishankar 319cca3bbe Add missing virtual inliner interface method in SPIR-V dialect.
The inline interface uses two methods to check legality of inlining:
1) Can a region be inlined into another.
2) Can an operation be inlined into another.
Setting the former to true allows the inliner to use the second for
legality checks. Add this method to the SPIR-V dialect inlining
interface.

PiperOrigin-RevId: 286041734
2019-12-17 13:06:05 -08:00
Alex Zinenko 42b3fe8335 Make it possible to override the lowering of MemRef to the LLVM dialect. NFC.
The lowering of MemRef types to the LLVM dialect is connected to the underlying
runtime representation of structured memory buffers. It has changed several
times in the past and reached the current state of an LLVM structured-typed
descriptor containing two pointers and all sizes. In several reported use
cases, a different, often simpler, lowering scheme is required. For example,
lowering statically-shaped memrefs to bare LLVM pointers simplifies aliasing
annotation. Split the pattern population functions into those that include
memref-related operations and the remaining ones. Users are expected to extend
TypeConverter::convertType to handle the memref types differently.
PiperOrigin-RevId: 286030610
2019-12-17 12:10:04 -08:00
Alex Zinenko 62f498dcb7 ConversionToLLVMDialect doc: update the syntax for LLVM types
The syntax for LLVM dialect types changed twice since this document was
introduced. First, the quoted types are only prefixed with the dialect name
`!llvm` rather than with `!llvm.type`. Second, for types that are simple enough
(e.g., MLIR identifiers), the pretty form can be used instead of the quoted
form. The relevant commits updated the dialect documentation, but not the
conversion documentation. Use the valid type names in the conversion
documentation.

PiperOrigin-RevId: 286026153
2019-12-17 11:55:11 -08:00
Alex Zinenko 0bdc72d2df StdToLLVM conversion: drop getMemRefElementType utility function
This function has become redundant with MemRefDescriptor::getElementType and is
no longer necessary. Use the MemRefDescriptor pervasively to concentrate
descriptor-related logic in one place and drop the utility function.

PiperOrigin-RevId: 286024168
2019-12-17 11:43:59 -08:00
Alex Zinenko 651eaa03e8 Homogenize the description of the MemRef conversion to the LLVM dialect
The conversion procedure has been updated to reflect the most recent MemRef
descriptor proposal, but the documentation was only updated for the type
conversion, omitting the address computation section. Make sure the two
sections agree.

PiperOrigin-RevId: 286022684
2019-12-17 11:32:50 -08:00
Andy Davis d1fb285b32 Add pattern rewrite to forward vector tuple elements to their users.
User(TupleGetOp(ExtractSlicesOp(InsertSlicesOp(TupleOp(Producer))) -> User(Producer)

PiperOrigin-RevId: 286020249
2019-12-17 11:21:45 -08:00
Jin Mingjian 9f45a22441 fix a typo in OpDefinitions doc
[{ matched with }], rather than ]}

Closes tensorflow/mlir#320

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/320 from jinmingjian:patch-1 6b0870d02284f023bda2b28380960eb31d34f3b6
PiperOrigin-RevId: 286007638
2019-12-17 10:25:52 -08:00
River Riddle f44cf23297 Add a new utility class TypeSwitch to ADT.
This class provides a simplified mechanism for defining a switch over a set of types using llvm casting functionality. More specifically, this allows for defining a switch over a value of type T where each case corresponds to a type(CaseT) that can be used with dyn_cast<CaseT>(...). An example is shown below:

// Traditional piece of code:
Operation *op = ...;
if (auto constant = dyn_cast<ConstantOp>(op))
  ...;
else if (auto return = dyn_cast<ReturnOp>(op))
  ...;
else
  ...;

// New piece of code:
Operation *op = ...;
TypeSwitch<Operation *>(op)
  .Case<ConstantOp>([](ConstantOp constant) { ... })
  .Case<ReturnOp>([](ReturnOp return) { ... })
  .Default([](Operation *op) { ... });

Aside from the above, TypeSwitch supports return values, void return, multiple types per case, etc. The usability is intended to be very similar to StringSwitch.

(Using c++14 template lambdas makes everything even nicer)
More complex example of how this makes certain things easier:
LogicalResult process(Constant op);
LogicalResult process(ReturnOp op);
LogicalResult process(FuncOp op);

TypeSwitch<Operation *, LogicalResult>(op)
  .Case<ConstantOp, ReturnOp, FuncOp>([](auto op) { return process(op); })
  .Default([](Operation *op) { return op->emitError() << "could not be processed"; });

PiperOrigin-RevId: 286003613
2019-12-17 10:08:06 -08:00
MLIR Team 6e581e29a4 Integrate from upstream at revision e4fce659a7.
PiperOrigin-RevId: 285982330
2019-12-17 08:13:14 -08:00
Andy Davis 038ad1d856 Add pattern rewrite which splits a vector TransferReadOp into slices according to the unrolling/slicing scheme of its ExtractSlicesOp user.
PiperOrigin-RevId: 285975613
2019-12-17 07:29:06 -08:00
Tres Popp 8d68fe684e Replace code with equivalent satisfiesLLVMModule() function call.
This is a general code cleanup and should be an NFC.

PiperOrigin-RevId: 285972718
2019-12-17 07:05:40 -08:00
Andy Davis 4e825c59be Update vector op unrolling transformation to generate ExtractSlicesOp and InsertSlicesOp (instead of a less structured chain of StridedSliceOps and InsertStridedSliceOps).
PiperOrigin-RevId: 285968051
2019-12-17 06:27:01 -08:00
Mahesh Ravishankar 80ec474a65 Add atomic operations to SPIR-V dialect.
Some changes to the dialect generation script to allow specification
of different base class to derive from in ODS.

PiperOrigin-RevId: 285859230
2019-12-16 15:05:51 -08:00
Mahesh Ravishankar a0557ea9d6 Fix (de)serialization generation for SPV_ScopeAttr, SPV_MemorySemanticsAttr.
Scope and Memory Semantics attributes need to be serialized as a
constant integer value and the <id> needs to be used to specify the
value. Fix the auto-generated SPIR-V (de)serialization to handle this.

PiperOrigin-RevId: 285849431
2019-12-16 14:23:08 -08:00
Lei Zhang 659150b570 [spirv] Re-enable nested loop (de)serialization test
PiperOrigin-RevId: 285849308
2019-12-16 14:21:52 -08:00
Nicolas Vasilache 3c179b6575 Add edsc::ops for pointwise, conv and dilated_conv
This CL adds more Linalg EDSC ops and tests to support building pointwise operations along with conv and dilated_conv.
This also fixes a bug in the existing linalg_matmul EDSC and beefs up the test.

The current set of ops is already enough to build an interesting, albeit simple, model used internally.

PiperOrigin-RevId: 285838012
2019-12-16 13:42:38 -08:00
Andy Davis 11e92875f0 Add InsertSlicesOp to the VectorOps dialect.
PiperOrigin-RevId: 285830394
2019-12-16 12:56:38 -08:00
Alex Zinenko 6273fa0c6a Plug gpu.func into the GPU lowering pipelines
This updates the lowering pipelines from the GPU dialect to lower-level
dialects (NVVM, SPIRV) to use the recently introduced gpu.func operation
instead of a standard function annotated with an attribute. In particular, the
kernel outlining is updated to produce gpu.func instead of std.func and the
individual conversions are updated to consume gpu.funcs and disallow standard
funcs after legalization, if necessary. The attribute "gpu.kernel" is preserved
in the generic syntax, but can also be used with the custom syntax on
gpu.funcs. The special kind of function for GPU allows one to use additional
features such as memory attribution.

PiperOrigin-RevId: 285822272
2019-12-16 12:12:48 -08:00
River Riddle ab610e8a99 Insert signature-converted blocks into a region with a parent operation.
This keeps the IR valid and consistent as it is expected that each block should have a valid parent region/operation. Previously, converted blocks were kept floating without a valid parent region.

PiperOrigin-RevId: 285821687
2019-12-16 12:09:45 -08:00
Alex Zinenko ed749b7689 Make "LowerToCFG" an operation pass
The conversion from the Loops dialect to the Standard dialect, also known as
loop-to-cfg lowering, has historically been a function pass. It can be required
on non-Standard function Ops, in particular the recently introduced GPU
functions. Make the conversion an operation pass instead of a function pass.

PiperOrigin-RevId: 285814560
2019-12-16 11:36:02 -08:00
Jose Ignacio Gomez 3ae56c4135 [Linalg] Expose subview promotion as a declarative pattern
This PR targets issue tensorflow/mlir#295. It exposes the already existing
subview promotion pass as a declarative pattern.

Change-Id: If901ebef9fb53fcd0b12ecc536f6b174ce320b92

Closes tensorflow/mlir#315

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/315 from tetuante:issue295 8e5f268b6d85f31015c33505329dbd7a4db97ac5
PiperOrigin-RevId: 285801463
2019-12-16 10:50:45 -08:00
Mehdi Amini c290e993b2 Remove unused variable (fix warning) NFC
PiperOrigin-RevId: 285799680
2019-12-16 10:28:44 -08:00
Aart Bik cd5dab8ad7 [VectorOps] Add [insert/extract]element definition together with lowering to LLVM
Similar to insert/extract vector instructions but
(1) work on 1-D vectors only
(2) allow for a dynamic index

  %c3 = constant 3 : index
  %0 = vector.insertelement %arg0, %arg1[%c : index] : vector<4xf32>
  %1 = vector.extractelement %arg0[%c3 : index] : vector<4xf32>

PiperOrigin-RevId: 285792205
2019-12-16 09:52:46 -08:00
Andy Davis 73ec37c8bb Adds ExtractSlicesOp to the VectorOps dialect.
ExtractSlicesOp extracts slices of its vector operand with a specified tiling scheme.
This operation centralizes the tiling scheme around a single op, which simplifies vector op unrolling and subsequent pattern rewrite transformations.

PiperOrigin-RevId: 285761129
2019-12-16 06:39:09 -08:00
Alex Zinenko 0684aa9a8b Make memref promotion during std->LLVM lowering the default calling convention
During the conversion from the standard dialect to the LLVM dialect,
memref-typed arguments are promoted from registers to memory and passed into
functions by pointer. This had been introduced into the lowering to work around
the absence of calling convention modeling in MLIR to enable better
interoperability with LLVM IR generated from C, and has been exercised for
several months. Make this promotion the default calling convention when
converting to the LLVM dialect. This adds the documentation, simplifies the
code and makes the conversion consistent across function operations and
function types used in other places, e.g. in high-order functions or
attributes, which would not follow the same rule previously.

PiperOrigin-RevId: 285751280
2019-12-16 05:17:14 -08:00
Tres Popp 44fc7d72b3 Remove LLVM dependency on mlir::Module and instead check Traits.
PiperOrigin-RevId: 285724678
2019-12-16 01:45:44 -08:00
Uday Bondhugula 97af932272 Splat op doc - fix misformat / update tablegen op desc. comment
- bring op description comment in sync with the doc
- fix misformat in doc

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#317

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/317 from bondhugula:quickfix 7fcd945b318c973b2488b702874c87526855c8ef
PiperOrigin-RevId: 285574527
2019-12-14 11:22:24 -08:00
Smit Hinsu 2d22b1e04e Add verifyCompatibleShape function overload with shapes
PiperOrigin-RevId: 285574334
2019-12-14 11:18:38 -08:00
Nicolas Vasilache 3ef15a80d2 Reconcile struct and class for NestedPatternMatchers - NFC
This removes a warning and fixes a potential ABI issue on Windows.

PiperOrigin-RevId: 285502010
2019-12-13 17:51:15 -08:00
Nicolas Vasilache 200beb8446 Apply a level of sugaring to the linalg.generic EDSC - NFC
Make the declarative C++ builder API simpler to use so we can start chaining these ops together.

PiperOrigin-RevId: 285496266
2019-12-13 17:39:46 -08:00
River Riddle 7ac42fa26e Refactor various canonicalization patterns as in-place folds.
This is more efficient, and allows for these to fire in more situations: e.g. createOrFold, DialectConversion, etc.

PiperOrigin-RevId: 285476837
2019-12-13 17:19:02 -08:00
Jing Pu 27ae92516b Skip generating C++ for "DeclareOpInterfaceMethods" in op interface gen.
This is needed for calling the generator on a .td file that contains both OpInterface definitions and op definitions with DeclareOpInterfaceMethods<...> Traits.

PiperOrigin-RevId: 285465784
2019-12-13 17:08:33 -08:00
Nicolas Vasilache 7923abd357 Add a layer of EDSC for linalg.GenericOp
This will be evolved into a simple programming model for custom ops and custom layers in followup CLs.

This CL also deletes the obsolete tablegen's reference-impl.td that was using EDSCs.

PiperOrigin-RevId: 285459545
2019-12-13 16:57:57 -08:00
River Riddle b030e4a4ec Try to fold operations in DialectConversion when trying to legalize.
This change allows for DialectConversion to attempt folding as a mechanism to legalize illegal operations. This also expands folding support in OpBuilder::createOrFold to generate new constants when folding, and also enables it to work in the context of a PatternRewriter.

PiperOrigin-RevId: 285448440
2019-12-13 16:47:26 -08:00
Prakalp Srivastava 7b19d73617 Add a type range for the XLA HLO dialect.
PiperOrigin-RevId: 285437835
2019-12-13 16:36:21 -08:00
Christian Sigg 8846557672 Fix maskAndClamp in gpu.all_reduce.
The clamp value determines the returned predicate. Previously, the clamp value was fixed to 31 and the predicate was therefore always true. This is incorrect for partial warp reductions, but went unnoticed because the returned values happened to be zero (but it could be anything).

PiperOrigin-RevId: 285343160
2019-12-13 15:28:58 -08:00
River Riddle e7aa47ff11 NFC: Cleanup the various Op::print methods.
This cleans up the implementation of the various operation print methods. This is done via a combination of code cleanup, adding new streaming methods to the printer(e.g. operand ranges), etc.

PiperOrigin-RevId: 285285181
2019-12-12 15:32:21 -08:00
Jacques Pienaar a50cb184a0 Fix logic on when to emit collective type but separate arg builder
Got the comment right but the code wrong :/

PiperOrigin-RevId: 285270561
2019-12-12 14:23:14 -08:00
Aart Bik 1c81adf362 [VectorOps] Add lowering of vector.shuffle to LLVM IR
For example, a shuffle

%1 = vector.shuffle %arg0, %arg1 [0 : i32, 1 : i32] : vector<2xf32>, vector<2xf32>

becomes a direct LLVM shuffle

0 = llvm.shufflevector %arg0, %arg1 [0 : i32, 1 : i32] : !llvm<"<2 x float>">, !llvm<"<2 x float>">

but

%1 = vector.shuffle %a, %b[1 : i32, 0 : i32, 2: i32] : vector<1x4xf32>, vector<2x4xf32>

becomes the more elaborate (note the index permutation that drives
argument selection for the extract operations)

%0 = llvm.mlir.undef : !llvm<"[3 x <4 x float>]">
%1 = llvm.extractvalue %arg1[0] : !llvm<"[2 x <4 x float>]">
%2 = llvm.insertvalue %1, %0[0] : !llvm<"[3 x <4 x float>]">
%3 = llvm.extractvalue %arg0[0] : !llvm<"[1 x <4 x float>]">
%4 = llvm.insertvalue %3, %2[1] : !llvm<"[3 x <4 x float>]">
%5 = llvm.extractvalue %arg1[1] : !llvm<"[2 x <4 x float>]">
%6 = llvm.insertvalue %5, %4[2] : !llvm<"[3 x <4 x float>]">

PiperOrigin-RevId: 285268164
2019-12-12 14:11:56 -08:00
Jacques Pienaar 41a73ddce8 Add type inference variant for separate params builder generated
Add a variant that invokes the infer type op interface where defined. Also add an entry function that invokes the different separate argument builders for the wrapped, unwrapped and inference variants.

PiperOrigin-RevId: 285220709
2019-12-12 10:36:14 -08:00
Nicolas Vasilache 782ae29678 Retire !linalg.buffer type - NFC
This type is not used anymore now that Linalg view and subview have graduated to std and that alignment is supported on alloc.

PiperOrigin-RevId: 285213424
2019-12-12 10:03:57 -08:00
Alexander Belyaev 1b579d998a [Linalg] Add test for fusion of GenericOp with IndexedGenericOp.
PiperOrigin-RevId: 285211797
2019-12-12 09:56:45 -08:00
Ehsan Toosi f7bffad5a7 Added lowering of `std.tanh` to llvm function call to `tanh` and `tanhf`.
Closes tensorflow/mlir#312

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/312 from dfki-ehna:tanh 9e89b072ff91ff390ad739501745114feb3ac856
PiperOrigin-RevId: 285205674
2019-12-12 09:25:15 -08:00
Nicolas Vasilache 95b5a4fd67 Move cpu runner utils templates to .h
This allows reusing the implementation in various places by just including the header, and permits writing test functions more easily without explicit template instantiations.

This also modifies UnrankedMemRefType to take a template type parameter since it cannot be type agnostic at the moment.

PiperOrigin-RevId: 285187711
2019-12-12 07:33:09 -08:00
Christian Sigg 9b85582682 Automated rollback of commit f68ac464d8
PiperOrigin-RevId: 285162061
2019-12-12 03:48:38 -08:00
Christian Sigg f68ac464d8 Switch from shfl.bfly to shfl.down.
Both work for the current use case, but the latter allows implementing
prefix sums and is a little easier to understand for partial warps.

PiperOrigin-RevId: 285145287
2019-12-12 01:28:01 -08:00
River Riddle 851a8516d3 Make OpBuilder::insert virtual instead of OpBuilder::createOperation.
It is sometimes useful to create operations separately from the builder before insertion as it may be easier to erase them in isolation if necessary. One example use case for this is folding, as we will only want to insert newly generated constant operations on success. This has the added benefit of fixing some silent PatternRewriter failures related to cloning, as the OpBuilder 'clone' methods don't call createOperation.

PiperOrigin-RevId: 285086242
2019-12-11 16:26:45 -08:00
Nicolas Vasilache 9dfa84a269 Add std.log* and llvm.intr.log* that correspond to the LLVMIR intrinsics
PiperOrigin-RevId: 285073483
2019-12-11 15:25:34 -08:00
Mahesh Ravishankar b909299d20 Add missing CMake dependency for MLIRTestIR.
PiperOrigin-RevId: 285039153
2019-12-11 12:44:42 -08:00
Nicolas Vasilache beda0b2dc8 Fix OSS build
PiperOrigin-RevId: 285036782
2019-12-11 12:33:37 -08:00
Mahesh Ravishankar 652fc261d7 Expose a convenience function to add interface attributes to a function.
PiperOrigin-RevId: 285036647
2019-12-11 12:21:42 -08:00
Denis Khalikov d968f9696d [spirv] Add lowering for std.fdiv, std.frem, std.fsub
Closes tensorflow/mlir#313

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/313 from denis0x0D:sandbox/lowering_std_farith 41715070a74d13bfa9401957478978c1bb8006c0
PiperOrigin-RevId: 285023586
2019-12-11 11:17:35 -08:00
Nicolas Vasilache 508d4e672e Continue refactoring StructuredOps utilities
This CL adds more common information to StructuredOpsUtils.h
The n_view attribute is retired in favor of args_in + args_out but the CL is otherwise NFC.

PiperOrigin-RevId: 285000621
2019-12-11 09:27:34 -08:00
Christian Sigg c5fb4c1303 NFC: Fix naming inconsistency: FuncOpLowering -> GPUFuncOpLowering.
Remove nested anonymous namespace.

PiperOrigin-RevId: 284987357
2019-12-11 08:24:58 -08:00
Alexander Belyaev 4b0198acb5 Roll-forward initial liveness analysis including test cases.
Fix the usage of the map size when appending to the map with [].

PiperOrigin-RevId: 284985916
2019-12-11 08:13:43 -08:00
Alexander Belyaev 984fdde269 Automated rollback of commit 98fbf41044
PiperOrigin-RevId: 284979684
2019-12-11 07:17:21 -08:00
Stephan Herhut b96f86daaf Add a function to get lowering patterns from GPU to NVVM.
This enables combining the patterns with other patterns into larger lowerings.

PiperOrigin-RevId: 284979271
2019-12-11 07:14:33 -08:00
Alexander Belyaev bae8a7a724 [Linalg] Add tiling for IndexedGenericOp with a region.
PiperOrigin-RevId: 284949355
2019-12-11 02:56:40 -08:00
Marcel Koester 98fbf41044 Add initial liveness analysis including test cases.
Closes tensorflow/mlir#255

PiperOrigin-RevId: 284935454
2019-12-11 01:03:25 -08:00
Aart Bik 9826fe5c9f [VectorOps] Add lowering of vector.insert to LLVM IR
For example, an insert

  %0 = vector.insert %arg0, %arg1[3 : i32] : f32 into vector<4xf32>

becomes

  %0 = llvm.mlir.constant(3 : i32) : !llvm.i32
  %1 = llvm.insertelement %arg0, %arg1[%0 : !llvm.i32] : !llvm<"<4 x float>">

A more elaborate example, inserting an element in a higher dimension
vector

  %0 = vector.insert %arg0, %arg1[3 : i32, 7 : i32, 15 : i32] : f32 into vector<4x8x16xf32>

becomes

  %0 = llvm.extractvalue %arg1[3 : i32, 7 : i32] : !llvm<"[4 x [8 x <16 x float>]]">
  %1 = llvm.mlir.constant(15 : i32) : !llvm.i32
  %2 = llvm.insertelement %arg0, %0[%1 : !llvm.i32] : !llvm<"<16 x float>">
  %3 = llvm.insertvalue %2, %arg1[3 : i32, 7 : i32] : !llvm<"[4 x [8 x <16 x float>]]">

PiperOrigin-RevId: 284882443
2019-12-10 17:12:49 -08:00
Andy Davis 4d8ba88610 Add VectorOp transform pattern which splits vector TransferReadOps to target vector unroll size.
PiperOrigin-RevId: 284880592
2019-12-10 17:02:51 -08:00
Uday Bondhugula 36a415bcc5 More affine expr simplifications for floordiv and mod
Add one more simplification for floordiv and mod affine expressions.
Examples:
 (2*d0 + 1) floordiv 2 is simplified to d0
 (8*d0 + 4*d1 + d2) floordiv 4 simplified to 4*d0 + d1 + d2 floordiv 4.
 etc.

 Similarly, (4*d1 + 1) mod 2 is simplified to 1,
            (2*d0 + 8*d1) mod 8 simplified to 2*d0 mod 8.

Change getLargestKnownDivisor to return int64_t to be consistent and
to avoid casting at call sites (since the return value is used in expressions
of int64_t/index type).

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#202

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/202 from bondhugula:affine b13fcb2f1c00a39ca5434613a02408e085a80e77
PiperOrigin-RevId: 284866710
2019-12-10 16:00:53 -08:00
Alex Zinenko d1213ae51d Move gpu.launch_func to ODS. NFC
Move the definition of gpu.launch_func operation from hand-rolled C++
implementation to the ODS framework. Also move the documentation. This only
performs the move and remains a non-functional change, a follow-up will clean
up the custom functions that can be auto-generated using ODS.

PiperOrigin-RevId: 284842252
2019-12-10 13:55:21 -08:00
Nicolas Vasilache 995048d7b7 Fold TestLinalgTilePermutePatterns into TestLinalgTransformPatterns - NFC
Centralize all patterns that test Linalg transforms in a single pass.

PiperOrigin-RevId: 284835938
2019-12-10 13:26:15 -08:00
River Riddle 9ed22ae5b8 Refactor the various operand/result/type iterators to use indexed_accessor_range.
This has several benefits:
* The implementation is much cleaner and more efficient.
* The ranges now have support for many useful operations: operator[], slice, drop_front, size, etc.
* Value ranges can now directly query a range for their types via 'getTypes()': e.g:
   void foo(Operation::operand_range operands) {
     auto operandTypes = operands.getTypes();
   }

PiperOrigin-RevId: 284834912
2019-12-10 13:21:22 -08:00
Jose Ignacio Gomez b19fed5415 [Linalg] Add a Linalg iterator permutation transformation
This patch closes issue tensorflow/mlir#272
We add a standalone iterator permutation transformation to Linalg.
This transformation composes a permutation map with the maps in the
"indexing_maps" attribute. It also permutes "iterator_types"
accordingly.

Change-Id: I7c1e693b8203aeecc595a7c012e738ca1100c857

Closes tensorflow/mlir#307

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/307 from tetuante:issue272 f7908d58792f4111119721885e247045104f1131
PiperOrigin-RevId: 284824102
2019-12-10 12:25:43 -08:00
Nicolas Vasilache ad38e49806 Uniformize Vector transforms as patterns on the model of Linalg - NFC
This reorganizes the vector transformations to be more easily testable as patterns and more easily composable into fused passes in the future.

PiperOrigin-RevId: 284817474
2019-12-10 11:54:33 -08:00
MLIR Team 8ccb350979 Add Py API for composing an affine expression with a map. Also allows extracting constant values for const expressions.
PiperOrigin-RevId: 284809623
2019-12-10 11:30:35 -08:00
Mahesh Ravishankar 04fdd33daf More convenience build methods for SPIR-V ops.
Add some convenience build methods to SPIR-V ops and update the
lowering to use these methods where possible.

For SPIRV::CompositeExtractOp, move the method to deduce the element
type based on base and indices into a convenience function. Some
additional functionality was needed to handle differences between parsing
and verification methods.

PiperOrigin-RevId: 284794404
2019-12-10 10:11:50 -08:00
Mehdi Amini 90b72dd616 Add a doc on guidelines for contributing a new dialect to the MLIR core repo
Closes tensorflow/mlir#263

PiperOrigin-RevId: 284760931
2019-12-10 07:01:51 -08:00
Alex Zinenko ac4873322f Drop Markdown style annotations
These come from a non-standard extenion that is not available on Github, so it
only clutters the documentation source with {.mlir} or {.ebnf} tags.

PiperOrigin-RevId: 284733003
2019-12-10 03:00:57 -08:00
Jacques Pienaar acb23ff48d Fix build breakage on gcc-5
Avoid `error: could not convert ‘(const char*)"reduction"’ from ‘const char*’ to ‘llvm::StringLiteral’`. Tested with gcc-5.5.

PiperOrigin-RevId: 284677810
2019-12-09 18:19:07 -08:00
Aart Bik 1fe65688d4 [VectorOps] Add a ShuffleOp to the VectorOps dialect
For example

 %0 = vector.shuffle %x, %y [3 : i32, 2 : i32, 1 : i32, 0 : i32] : vector<2xf32>, vector<2xf32>

yields a vector<4xf32> result with a permutation of the elements of %x and %y

PiperOrigin-RevId: 284657191
2019-12-09 16:15:41 -08:00
Aart Bik 0e963b9c42 [VectorOps] Fix off-by-one error in insert/extract validation
PiperOrigin-RevId: 284652653
2019-12-09 15:54:23 -08:00
River Riddle 3f9744a6b7 Refactor the Block support classes.
Each of the support classes for Block are now moved into a new header BlockSupport.h. The successor iterator class is also reimplemented as an indexed_accessor_range. This makes the class more efficient, and expands on its available functionality.

PiperOrigin-RevId: 284646792
2019-12-09 15:24:43 -08:00
River Riddle 7be6a40ab9 Add new indexed_accessor_range_base and indexed_accessor_range classes that simplify defining index-able ranges.
Many ranges want similar functionality from a range type(e.g. slice/drop_front/operator[]/etc.), so these classes provide a generic implementation that may be used by many different types of ranges. This removes some code duplication, and also empowers many of the existing range types in MLIR(e.g. result type ranges, operand ranges, ElementsAttr ranges, etc.). This change only updates RegionRange and ValueRange, more ranges will be updated in followup commits.

PiperOrigin-RevId: 284615679
2019-12-09 12:55:40 -08:00
shanshanpt 56da74476c Fix minor spelling tweaks.
Closes tensorflow/mlir#306

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/306 from shanshanpt:master 11430c2131281d84a432f45e854e29917b336e8d
PiperOrigin-RevId: 284613648
2019-12-09 12:45:20 -08:00
Denis Khalikov 34265dad65 [spirv] Add CompositeConstruct operation.
Closes tensorflow/mlir#308

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/308 from denis0x0D:sandbox/composite_construct 9ef7180f77f9374bcd05afc4f9e6c1d2d72d02b7
PiperOrigin-RevId: 284613617
2019-12-09 12:43:53 -08:00
Lei Zhang 2c7e8ed7c6 [spirv] Add spv.IAdd, spv.ISub, and spv.IMul folders
The patterns to be folded away can be commonly generated
during lowering to SPIR-V.

PiperOrigin-RevId: 284604855
2019-12-09 11:59:10 -08:00
Nicolas Vasilache 5a48e40a65 Factor out commonly reusable names across structured ops dialects
This CL starts extracting commonalities between dialects that use the structured ops abstractions. Also fixes an OSS build issue where StringRef were incorrectly used with constexpr.

PiperOrigin-RevId: 284591114
2019-12-09 11:01:40 -08:00
Jacques Pienaar 89cef725f4 ODS: Generate named accessors for raw attributes
Currently named accessors are generated for attributes returning a
consumer-friendly type. But sometimes the attributes are used while transforming an
existing op and then the returned type has to be converted back into an
attribute or the raw `getAttr` needs to be used. Generate raw named accessors
for attributes to reference the raw attributes without having to use the string
interface, for better compile-time verification. This allows calling
`blahAttr()` instead of `getAttr("blah")`.

Raw here refers to returning the underlying storage attribute.

PiperOrigin-RevId: 284583426
2019-12-09 10:29:34 -08:00
Mahesh Ravishankar 4a62019eb8 Add lowering for module with gpu.kernel_module attribute.
The existing GPU to SPIR-V lowering created a spv.module for every
function with gpu.kernel attribute. A better approach is to lower the
module that the function lives in (which has the attribute
gpu.kernel_module) to a spv.module operation. This better captures the
host-device separation modeled by GPU dialect and simplifies the
lowering as well.

PiperOrigin-RevId: 284574688
2019-12-09 09:52:21 -08:00
Andy Davis 312ccb1c0f Unify vector op unrolling transformation.
Unifies the vector op unrolling transformation by using the same unrolling implementation for contraction and elementwise operations.
Removes fake fork/join operations, which are no longer needed now that we have the InsertStridedSlice operation.

PiperOrigin-RevId: 284570784
2019-12-09 09:35:15 -08:00
Kazuaki Ishizaki ae05cf27c6 Minor spelling tweaks
Closes tensorflow/mlir#304

PiperOrigin-RevId: 284568358
2019-12-09 09:23:48 -08:00
Nicolas Vasilache 91c0074624 [StructuredOps][Linalg] Add a primitive pattern to rewrite the linalg.generic form of matmul to vector form.
This CL uses the newly expanded matcher support to easily detect when a linalg.generic has a multiply-accumulate body. A linalg.generic with such a body is rewritten as a vector contraction.
This CL additionally limits the rewrite to the case of matrix multiplication on contiguous and statically shaped memrefs for now.

Before expanding further, we should harden the infrastructure for expressing custom ops with the structured ops abstraction.

PiperOrigin-RevId: 284566659
2019-12-09 09:14:39 -08:00
Jacques Pienaar 70aeb4566e Add RegionRange for when need to abstract over different region iteration
Follows ValueRange in representing a generic abstraction over the different
ways to represent a range of Regions. This wrapper is not as general as
ValueRange and only considers the current cases of interest: MutableArrayRef<Region>
and ArrayRef<std::unique_ptr<Region>>, as occur during op construction vs. op
region querying.

Note: ArrayRef<std::unique_ptr<Region>> allows for unset regions, so this range
returns a pointer to a Region instead of a Region.
PiperOrigin-RevId: 284563229
2019-12-09 08:57:56 -08:00
Nicolas Vasilache 7b19bd5411 Post-submit cleanups in RecursiveMatchers
This CL addresses leftover cleanups and adds a test mixing RecursiveMatchers and m_Constant
that captures properly.

PiperOrigin-RevId: 284551567
2019-12-09 07:47:35 -08:00
Uday Bondhugula a63f6e0bf9 Replace spurious SmallVector constructions with ValueRange
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#305

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/305 from bondhugula:value_range 21d1fae73f549e3c8e72b60876eff1b864cea39c
PiperOrigin-RevId: 284541027
2019-12-09 06:26:33 -08:00
Nicolas Vasilache ade58a268c Add a layer of recursive matchers that compose.
This CL adds support for building matchers recursively.
The following matchers are provided:

1. `m_any()` can match any value
2. `m_val(Value *)` binds to a value and must match it
3. `RecursivePatternMatcher<OpType, Matchers...>` n-arity pattern that matches `OpType` and whose operands must be matched exactly by `Matchers...`.

This allows building expression templates for patterns, declaratively, in a very natural fashion.
For example, the pattern `p9`, defined as follows:
```
  auto mul_of_muladd = m_Op<MulFOp>(m_Op<MulFOp>(), m_Op<AddFOp>());
  auto mul_of_anyadd = m_Op<MulFOp>(m_any(), m_Op<AddFOp>());
  auto p9 = m_Op<MulFOp>(m_Op<MulFOp>(
                     mul_of_muladd, m_Op<MulFOp>()),
                   m_Op<MulFOp>(mul_of_anyadd, mul_of_anyadd));
```

successfully matches `%6` in:
```
  %0 = addf %a, %b: f32
  %1 = addf %a, %c: f32 // matched
  %2 = addf %c, %b: f32
  %3 = mulf %a, %2: f32 // matched
  %4 = mulf %3, %1: f32 // matched
  %5 = mulf %4, %4: f32 // matched
  %6 = mulf %5, %5: f32 // matched
```

Note that 0-ary matchers can be used as leaves in place of n-ary matchers. This alleviates the need to pass explicit `m_any()` leaves.

In the future, we may add extra patterns to specify that operands may be matched in any order.

PiperOrigin-RevId: 284469446
2019-12-08 18:09:40 -08:00
Lei Zhang 9a4c2df480 NFC: Expose constFoldBinaryOp via a header
This allows other dialects to reuse the logic to support constant
folding binary operations and reduces code duplication.

PiperOrigin-RevId: 284428721
2019-12-08 06:25:54 -08:00
River Riddle d6ee6a0310 Update the builder API to take ValueRange instead of ArrayRef<Value *>
This allows for users to provide operand_range and result_range in builder.create<> calls, instead of requiring an explicit copy into a separate data structure like SmallVector/std::vector.
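
A usage sketch (hypothetical `MyOp`; assumes `op`, `loc`, and `builder` are in scope):

    // Before: operands had to be copied into a container convertible to ArrayRef<Value *>.
    SmallVector<Value *, 4> operands(op.getOperands().begin(), op.getOperands().end());
    builder.create<MyOp>(loc, operands);

    // After: an operand_range can be passed directly as a ValueRange, no copy needed.
    builder.create<MyOp>(loc, op.getOperands());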

PiperOrigin-RevId: 284360710
2019-12-07 10:35:41 -08:00
River Riddle 9d1a0c72b4 Add a new ValueRange class.
This class represents a generic abstraction over the different ways to represent a range of Values: ArrayRef<Value *>, operand_range, result_range. This class will allow for removing the many instances of explicit SmallVector<Value *, N> construction. It has the same memory cost as ArrayRef, and only suffers cost from indexing(if+elsing the different underlying representations).

This change only updates a few of the existing usages, with more to be changed in followups; e.g. 'build' API.

PiperOrigin-RevId: 284307996
2019-12-06 20:07:23 -08:00
Nicolas Vasilache d27bc1db6a Improve Linalg documentation following the Structured Ops presentation.
PiperOrigin-RevId: 284291653
2019-12-06 17:09:16 -08:00
River Riddle 8904e91035 Add a flag to the IRPrinter instrumentation to only print after a pass if there is a change to the IR.
This adds an additional filtering mode for printing after a pass that checks to see if the pass actually changed the IR before printing it. This "change" detection is implemented using a SHA1 hash of the current operation and its children.

PiperOrigin-RevId: 284291089
2019-12-06 17:05:05 -08:00
Uday Bondhugula ca23bd78d4 NFC - update doc, comments, vim syntax file
- for the symbol rules, the code was updated but the doc wasn't.

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#284

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/284 from bondhugula:doc 9aad8b8a715559f7ce61265f3da3f8a3c11b45ea
PiperOrigin-RevId: 284283712
2019-12-06 16:17:06 -08:00
nmostafa fcc215e399 Fix langref code snippet - NFC
Closes tensorflow/mlir#294

PiperOrigin-RevId: 284281172
2019-12-06 16:03:51 -08:00
Mahesh Ravishankar 6500b7e0c0 NFC: Separate implementation and definition in ConvertStandardToSPIRV.cpp
PiperOrigin-RevId: 284274326
2019-12-06 15:26:17 -08:00
Jacques Pienaar 4add9edd72 Change inferReturnTypes to return LogicalResult and values
Previously the error case was using a sentinel in the error case which was bad. Also make the one `build` invoke the other `build` to reuse verification there.

And follow up on suggestion to use formatv which I missed during previous review.

PiperOrigin-RevId: 284265762
2019-12-06 14:42:45 -08:00
Alex Zinenko e96150eb46 Replace custom getBody method with an ODS-generated in gpu::LaunchOp
PiperOrigin-RevId: 284262981
2019-12-06 14:29:25 -08:00
Mahesh Ravishankar 883f555726 During serialization do a walk of ops in module to find spv.module.
During lowering, spv.module might be within other modules (for example
gpu kernel module). Walk the module op to find spirv module to
serialize.
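
A simplified sketch of the approach (assumed shape; `serialize` is a hypothetical helper):

    // Walk the top-level module to find nested spv.module ops, rather than assuming
    // they are immediate children of the module being serialized.
    module.walk([&](spirv::ModuleOp spvModule) {
      serialize(spvModule); // hypothetical helper
    });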

PiperOrigin-RevId: 284262550
2019-12-06 14:27:03 -08:00
Alex Zinenko 3230267d0d Move GPU::LaunchOp to ODS. NFC.
Move the definition of the GPU launch operation from hand-rolled C++ code to
ODS framework. This only does the moves, a follow-up is necessary to clean up
users of custom functions that could be auto-generated by ODS.

PiperOrigin-RevId: 284261856
2019-12-06 14:23:37 -08:00
Alex Zinenko 6e0a2e4e2f Use named traits in the ODS definition of LLVMFuncOp
The "FunctionLike" and "IsIsolatedFromAbove" op traits are now defined as named
records in base ODS file. Use those instead of NativeOpTrait referring to the
C++ class name in the ODS definition of LLVMFuncOp. NFC.

PiperOrigin-RevId: 284260891
2019-12-06 14:18:37 -08:00
Aart Bik d37f27251f [VecOps] Rename vector.[insert|extract]element to just vector.[insert|extract]
Since these operations lower to [insert|extract][element|value] at LLVM
dialect level, neither element nor value would correctly reflect the meaning.

PiperOrigin-RevId: 284240727
2019-12-06 12:39:25 -08:00
Alex Zinenko be3ed14658 LLVM::GlobalOp: take address space as builder argument
Accept the address space of the global as a builder argument when constructing
an LLVM::GlobalOp instance. This decreases the reliance of LLVM::GlobalOp users
on the internal name of the attribute used for this purpose. Update several
uses of the address space in GPU to NVVM conversion.

PiperOrigin-RevId: 284233254
2019-12-06 12:01:46 -08:00
Alex Zinenko ccc767d63b Move GPU::FuncOp definition to ODS - NFC
Move the definition of the GPU function operation from hand-rolled C++ code to
ODS framework. This only does the moves, a follow-up is necessary to clean up
users of custom functions that could be auto-generated by ODS.

PiperOrigin-RevId: 284233245
2019-12-06 12:00:32 -08:00
MLIR Team 9ef9e23682 Provide a way to get the type of a ValueHandle.
PiperOrigin-RevId: 284221337
2019-12-06 11:07:10 -08:00
Aart Bik b36aaeafb1 [VectorOps] Add lowering of vector.broadcast to LLVM IR
For example, a scalar broadcast

    %0 = vector.broadcast %x : f32 to vector<2xf32>
    return %0 : vector<2xf32>

which expands scalar x into vector [x,x] by lowering
to the following LLVM IR dialect to implement the
duplication over the leading dimension.

    %0 = llvm.mlir.undef : !llvm<"<2 x float>">
    %1 = llvm.mlir.constant(0 : index) : !llvm.i64
    %2 = llvm.insertelement %x, %0[%1 : !llvm.i64] : !llvm<"<2 x float>">
    %3 = llvm.shufflevector %2, %0 [0 : i32, 0 : i32] : !llvm<"<2 x float>">, !llvm<"<2 x float>">
    return %3 : vector<2xf32>

In the trailing dimensions, the operand is simply
"passed through", unless a more elaborate "stretch"
is required.

For example

    %0 = vector.broadcast %arg0 : vector<1xf32> to vector<4xf32>
    return %0 : vector<4xf32>

becomes

    %0 = llvm.mlir.undef : !llvm<"<4 x float>">
    %1 = llvm.mlir.constant(0 : index) : !llvm.i64
    %2 = llvm.extractelement %arg0[%1 : !llvm.i64] : !llvm<"<1 x float>">
    %3 = llvm.mlir.constant(0 : index) : !llvm.i64
    %4 = llvm.insertelement %2, %0[%3 : !llvm.i64] : !llvm<"<4 x float>">
    %5 = llvm.shufflevector %4, %0 [0 : i32, 0 : i32, 0 : i32, 0 : i32] : !llvm<"<4 x float>">, !llvm<"<4 x float>">
    llvm.return %5 : !llvm<"<4 x float>">

PiperOrigin-RevId: 284219926
2019-12-06 11:02:29 -08:00
Jacques Pienaar 398f04aa49 Generate builder for ops that use InferTypeOpInterface trait in ODS
For ops with the infer type op interface defined, generate a build version that calls the inference method. This is an intermediate step to removing the special casing of SameOperandsAndResultType & FirstAttrDerivedResultType. After that would come generating the inference code, with the initial focus on shaped container types. In between I plan to refactor these a bit to reuse generated paths. The intention is not to add the type inference trait in multiple places, but rather to take advantage of the current modelling in ODS where possible to emit it instead.

Switch the `inferReturnTypes` method to be static.

Skipping ops with regions here as I don't like the Region vs unique_ptr<Region> difference at the moment, and I want the infer return type trait to be useful for verification too. So instead, just skip it for now to avoid churn.

PiperOrigin-RevId: 284217913
2019-12-06 10:53:06 -08:00
Alex Zinenko e216a72ab8 Add conversions of GPU func with memory attributions to LLVM/NVVM
GPU functions use memory attributions, a combination of Op attributes and
region arguments, to specify function-wide buffers placed in workgroup or
private memory spaces. Introduce a lowering pattern for GPU functions to be
converted to LLVM functions taking into account memory attributions. Workgroup
attributions get transformed into module-level globals with unique names
derived from function names. Private attributions get converted into
llvm.allocas inside the function body. In both cases, we inject at the
beginning of the function the IR that obtains the raw pointer to the data and
populates a MemRef descriptor based on the MemRef type of buffer, making
attributions compose with the rest of the MemRef lowering and transparent for
use with std.load and std.store. While using raw pointers instead of
descriptors might have been more efficient, it is better implemented as a
canonicalization or a separate transformation so that non-attribution memrefs
could also benefit from it.

PiperOrigin-RevId: 284208396
2019-12-06 10:08:43 -08:00
Alexandre E. Eichenberger 3c69ca1e69 fix examples in comments
Closes tensorflow/mlir#301

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/301 from AlexandreEichenberger:vect-doc-update 7e5418a9101a4bdad2357882fe660b02bba8bd01
PiperOrigin-RevId: 284202462
2019-12-06 09:40:50 -08:00
River Riddle 79047e1ab5 Use regex to fix failure when stats are disabled.
It would be nice if we could detect if stats were enabled or not and use 'Requires', but this isn't possible to do at configure time.

Fixes tensorflow/mlir#296

PiperOrigin-RevId: 284200271
2019-12-06 09:29:14 -08:00
Andy Davis 41f8e105fa Unroll vector masks along with their associated vector arguments.
Updates vector ContractionOp to use proper vector masks (produced by CreateMaskOp/ConstantMaskOp).
Leverages the following canonicalizations in unrolling unit test: CreateMaskOp -> ConstantMaskOp, StridedSliceOp(ConstantMaskOp) -> ConstantMaskOp
Removes IndexTupleOp (no longer needed now that we have vector mask ops).
Updates all unit tests.

PiperOrigin-RevId: 284182168
2019-12-06 07:37:28 -08:00
Denis Khalikov 9ca53130f3 [spirv] Reorder `erase` and `emplace` to avoid "invalid iterator access".
The iterator should be erased before adding a new entry
into blockMergeInfo to avoid iterator invalidation.

Closes tensorflow/mlir#299

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/299 from denis0x0D:sandbox/reoder_erase 983be565809aa0aadfc7e92962e4d4b282f63c66
PiperOrigin-RevId: 284173235
2019-12-06 06:26:56 -08:00
Uday Bondhugula 3ade6a7d15 DimOp folding for alloc/view dynamic dimensions
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#253

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/253 from bondhugula:dimop a4b464f24ae63fd259114558d87e11b8ee4dae86
PiperOrigin-RevId: 284169689
2019-12-06 06:00:54 -08:00
Kazuaki Ishizaki 84a6182ddd minor spelling tweaks
Closes tensorflow/mlir#290

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/290 from kiszk:spelling_tweaks_201912 9d9afd16a723dd65754a04698b3976f150a6054a
PiperOrigin-RevId: 284169681
2019-12-06 05:59:30 -08:00
Alex Zinenko 58adf99ed1 LLVM::AddressOfOp: properly take into account the address space
The AddressOf operation in the LLVM dialect return a pointer to a global
variable. The latter may be in a non-default address space as indicated by the
"addr_space" attribute. Check that the address space of the pointer returned by
AddressOfOp matches that of the referenced GlobalOp. Update the AddressOfOp
builder to respect this constraint.

PiperOrigin-RevId: 284138860
2019-12-06 01:09:13 -08:00
River Riddle 12e57cf6c0 NFC: Add documentation for `-mlir-print-op-on-diagnostic` and `-mlir-print-stacktrace-on-diagnostic`.
This change adds proper documentation in Diagnostics.md, allowing for users to more easily find them.

PiperOrigin-RevId: 284092336
2019-12-05 17:47:03 -08:00
River Riddle 71999ff7f2 Add include path to the TestDialect to fix broken build.
PiperOrigin-RevId: 284067891
2019-12-05 15:33:33 -08:00
Jose Ignacio Gomez f60bbb6c3b [Linalg] Add permutation information to tiling
This patch closes issue tensorflow/mlir#271.
It adds an optional permutation map to declarative tiling transformations.
The map is expressed as a list of integers.

Closes tensorflow/mlir#288

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/288 from tetuante:issue271 2df2938d6a1f01b3bc404ded08dea2dd1e10b588
PiperOrigin-RevId: 284064151
2019-12-05 15:14:59 -08:00
River Riddle da53000fb4 Refactor the IRPrinting instrumentation to take a derivable config.
This allows for more interesting behavior from users, e.g. enabling the ability to dump the IR to a separate file for each pass invocation.

PiperOrigin-RevId: 284059447
2019-12-05 14:53:01 -08:00
nmostafa daff60cd68 Add UnrankedMemRef Type
Closes tensorflow/mlir#261

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/261 from nmostafa:nmostafa/unranked 96b6e918f6ed64496f7573b2db33c0b02658ca45
PiperOrigin-RevId: 284037040
2019-12-05 13:13:20 -08:00
Denis Khalikov e67acfa468 [spirv] Add CompositeInsertOp operation
A CompositeInsertOp operation make a copy of a composite object,
while modifying one part of it.

Closes tensorflow/mlir#292

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/292 from denis0x0D:sandbox/composite_insert 2200962b9057bda53cd2f2866b461e2797196380
PiperOrigin-RevId: 284036551
2019-12-05 13:10:44 -08:00
River Riddle 33a64540ad Add support for instance specific pass statistics.
Statistics are a way to keep track of what the compiler is doing and how effective various optimizations are. It is useful to see what optimizations are contributing to making a particular program run faster. Pass-instance specific statistics take this even further as you can see the effect of placing a particular pass at specific places within the pass pipeline, e.g. they could help answer questions like "what happens if I run CSE again here".

Statistics can be added to a pass by simply adding members of type 'Pass::Statistics'. This class takes as constructor arguments: the parent pass pointer, a name, and a description. Statistics can be dumped by the pass manager in a similar manner to how pass timing information is dumped, i.e. via PassManager::enableStatistics programmatically; or -pass-statistics and -pass-statistics-display via the command-line pass manager options.

Below is an example:

struct MyPass : public OperationPass<MyPass> {
  Statistic testStat{this, "testStat", "A test statistic"};

  void runOnOperation() {
    ...
    ++testStat;
    ...
  }
};

$ mlir-opt -pass-pipeline='func(my-pass,my-pass)' foo.mlir -pass-statistics

Pipeline Display:
===-------------------------------------------------------------------------===
                         ... Pass statistics report ...
===-------------------------------------------------------------------------===
'func' Pipeline
  MyPass
    (S) 15 testStat - A test statistic
  MyPass
    (S)  6 testStat - A test statistic

List Display:
===-------------------------------------------------------------------------===
                         ... Pass statistics report ...
===-------------------------------------------------------------------------===
MyPass
  (S) 21 testStat - A test statistic

PiperOrigin-RevId: 284022014
2019-12-05 11:53:28 -08:00
Mahesh Ravishankar 4d61a79db4 Allow specification of the workgroup size for GPUToSPIRV lowering.
The SPIR-V/Vulkan spec requires the workgroup size to be specified with
the spv.ExecutionMode operation. This was hard-wired to be set to a
particular value. It is now changed to be configurable by clients of
the pass or of the patterns that implement the lowering from GPU to
SPIRV.

PiperOrigin-RevId: 284017482
2019-12-05 11:31:57 -08:00
Lei Zhang 037044b0ae Add spv.AtomicCompareExchangeWeak
PiperOrigin-RevId: 283997917
2019-12-05 10:06:24 -08:00
River Riddle 780f0c043a Add a flag to dump the current stack trace when emitting a diagnostic.
It is often desirable to know where within the program a diagnostic was emitted, without resorting to assert/unreachable, which crash the program. This change adds a flag `mlir-print-stacktrace-on-diagnostic` that attaches the current stack trace as a note to every diagnostic that gets emitted.

PiperOrigin-RevId: 283996373
2019-12-05 10:00:25 -08:00
Lei Zhang c0a9de29ad [spirv] Fix nested loop (de)serialization
For serialization, when we have nested ops, the inner loop will create multiple
SPIR-V blocks. If the outer loop has block arguments (which correspond to
OpPhi instructions), we defer the handling of the OpPhi's parent block
until we have serialized all blocks and then fix it up with the result <id>. These two
cases happening together were generating an invalid SPIR-V blob because we
previously assumed the parent block to be the block containing the terminator.
That is not true anymore when the block contains structured control flow ops.
If that happens, it should be fixed to use the structured control flow op's
merge block.

For deserialization, we record a map from header blocks to their corresponding
merge and continue blocks during the initial deserialization and then use the
info to construct spv.selection/spv.loop. The existing implementation will also
fall apart when we have nested loops. If so, we clone all blocks for the outer
loop, including the ones for the inner loop, to the spv.loop's region. So the map
for header blocks' merge info needs to be updated; otherwise we are operating
on already deleted blocks.

PiperOrigin-RevId: 283949230
2019-12-05 04:39:37 -08:00
Mehdi Amini b14ee5a9a1 Fix MLIR Build after LLVM upstream JIT changes (getMainJITDylib removed)
The getMainJITDylib() method was removed in 4fc68b9b7f, replace it by creating a JITDylib on the fly.

PiperOrigin-RevId: 283948595
2019-12-05 04:32:46 -08:00
Tres Popp b8cd0c1486 Move ModuleManager functionality into mlir::SymbolTable.
Note for broken code, the following transformations occurred:
ModuleManager::insert(Block::iterator, Operation*) - > SymbolTable::insert(Operation*, Block::iterator)
ModuleManager::lookupSymbol -> SymbolTable::lookup
ModuleManager::getModule() -> SymbolTable::getOp()
ModuleManager::getContext() -> SymbolTable::getOp()->getContext()
ModuleManager::* -> SymbolTable::*
PiperOrigin-RevId: 283944635
2019-12-05 03:56:46 -08:00
Lei Zhang b60799b71b Add MLIRIR as a dependency to LLVM and related dialects
Fixes tensorflow/mlir#289

PiperOrigin-RevId: 283914472
2019-12-04 23:45:35 -08:00
River Riddle d9da8b647a Optimize operation ordering to support non-congruent indices.
This change adds support for non-congruent indices in the operation ordering within a basic block. The effect of this is that insertions are less likely to cause an invalidation of the ordering within a block. This has a big effect on modules that have very large basic blocks.

PiperOrigin-RevId: 283858136
2019-12-04 16:10:13 -08:00
River Riddle 2c930f8d9d Add emitOptional(Error|Warning|Remark) functions to simplify emission with an optional location.
In some situations a diagnostic may optionally be emitted by the presence of a location, e.g. attribute and type verification. These situations currently require extra 'if(loc) emitError(...); return failure()' wrappers that make verification clunky. These new overloads take an optional location and a list of arguments to the diagnostic, and return a LogicalResult. We take the arguments directly and return LogicalResult instead of returning InFlightDiagnostic because we cannot create a valid diagnostic with a null location. This creates an awkward situation where a user may try to treat the, potentially null, diagnostic as a valid one and encounter crashes when attaching notes/etc. Below is an example of how these methods simplify some existing usages:

Before:

  if (loc)
    emitError(*loc, "this is my diagnostic with argument: ") << 5;
  return failure();

After:

  return emitOptionalError(loc, "this is my diagnostic with argument: ", 5);

PiperOrigin-RevId: 283853599
2019-12-04 15:49:42 -08:00
Nicolas Vasilache b3f7cf80a7 Add a CL option to Standard to LLVM lowering to use alloca instead of malloc/free.
In the future, a more configurable malloc and free interface should be used and exposed via
extra parameters to the `createLowerToLLVMPass`. Until requirements are gathered, a simple CL flag allows generating code that runs successfully on hardware that cannot use the stdlib.

PiperOrigin-RevId: 283833424
2019-12-04 14:16:00 -08:00
Andy Davis d20d763241 Add canonicalization patterns for vector CreateMaskOp and StridedSliceOp to be used in the unroll vector op transformation.
Adds a ConstantMaskOp to the vector ops dialect.
Adds the following canonicalization patterns:
CreateMaskOp -> ConstantMaskOp
StridedSliceOp(ConstantMaskOp) -> ConstantMaskOp

PiperOrigin-RevId: 283816752
2019-12-04 13:00:43 -08:00
River Riddle 6f895bec7d [CSE] NFC: Hash the attribute dictionary pointer instead of the list of attributes.
PiperOrigin-RevId: 283810829
2019-12-04 12:32:08 -08:00
Nicolas Vasilache edfaf925cf Drop MaterializeVectorTransfers in favor of simpler declarative unrolling
Now that we have unrolling as a declarative pattern, we can drop a full pass that has gone stale. In the future we may want to add specific unrolling patterns for VectorTransferReadOp.

PiperOrigin-RevId: 283806880
2019-12-04 12:11:42 -08:00
River Riddle 31b3e2248b NFC: Fix mismatches between LangRef.md and actual parser implementation.
PiperOrigin-RevId: 283805832
2019-12-04 12:06:24 -08:00
Lei Zhang 1221918b85 [spirv] Define a few more extensions in SPIRVBase.td
PiperOrigin-RevId: 283798496
2019-12-04 11:34:36 -08:00
Sean Silva 26484bc0b6 Print out large elementsattr's such that they are parseable.
I found that when running crash reproducers, the elided elementsattr's
would prevent parsing the IR repro. I found myself manually going and
replacing the "..." with some valid IR.

With this change, we now print elided attrs as `opaque<"", "0xDEADBEEF">`
to clearly delineate them as being elided while still being parseable.

PiperOrigin-RevId: 283781806
2019-12-04 10:19:54 -08:00
Uday Bondhugula 0827fa562d NFC - fix name / comments - isAccessInvariant
- the name was misleading; this is really checking if a Value being used
  to index was loop IV invariant. Update comment.

- the method is only used locally; what can be exposed in the future is
  isAccessInvariant(LoadOrStoreOp op, Value *iv)

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#285

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/285 from bondhugula:quickfix fe5837abe987980c4ab469a9aa7de8e4f0007d9f
PiperOrigin-RevId: 283771923
2019-12-04 09:30:22 -08:00
Scott Todd bf45ff6aab [spirv] Adding sqrt op in the GLSL extension.
PiperOrigin-RevId: 283769736
2019-12-04 09:16:23 -08:00
Alex Zinenko 75175134d4 Loop coalescing: fix pointer chaining in use-chain traversal
In the replaceAllUsesExcept utility function called from loop coalescing the
iteration over the use-chain is incorrect. The use list nodes (IROperands) have
next/prev links, and bluntly resetting the use would make the loop continue
on uses of the value that was replaced instead of the original one. As a
result, it could miss the existing uses and update the wrong ones. Make sure we
increment the iterator before updating the use in the loop body.
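
A generic sketch of the corrected traversal pattern (hypothetical names, not the
exact MLIR code):

    // Advance the iterator before rewriting the use it points to, so the loop keeps
    // walking the original value's use list rather than jumping to the replacement's.
    for (auto it = value->use_begin(), e = value->use_end(); it != e;) {
      auto &use = *it++;        // advance first ...
      if (use.getOwner() != exceptionOp)
        use.set(newValue);      // ... then rewrite the use
    }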

Reported-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#291.

PiperOrigin-RevId: 283754195
2019-12-04 07:42:29 -08:00
Julian Gross f7c6bc70a9 Added new FAbs, FCeil, Cos, Neg, Sign, Tanh operations.
Closes tensorflow/mlir#251

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/251 from dfki-jugr:new_ops 0398997bf9953016898f873068e22916a062eb2b
PiperOrigin-RevId: 283750699
2019-12-04 07:17:30 -08:00
Andy Davis 34e1f4aa51 Adds support for unrolling single-result vector operations with iterator type lists and indexing maps to a target vector size.
Adds unit tests for unrolling the vector ContractionOp with different iteration orders.

PiperOrigin-RevId: 283747503
2019-12-04 06:53:37 -08:00
Kazuaki Ishizaki c8c36e7979 minor spelling tweaks
Closes tensorflow/mlir#250

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/250 from kiszk:spelling_tweaks_201911 50fc04443723190b764e824b6fcd2469fecb56e6
PiperOrigin-RevId: 283733032
2019-12-04 04:59:11 -08:00
Smit Hinsu da0b0b1a0e Avoid variable name conflict in MLIR tutorial code snippet
PiperOrigin-RevId: 283682865
2019-12-03 21:25:30 -08:00
Nicolas Vasilache 5c0c51a997 Refactor dependencies to expose Vector transformations as patterns - NFC
This CL refactors some of the MLIR vector dependencies to allow decoupling VectorOps, vector analysis, vector transformations and vector conversions from each other.
This makes the system more modular and allows extracting VectorToVector into VectorTransforms that do not depend on vector conversions.

This refactoring exhibited a bunch of cyclic library dependencies that have been cleaned up.

PiperOrigin-RevId: 283660308
2019-12-03 17:52:10 -08:00
Lei Zhang 50b2b26e70 [spirv] Add spv.GroupNonUniformBallot
This CL also did the following cleanup:
- Moved the test for spv.SubgroupBallotKHR to its own file
- Wrapped generated canonicalization patterns in anonymous namespace
- Updated header comments in SPVOps.td

PiperOrigin-RevId: 283650091
2019-12-03 16:44:09 -08:00
Mahesh Ravishankar c5ba37b6ae Add a pass to legalize operations before lowering to SPIR-V.
Not all StandardOps can be lowered to SPIR-V. For example, subview op
implementation requires use of pointer bitcasts which is not valid
according to SPIR-V spec (or at least is ambiguous about it). Such ops
need to be removed/transformed before lowering to SPIR-V. The
SPIRVLegalizationPass is added a place where such legalizations can be
added. Current implementation folds the subview ops with load/stores
so that the lowering itself does not have to convert a subview op.

PiperOrigin-RevId: 283642981
2019-12-03 16:06:17 -08:00
Sean Silva 82f9f9d112 Make diagnostic a bit clearer.
This prints out in case of any pass failure. Not just a crash.

PiperOrigin-RevId: 283616719
2019-12-03 14:01:25 -08:00
Andy Davis 2c13fd9f17 Add CreateMaskOp to the VectorOps dialect.
PiperOrigin-RevId: 283591888
2019-12-03 11:55:54 -08:00
Sean Silva 67515e8d7a Verifier: Better error message in case of successor operand mismatch.
In particular, print the successor number in the diagnostic.

PiperOrigin-RevId: 283585084
2019-12-03 11:24:31 -08:00
River Riddle 4741ec6af0 Allow analyses to provide a hook 'isInvalidated' to determine if they are truly invalidated.
The hook has the following form:
*   `bool isInvalidated(const AnalysisManager::PreservedAnalyses &)`

Given a preserved analysis set, the analysis returns true if it should truly be
invalidated. This allows for more fine-tuned invalidation in cases where an
analysis wasn't explicitly marked preserved, but may be preserved (or
invalidated) based upon other properties, such as analysis sets.
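
For example, an analysis could opt in roughly as follows (a sketch; `OtherAnalysis`
is a made-up dependency):

    struct MyAnalysis {
      MyAnalysis(Operation *op) { /* ... compute ... */ }

      /// Remain valid unless this analysis, or the analysis it depends on, was
      /// invalidated.
      bool isInvalidated(const AnalysisManager::PreservedAnalyses &pa) {
        return !pa.isPreserved<MyAnalysis>() || !pa.isPreserved<OtherAnalysis>();
      }
    };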

PiperOrigin-RevId: 283582889
2019-12-03 11:14:20 -08:00
Mahesh Ravishankar 353fb2bd38 Convert MemRefType to a linearized array in SPIR-V lowering.
The SPIR-V lowering used nested !spv.arrays to represent
multi-dimensional arrays, with the hope that in conjunction with the
layout annotations, the shape and layout of memref can be represented
directly. It is unclear though how portable this representation will
end up being. It will rely on driver compilers implementing complex
index computations faithfully. A more portable approach is to use
linearized arrays to represent memrefs and explicitly instantiate all
the index computation in SPIR-V. This gives added benefit that we can
further optimize the generated code in MLIR before generating the
SPIR-V binary.

PiperOrigin-RevId: 283571167
2019-12-03 10:21:16 -08:00
MLIR Team 2057733ffa Add Python bindings for affine expressions with binary operators.
PiperOrigin-RevId: 283569325
2019-12-03 10:12:11 -08:00
MLIR Team 1df7f4eb9d Add python bindings for ArrayAttr, AffineMapAttr.
PiperOrigin-RevId: 283561252
2019-12-03 09:32:51 -08:00
Alex Zinenko 993e79e9bd Fix ViewOp to have at most one offset operand
As described in the documentation, ViewOp is expected to take an optional
dynamic offset followed by a list of dynamic sizes. However, the ViewOp parser
did not include a check for the offset being a single value and accepted a
list of values instead.

Furthermore, several tests have been exercising the wrong syntax of a ViewOp,
passing multiple values to the dynamic stride list, which was not caught by the
parser. The trailing values could have been erroneously interpreted as dynamic
sizes. This is likely due to resyntaxing of the ViewOp, with the previous
syntax taking the list of sizes before the offset. Update the tests to use the
syntax with the offset preceding the sizes.

Worse, the conversion of ViewOp to the LLVM dialect assumed the wrong order of
operands with offset in the trailing position, and erroneously relied on the
permissive parsing that interpreted trailing dynamic offset values as leading
dynamic sizes. Fix the lowering to use the correct order of operands.

PiperOrigin-RevId: 283532506
2019-12-03 06:23:04 -08:00
Diego Caballero 330d1ff00e AffineLoopFusion: Prevent fusion of multi-out-edge producer loops
tensorflow/mlir#162 introduced a bug that
incorrectly allowed fusion of producer loops with multiple outgoing
edges. This commit fixes that problem. It also introduces a new flag to
disable sibling loop fusion so that we can test producer-consumer fusion
in isolation.

Closes tensorflow/mlir#259

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/259 from dcaballe:dcaballe/fix_multi_out_edge_producer_fusion 578d5661705fd5c56c555832d5e0528df88c5282
PiperOrigin-RevId: 283531105
2019-12-03 06:09:50 -08:00
Stephan Herhut 2125c0e3a8 Extend conversion of SubViewOp to llvm to also support cases where size and stride
are constant (i.e., there are no size and stride operands).

We recently added canonicalization that rewrites constant size and stride operands to
SubViewOp into static information in the type, so these patterns now occur during code
generation.

PiperOrigin-RevId: 283524688
2019-12-03 05:11:49 -08:00
Lei Zhang 1af9633d85 [spirv] Add spv.SubgroupBallotKHROp
PiperOrigin-RevId: 283522284
2019-12-03 04:49:56 -08:00
Alexander Belyaev d44e865020 [Linalg] Update/fix documentation for linalg.indexed_generic.
PiperOrigin-RevId: 283503642
2019-12-03 01:55:54 -08:00
Alex Zinenko fdbb99cd62 Add linkage support to LLVMFuncOp
A recent commit introduced the Linkage attribute to the LLVM dialect and used
it in the Global Op. Also use it in LLVMFuncOp. As per LLVM Language Reference,
if the linkage attribute is omitted, the function is assumed to have external
linkage.

PiperOrigin-RevId: 283493299
2019-12-03 00:26:44 -08:00
Lei Zhang 16a9296bc8 [spirv] NFC: reorder sections in SPIRVBase.td
Put extensions and capabilities at the very beginning because
they will be referenced later by other definitions.

PiperOrigin-RevId: 283416972
2019-12-02 14:22:10 -08:00
Lei Zhang 364b92fa10 NFC: use `&&` instead of `and`
PiperOrigin-RevId: 283392575
2019-12-02 12:27:14 -08:00
Aart Bik 3126004a5a [VectorOps] Add legality rules to broadcast
PiperOrigin-RevId: 283360101
2019-12-02 09:57:27 -08:00
Lei Zhang b41162b3af [ODS] Generate builders taking unwrapped value and defaults for attributes
Existing builders generated by ODS require attributes to be passed
in as mlir::Attribute or its subclasses. This is okay for aggregate-
parameter builders, which are primarily to be used by programmatic
C++ code generation; it is inconvenient for separate-parameter
builders meant to be called in manually written C++ code because
it requires developers to wrap raw values into mlir::Attribute by
themselves.

This CL extends to generate additional builder methods that
take raw values for attributes and handles the wrapping in the
builder implementation. Additionally, if an attribute appears
late in the arguments list and has a default value, the default
value is supplied in the declaration if possible.
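
A usage sketch (hypothetical op and attribute names; the exact generated signatures
depend on the op definition):

    // Previously, separate-parameter builders required wrapping values manually:
    builder.create<MyOp>(loc, resultType, input, builder.getI64IntegerAttr(0));

    // The additionally generated builder accepts the raw value, and a trailing
    // attribute with a default value may be omitted entirely:
    builder.create<MyOp>(loc, resultType, input, /*axis=*/0);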

PiperOrigin-RevId: 283355919
2019-12-02 09:33:57 -08:00
Mehdi Amini 5e6795070c Generate dialect documentations in the doc folder for every dialect
Also add a mlir-doc build target to generate all the docs.

PiperOrigin-RevId: 283353529
2019-12-02 09:18:22 -08:00
brett koonce e7c8e542f4 docs: minor spelling tweaks
Closes tensorflow/mlir#262

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/262 from brettkoonce:docs-sp 6833fc8aa41edd02d8bc7c3cbb84211cb8b0334c
PiperOrigin-RevId: 283352765
2019-12-02 09:13:32 -08:00
Denis Khalikov da3b305e7f Add missing `>` to the description of std.view.
Closes tensorflow/mlir#266

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/266 from denis0x0D:sandbox/miss_char a5f662e1bf103b5009da67d045ee2fcebf822ab0
PiperOrigin-RevId: 283340486
2019-12-02 07:59:10 -08:00
Lei Zhang 4982eaf87c [DRR] Introduce `$_` to ignore op argument match
Right now op argument matching in DRR is position-based, meaning we need to
specify N arguments for an op with N ODS-declared arguments. This can be annoying
when we don't want to capture all the arguments. `$_` is to remedy the situation.

PiperOrigin-RevId: 283339992
2019-12-02 07:54:50 -08:00
Lei Zhang 0d22a3fdc8 NFC: Update std.subview op to use AttrSizedOperandSegments
This turns a few manually written helper methods into auto-generated ones.

PiperOrigin-RevId: 283339617
2019-12-02 07:52:00 -08:00
JKIsaacLee 4231de7897 add missing '>' in Ch-2
add missing '>' in Ch-2
(tensor<2x3xf64)->(tensor<2x3xf64>)

Closes tensorflow/mlir#283

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/283 from JKIsaacLee:patch-1 b69fe8d51e2a540f7efaded159d35b88778ad159
PiperOrigin-RevId: 283333807
2019-12-02 07:09:05 -08:00
Alexander Belyaev 9630fcbc52 Lower linalg.indexed_generic with libcall to LLVM.
PiperOrigin-RevId: 283328994
2019-12-02 06:30:52 -08:00
Alex Zinenko d5e627f84b Introduce Linkage attribute to the LLVM dialect
LLVM IR supports linkage on global objects such as global variables and
functions. Introduce the Linkage attribute into the LLVM dialect, backed by an
integer storage. Use this attribute on LLVM::GlobalOp and make it mandatory.
Implement parsing/printing of the attribute and conversion to LLVM IR.

See tensorflow/mlir#277.

PiperOrigin-RevId: 283309328
2019-12-02 03:28:10 -08:00
Jacques Pienaar 2235333d58 mlir-tblgen: Dump input records when no generator is set
Follow LLVM's tblgen convention when no generator is set instead of asserting.

PiperOrigin-RevId: 283073690
2019-11-29 10:43:58 -08:00
Jacques Pienaar 52a7415178 Fix redundant convert and use NamedAttributeList as value
* Had leftover call that would result in converting to dictionary attr before
  being implicitly converted back to NamedAttributeList;
* NamedAttributeList is value typed, so don't use const reference;

PiperOrigin-RevId: 283072576
2019-11-29 10:26:56 -08:00
JKIsaacLee c9721e9a2b Fixed typo in Ch-1 of Toy tutorial
Closes tensorflow/mlir#282

PiperOrigin-RevId: 283064785
2019-11-29 08:48:54 -08:00
Denis Khalikov cd556f25de [spirv] Check that operand of `spirv::CompositeExtractOp` is constant while folding.
Closes tensorflow/mlir#281

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/281 from denis0x0D:sandbox/composite_ex_fold d02d73658bd1b9eaa515eb4e0aee34bc41d4252b
PiperOrigin-RevId: 282971563
2019-11-28 13:27:56 -08:00
Alex Zinenko 2f16bf7ac9 Split out FunctionLike printing/parsing into FunctionImplementation.{h,cpp}
Helper utilies for parsing and printing FunctionLike Ops are only relevant to
the implementation of the Op, not its definition. They depend on
OpImplementation.h and increase the inclusion footprint of FunctionSupport.h,
and do so only to provide some utilities in the "impl" namespace. Move them to
a separate files, similarly to OpDefinition/OpImplementation distinction, and
make only Op implementations use them while keeping headers cleaner. NFC.

PiperOrigin-RevId: 282964556
2019-11-28 11:51:23 -08:00
Jose Ignacio Gomez 0494ef60f7 [Linalg] Change attribute n_loop_types to iterator
This addresses issue tensorflow/mlir#270. Linalg is updated to take the same form
of iterator_types than vector contraction.

Closes tensorflow/mlir#280

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/280 from tetuante:PRissue270 d26d88d090d3765d3b9884bfabdd023143f27287
PiperOrigin-RevId: 282905396
2019-11-28 01:59:55 -08:00
Lei Zhang 5810efe1f1 NFC: A few cleanups for SPIRVLowering
Updated comments and used static instead of an anonymous namespace
to hide functions to be consistent with the existing codebase.

PiperOrigin-RevId: 282847784
2019-11-27 15:55:42 -08:00
Lei Zhang a4d7650230 [spirv] NFC: Add getZero() and getOne() static method to ConstantOp
Getting constant zero or one is very common so it merits a special handy
method on spirv::ConstantOp itself.

PiperOrigin-RevId: 282832572
2019-11-27 14:13:01 -08:00
Lei Zhang d4e4387fbf [spirv] Add folders for spv.IAdd and spv.IMul
Adding zero and multiplying one can be common when generating code
for index calculation.

This CL also sorted canonicalize.mlir to alphabetical order.

PiperOrigin-RevId: 282828055
2019-11-27 13:46:52 -08:00
Aart Bik 9f89c34f4b Fixed typo in Toy tutorial (second var e -> var f)
PiperOrigin-RevId: 282810649
2019-11-27 11:58:45 -08:00
Nicolas Vasilache 1fa8c8070b Implement Linalg to loops lowering as a pattern
This CL rewrites the linalg ops to loops transformations as patterns that can be targeted directly from Tablegen. Reliance on OpFolder is removed and to cope with it we introduce local folding patterns that are applied greedily.

PiperOrigin-RevId: 282765550
2019-11-27 07:32:13 -08:00
Aart Bik e2232fbcee [VectorOps] Refine BroadcastOp in VectorOps dialect
Since the second argument is always fully overwritten and
its shape is defined in the "to" clause, it is not needed.
Also renamed "into" to "to" now that the arg is dropped.

PiperOrigin-RevId: 282686475
2019-11-26 19:52:38 -08:00
Jacques Pienaar f27ceb7261 Add create method that takes equivalent of OperationState with NamedAttributeList
This method is close to creating an OperationState first and then unpacking it
but avoids creating the OperationState and takes a NamedAttributeList for
attributes rather than array of NamedAttribute (to enable reusing an already
created NamedAttributeList).

Reuse this new method via create that takes OperationState. I'll update inferReturnTypes in follow up to also take NamedAttributeList and so a build method that uses both inferReturnTypes and create can reuse the same list.

PiperOrigin-RevId: 282651642
2019-11-26 15:30:35 -08:00
Aart Bik cf97263cb8 [VectorOps] Add a BroadcastOp to the VectorOps dialect
PiperOrigin-RevId: 282643305
2019-11-26 14:43:31 -08:00
David Truby 18aec3e2e5 Add OpenMP dialect to the dialect registry
Closes tensorflow/mlir#244

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/244 from DavidTruby:openmp 30e2638ee678188575dd5aeb3f7fa51d93369f5f
PiperOrigin-RevId: 282607397
2019-11-26 11:38:20 -08:00
Mahesh Ravishankar 03620fa70a Misc changes to lowering to SPIR-V.
These changes to SPIR-V lowering were made while adding support for lowering
SubViewOp, but are not directly related to it.
- Change the lowering of MemRefType to
  !spv.ptr<!spv.struct<!spv.array<...>[offset]>, ..>
  This is consistent with the Vulkan spec.
- To enable testing a simple pattern of lowering functions is added to
  ConvertStandardToSPIRVPass. This is just used to convert the type of
  the arguments of the function. The added function lowering itself is
  not meant to be the way functions are eventually lowered into SPIR-V
  dialect.

PiperOrigin-RevId: 282589644
2019-11-26 10:11:34 -08:00
Nicolas Vasilache 9059cf392d Automated rollback of commit d60133f89b
PiperOrigin-RevId: 282574110
2019-11-26 08:47:48 -08:00
Nicolas Vasilache 109338085d Relax restriction on affine_apply dim and symbol operands
The affine_apply operation is currently "doubly" affine and conflates two things:
1. it applies an affine map to a list of values of type `index` that are defined as either dim or symbol
2. it restricts (and propagates constraints on) the provenance of dims and symbols to a small subset of ops for which more restrictive polyhedral constraints apply.

Point 2. is related to the ability to form so-called static control parts and is related to dependence analysis and legality of transformations.

Point 1. however is completely independent, the only local implication of dims and symbol for affine_apply is that dims compose while symbols concatenate as well as the structural constraint that dims may not be multiplied.

The properties of composition and canonicalization in affine_apply are more generally useful. This CL relaxes the verifier on affine_apply so it can be used more generally.

The relevant affine.for/if/load/store op verifiers already implement the dim and symbol checking.

See this thread for the related discussion: https://groups.google.com/a/tensorflow.org/g/mlir/c/HkwCbV8D9N0/m/8srUNrX6CAAJ

PiperOrigin-RevId: 282562517
2019-11-26 07:39:05 -08:00
Andrew Anderson a50f871e8d Some minor corrections and improvements to LangRef
Some productions in the LangRef were using undefined terminals and non-terminals, which have been added to the EBNF.
The dialect type and dialect attribute productions matched precisely the same structure and have been deduplicated.
The production for ssa-id was ambiguous but the fix is trivial (merging the leading '%') and has been applied.

Closes tensorflow/mlir#265

PiperOrigin-RevId: 282470892
2019-11-25 17:53:52 -08:00
Lei Zhang 13c6e419ca Add support for AttrSizedOperandSegments/AttrSizedResultSegments
Certain operations can have multiple variadic operands and their size
relationship is not always known statically. For such cases, we need
a per-op-instance specification to divide the operands into logical
groups or segments. This can be modeled by attributes.

This CL introduces C++ trait AttrSizedOperandSegments for operands and
AttrSizedResultSegments for results. The C++ trait just guarantees
such a size attribute has the correct type (1D vector) and values
(non-negative), etc. It serves as the basis for ODS sugaring so that
with ODS argument declarations we can further verify that the number of
elements matches the number of ODS-declared operands, and we can generate
handy getter methods.

PiperOrigin-RevId: 282467075
2019-11-25 17:26:50 -08:00
Nicolas Vasilache 174076a157 Use vector.InsertStridedSlice in Vector -> Vector unrolling
This CL uses the recently added op to finish the implementation of Vector -> Vector unrolling by replacing the "fake join op" by a series of InsertStridedSliceOp.

Test is updated accordingly

PiperOrigin-RevId: 282451126
2019-11-25 15:56:37 -08:00
Nicolas Vasilache 36469f7d2a Add a vector.InsertStridedSliceOp
This new op is the counterpart of vector.StridedSliceOp and will be used in the pattern rewrites for vector unrolling.

PiperOrigin-RevId: 282447414
2019-11-25 15:37:13 -08:00
MLIR Team 1012c492f0 Allow LLVM::ExtractElementOp to have non-i32 indices.
Also change the text format a bit, so that indices are enclosed in square brackets.

PiperOrigin-RevId: 282437095
2019-11-25 14:44:52 -08:00
Ben Vanik 38d7870ee5 Make std.divis and std.diviu support ElementsAttr folding.
PiperOrigin-RevId: 282434465
2019-11-25 14:31:43 -08:00
Mahesh Ravishankar f87b2fd41b NFC: Actually expose the implementation of createGPUToSPIRVLoweringPass.
A mismatch in the function declaration and function definition,
prevented the implementation of the createGPUToSPIRVLoweringPass from
being exposed.

PiperOrigin-RevId: 282419815
2019-11-25 13:19:53 -08:00
Mahesh Ravishankar 7fd46bf258 Add missing rule to generate SPIR-V ABI Attribute using tblgen to CMake.
PiperOrigin-RevId: 282415592
2019-11-25 12:57:16 -08:00
Andy Davis 8fc44a4d13 Update VectorContractionOp to take iterator types and index mapping attributes compatible with linalg ops.
PiperOrigin-RevId: 282412311
2019-11-25 12:40:00 -08:00
Christian Sigg d60133f89b Changing directory shortcut for CPU/GPU runner utils.
Moving cuda-runtime-wrappers.so into subdirectory to match libmlir_runner_utils.so.
Provide parent directory when running test and load .so from subdirectory.

PiperOrigin-RevId: 282410749
2019-11-25 12:30:54 -08:00
Lei Zhang 9b6e6cef68 De-duplicate EnumAttr overrides by defining defaults
EnumAttr should provide meaningful defaults so concrete instances
do not need to duplicate the fields.

PiperOrigin-RevId: 282398431
2019-11-25 11:29:55 -08:00
Mahesh Ravishankar bd485afda0 Introduce attributes that specify the final ABI for a spirv::ModuleOp.
To simplify the lowering into SPIR-V, while still respecting the ABI
requirements of SPIR-V/Vulkan, split the process into two
1) While lowering a function to SPIR-V (when the function is an entry
   point function), allow specifying attributes on arguments and
   function itself that describe the ABI of the function.
2) Add a pass that materializes the ABI described in the function.

Two attributes are needed.
1) Attribute on arguments of the entry point function that describe
   the descriptor_set, binding, storage class, etc, of the
   spv.globalVariable this argument will be replaced by
2) Attribute on function that specifies workgroup size, etc. (for now
   only workgroup size).

Add the pass -spirv-lower-abi-attrs to materialize the ABI described
by the attributes.

This change makes the SPIRVBasicTypeConverter class unnecessary and is
removed, further simplifying the SPIR-V lowering path.

PiperOrigin-RevId: 282387587
2019-11-25 11:19:56 -08:00
Mahesh Ravishankar 1ea231bd39 Allow memref_cast from static strides to dynamic strides.
Memref_cast supports cast from static shape to dynamic shape
memrefs. The same should be true for strides as well, i.e., a memref
with static strides can be cast to a memref with dynamic strides.

PiperOrigin-RevId: 282381862
2019-11-25 11:08:56 -08:00
Nicolas Vasilache 01145544aa Add vector.insertelement op
This is the counterpart of vector.extractelement op and has the same
limitations at the moment (static I64IntegerArrayAttr to express position).
This restriction will be lifted in the future.
LLVM lowering will be added in a subsequent commit.

PiperOrigin-RevId: 282365760
2019-11-25 08:47:15 -08:00
Alex Zinenko bf4692dc49 Introduce gpu.func
Introduce a new function-like operation to the GPU dialect to provide a
placeholder for the execution semantic description and to add support for GPU
memory hierarchy.  This aligns with the overall goal of the dialect to expose
the common abstraction layer for GPU devices, in particular by providing an
MLIR unit of semantics (i.e. an operation) for memory modeling.

This proposal has been discussed in the mailing list:
https://groups.google.com/a/tensorflow.org/d/msg/mlir/RfXNP7Hklsc/MBNN7KhjAgAJ
As decided, the "convergence" aspect of the execution model will be factored
out into a new discussion and therefore is not included in this commit. This
commit only introduces the operation but does not hook it up with the remaining
flow. The intention is to develop the new flow while keeping the old flow
operational and do the switch in a simple, separately reversible commit.

PiperOrigin-RevId: 282357599
2019-11-25 08:10:37 -08:00
Ben Vanik d2284f1f0b Support folding of StandardOps with DenseElementsAttr.
PiperOrigin-RevId: 282270243
2019-11-24 19:23:38 -08:00
Lei Zhang ae821fe626 NFC: Wire up DRR settings for SPIR-V canonicalization patterns
This CL added necessary files and settings for using DRR to
write SPIR-V canonicalization patterns and also converted the
patterns for spv.Bitcast and spv.LogicalNot.

PiperOrigin-RevId: 282132786
2019-11-23 06:59:23 -08:00
Lei Zhang aaafeac89b [spirv] NFC: rename test files and sort tests inside
PiperOrigin-RevId: 282132339
2019-11-23 06:58:38 -08:00
Uday Bondhugula 6a101671b0 Make isValidSymbol more powerful
The check in isValidSymbol, as far as a DimOp result went, checked if
the dim op was on a top-level memref. However, any alloc'ed, view, or
subview memref would be fine as long as the corresponding dimension of
that memref is either a static one or was in turn created using a valid
symbol in the case of dynamic dimensions.

Reported-by: Jose Gomez

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#252

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/252 from bondhugula:symbol 7b57dc394df9375e651f497231c6e4525a32a662
PiperOrigin-RevId: 282097114
2019-11-22 22:09:31 -08:00
River Riddle b8ee563449 NFC: Remove unnecessarily guarded tablegen includes.
Support for including a file multiple times was added in tablegen, removing the need for these extra guards. This is because we already insert c/c++ style header guards within each of the specific .td files.

PiperOrigin-RevId: 282076728
2019-11-22 18:01:57 -08:00
Nicolas Vasilache 9a62ec8c96 Fix Windows Build
PiperOrigin-RevId: 282048102
2019-11-22 15:07:31 -08:00
Denis Khalikov a5cda4763f [spirv] Add a canonicalizer for `spirv::LogicalNotOp`.
Add a canonicalizer for `spirv::LogicalNotOp`.
Converts:
* spv.LogicalNot(spv.IEqual(...)) -> spv.INotEqual(...)
* spv.LogicalNot(spv.INotEqual(...)) -> spv.IEqual(...)
* spv.LogicalNot(spv.LogicalEqual(...)) -> spv.LogicalNotEqual(...)
* spv.LogicalNot(spv.LogicalNotEqual(...)) -> spv.LogicalEqual(...)

Also moved the test for spv.IMul to arithemtic tests.

Closes tensorflow/mlir#256

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/256 from denis0x0D:sandbox/canon_logical_not 76ab5787b2c777f948c8978db061d99e76453d44
PiperOrigin-RevId: 282012356
2019-11-22 12:25:52 -08:00
Mahesh Ravishankar 6db8530c26 Add more canonicalizations for SubViewOp.
Depending on which of the offsets, sizes, or strides are constant, the
subview op can be canonicalized in different ways. Add such
canonicalizations, which generalize the existing approach of
canonicalizing subview op only if all of offsets, sizes and shapes are
constants.

PiperOrigin-RevId: 282010703
2019-11-22 12:14:18 -08:00
Lucy Fox 36e8fa84ab Small formatting fix in Tutorial Ch2.
PiperOrigin-RevId: 281998069
2019-11-22 11:04:36 -08:00
Jean-Michel Gorius 104777d8e6 Unify vector op names with other dialects.
Change vector op names from VectorFooOp to Vector_FooOp and from
vector::VectorFooOp to vector::FooOp.

Closes tensorflow/mlir#257

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/257 from Kayjukh:master dfc3a0e04114885aaec8740d5951d6984d6e1577
PiperOrigin-RevId: 281967461
2019-11-22 08:24:49 -08:00
Lucy Fox f7906c9211 Add more detail about locations in Chapter 2 of tutorial.
Resolves issue 241 (tensorflow/mlir#241).

PiperOrigin-RevId: 281867192
2019-11-21 17:26:02 -08:00
Nicolas Vasilache 6755543af5 Move Linalg Transforms that are actually Conversions - NFC
PiperOrigin-RevId: 281844602
2019-11-21 15:41:32 -08:00
River Riddle c35378003c Add support for using the ODS result names as the Asm result names for multi-result operations.
This changes the OpDefinitionsGen to automatically add the OpAsmOpInterface for operations with multiple result groups using the provided ODS names. We currently just limit the generation to multi-result ops as most single result operations don't have an interesting name (result/output/etc.). An example is shown below:
// The following operation:
def MyOp : ... {
  let results = (outs AnyType:$first, Variadic<AnyType>:$middle, AnyType);
}

// May now be printed as:
%first, %middle:2, %0 = "my.op" ...

PiperOrigin-RevId: 281834156
2019-11-21 14:55:46 -08:00
Christian Sigg d7c17195a4 Change CUDA tests to use print_memref.
Swap dimensions in all-reduce-op test.

PiperOrigin-RevId: 281791744
2019-11-21 11:26:36 -08:00
River Riddle c621e64150 NFC: Add wrappers around DenseIntElementsAttr/DenseFPElementsAttr::get to avoid the need to cast.
This avoids the need to cast back to the derived type when calling get, i.e. removes the need to do DenseIntElementsAttr::get(...).cast<DenseIntElementsAttr>().
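
For example (a sketch; `type` and `values` are assumed to be a suitable shaped type
and list of integers):

    // Before: get() effectively returned the base DenseElementsAttr, forcing a cast.
    auto before = DenseIntElementsAttr::get(type, values).cast<DenseIntElementsAttr>();

    // After: the derived-class wrapper returns DenseIntElementsAttr directly.
    DenseIntElementsAttr after = DenseIntElementsAttr::get(type, values);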

PiperOrigin-RevId: 281772163
2019-11-21 10:08:21 -08:00
Nicolas Vasilache 0abec2744c Fix OSS builds - NFC
PiperOrigin-RevId: 281757979
2019-11-21 09:07:51 -08:00
Nicolas Vasilache 663c2f731b Drop unused function - NFC
PiperOrigin-RevId: 281741923
2019-11-21 07:09:14 -08:00
Nicolas Vasilache 2c4985816f Split Linalg declarative patterns from specific test patterns - NFC
This will make it easier to scale out test patterns and build specific passes that do not interfere with independent testing.

PiperOrigin-RevId: 281736335
2019-11-21 06:40:17 -08:00
Benjamin Kramer c2741d4ea0 Add missing include after LLVM 049043b598
PiperOrigin-RevId: 281732683
2019-11-21 06:12:14 -08:00
Alex Zinenko b5af3784a6 Don't force newline before function attributes
Due to legacy reasons, a newline character followed by two spaces was always
inserted before the attributes of the function Op in pretty form. This breaks
formatting when functions are nested in some other operations. Don't print the
newline and just put the attributes on the same line, which is also more
consistent with module Op. Line breaking aware of indentation can be introduced
separately into the parser if deemed useful.
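
As an illustrative sketch (attribute name and value are made up for this example), a function with attributes now prints as:

```
func @example() attributes {foo.some_attr = 42 : i32} {
  return
}
```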

PiperOrigin-RevId: 281721793
2019-11-21 05:08:19 -08:00
Nicolas Vasilache 8bde4aa1bc Fix OSS build
Add include of ADT/SmallVector.h.
Fixes tensorflow/mlir#254.

PiperOrigin-RevId: 281721705
2019-11-21 04:45:55 -08:00
Aart Bik d05effb705 Fixed typo in 2-d tiled layout
PiperOrigin-RevId: 281671097
2019-11-20 21:32:51 -08:00
River Riddle 4ea92a0586 NFC: Use Region::getBlocks to fix build failure with drop_begin.
PiperOrigin-RevId: 281656603
2019-11-20 19:30:46 -08:00
River Riddle 57ea705f68 Add a document detailing operation traits, how to define them, and the current list.
Traits are an important piece of operation definition, but don't really have a good documentation presence at the moment.

PiperOrigin-RevId: 281649025
2019-11-20 18:40:48 -08:00
MLIR Team 75379a684f Correctly parse empty affine maps.
Previously the test case crashed or produced an error.

PiperOrigin-RevId: 281630540
2019-11-20 18:30:15 -08:00
River Riddle fafb708b9a Merge DCE and unreachable block elimination into a new utility 'simplifyRegions'.
This moves the different canonicalizations of regions into one place and invokes them in the fixed-point iteration of the canonicalizer.

PiperOrigin-RevId: 281617072
2019-11-20 15:53:19 -08:00
Andy Davis d6a70b31be Add VectorContractionOp to the VectorOps dialect.
PiperOrigin-RevId: 281605471
2019-11-20 14:53:57 -08:00
Mahesh Ravishankar 1145cebdab Verify subview op result has dynamic shape, when sizes are specified.
If the sizes are specified as arguments to the subview op, then the
shape must be dynamic as well.

PiperOrigin-RevId: 281591608
2019-11-20 14:16:05 -08:00
MLIR Team 84f4bbc5eb missing outer index %i in search_body
PiperOrigin-RevId: 281580028
2019-11-20 13:06:31 -08:00
Sean Silva e4f83c6c26 Add multi-level DCE pass.
This is a simple multi-level DCE pass that operates pretty generically on
the IR. Its key feature compared to the existing peephole dead op folding
that happens during canonicalization is being able to delete recursively
dead cycles of the use-def graph, including block arguments.
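A rough sketch of the kind of IR this can now clean up (names and ops are illustrative, not from the pass tests): the block argument %v and the addi below form a recursively dead cycle that peephole folding alone cannot remove.

```
func @recursively_dead_cycle(%cond: i1, %init: i32) {
  br ^loop(%init : i32)
^loop(%v: i32):
  // %next is only used by the back edge; the whole cycle is dead.
  %next = addi %v, %v : i32
  cond_br %cond, ^loop(%next : i32), ^exit
^exit:
  return
}
```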

PiperOrigin-RevId: 281568202
2019-11-20 12:55:10 -08:00
Mahesh Ravishankar 19212105dd Changes to SubViewOp to make it more amenable to canonicalization.
The current SubViewOp specification allows for either all offsets,
shape and stride to be dynamic or all of them to be static. There are
opportunities for more fine-grained canonicalization based on which of
these are static. For example, if the sizes are static, the result
memref is of static shape. The specification of SubViewOp is modified
to allow one or more of offsets, shapes and strides to be statically
specified. The verification is updated to ensure that the result type
of the subview op is consistent with which of these are static and
which are dynamic.

PiperOrigin-RevId: 281560457
2019-11-20 12:32:51 -08:00
Nicolas Vasilache fa14d4f6ab Implement unrolling of vector ops to finer-grained vector ops as a pattern.
This CL uses the pattern rewrite infrastructure to implement a simple VectorOps -> VectorOps legalization strategy to unroll coarse-grained vector operations into finer grained ones.
The transformation is written using local pattern rewrites to allow composition with other rewrites. It proceeds by iteratively introducing fake cast ops and then canonicalizing or lowering them away where appropriate.

This is an example of writing transformations as compositions of local pattern rewrites that should enable us to make them significantly more declarative.

PiperOrigin-RevId: 281555100
2019-11-20 11:49:36 -08:00
River Riddle eb418559ef Add a new OpAsmOpInterface to allow for ops to directly hook into the AsmPrinter.
This interface provides more fine-grained hooks into the AsmPrinter than the dialect interface, allowing operations to define the asm names to use for their results directly on the operations themselves. The hook is also expanded to enable defining named result "groups": the given callback is invoked with the specific result value that starts a
result "pack" and the name to give that pack. To signal that a
result pack should use the default naming scheme, a None can be passed
in instead of a name.

For example, if you have an operation that has four results and you want
to split these into three distinct groups you could do the following:

  setNameFn(getResult(0), "first_result");
  setNameFn(getResult(1), "middle_results");
  setNameFn(getResult(3), ""); // use the default numbering.

This would print the operation as follows:

  %first_result, %middle_results:2, %0 = "my.op" ...

PiperOrigin-RevId: 281546873
2019-11-20 10:45:45 -08:00
Nicolas Vasilache 3c055957de Add StridedMemRef<>::operator[] - NFC
This operator is used for internal debugging purposes.

PiperOrigin-RevId: 281544152
2019-11-20 10:17:13 -08:00
Alexander Belyaev 3825cc46ab Fix the comment to Region block iterators.
PiperOrigin-RevId: 281506693
2019-11-20 06:30:45 -08:00
Alexander Belyaev e50261657f Fix 'the the' typo.
PiperOrigin-RevId: 281501234
2019-11-20 05:38:14 -08:00
Stephan Herhut abb626686d Extend kernel outlining to also consider dim worth inlining.
PiperOrigin-RevId: 281483447
2019-11-20 02:59:35 -08:00
Eric Schweitz 88368a19aa Add some CMake rules for installing headers, mlir-tblgen, and mlir-opt
Closes tensorflow/mlir#246

PiperOrigin-RevId: 281442685
2019-11-19 21:05:16 -08:00
Christian Sigg f868adafee Make type and rank explicit in mcuMemHostRegister function.
Fix registered size of indirect MemRefType kernel arguments.

PiperOrigin-RevId: 281362940
2019-11-19 13:13:02 -08:00
Nicolas Vasilache ee95f6f259 Add VectorOps.StridedSliceOp
The `vector.strided_slice` op takes an n-D vector, a k-D `offsets` integer array attribute, a
k-D `sizes` integer array attribute and a k-D `strides` integer array attribute, and extracts
the n-D subvector at the proper offset.

Returns an n-D vector where the first k-D dimensions match the `sizes` attribute.
The returned subvector contains the elements starting at offset `offsets` and ending at
`offsets + sizes`.

Example:
```
  %1 = vector.strided_slice %0
      {offsets : [0, 2], sizes : [2, 4], strides : [1, 1]}:
    vector<4x8x16xf32> // returns a vector<2x4x16xf32>
```

This op will be useful for progressive lowering within the VectorOp dialect.

PiperOrigin-RevId: 281352749
2019-11-19 12:22:34 -08:00
Nicolas Vasilache 3732ba4def Fix pretty printer corner case in mlir_runner_utils.cpp.
In the particular case where the size of a memref dimension is 1, double printing would happen because printLast was called unconditionally.
This CL fixes the print and updates an incorrect test that should have caught this in the first place.

PiperOrigin-RevId: 281345142
2019-11-19 11:52:27 -08:00
Mehdi Amini c017704cd9 Add a note on commit messages to our developer guide
PiperOrigin-RevId: 281338738
2019-11-19 11:27:23 -08:00
Mehdi Amini d324c613ea Add mention to avoid cl::opt for MLIR passes in the developer guide
PiperOrigin-RevId: 281338448
2019-11-19 11:26:12 -08:00
Diego Caballero dd5a7cb488 Add getRemappedValue to ConversionPatternRewriter
This method is needed for N->1 conversion patterns to retrieve remapped
Values used in the original N operations.

Closes tensorflow/mlir#237

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/237 from dcaballe:dcaballe/getRemappedValue 1f64fadcf2b203f7b336ff0c5838b116ae3625db
PiperOrigin-RevId: 281321881
2019-11-19 11:09:39 -08:00
Eric Schweitz 06fb797b40 Add '*' and '?' and optional brace parse calls to the Parser
Closes tensorflow/mlir#245

PiperOrigin-RevId: 281321459
2019-11-19 10:14:15 -08:00
Alex Zinenko 8961d8e32f Change conversion CLI flag from -lower-to-llvm to -convert-std-to-llvm
The command-line flag name `lower-to-llvm` for the pass performing dialect
conversion from the Standard dialect to the LLVM dialect is misleading and
inconsistent with most of the conversion passes. It leads the user to believe
that there are no restrictions on what can be converted, while in fact only a
subset of the Standard dialect can be converted (with operations from other
dialects converted by separate passes). Use `convert-std-to-llvm` that better
reflects what the pass does and is consistent with most other conversions.

PiperOrigin-RevId: 281238797
2019-11-19 00:34:51 -08:00
Logan Chien 9110af5bec Add dialect-attribute-entry requirement to docs
This commit adds `dialect-attribute-entry` requirements on function arguments,
function results, and function attributes to the documentation.

PiperOrigin-RevId: 281227740
2019-11-18 22:50:25 -08:00
Manuel Freiberger 01fb8cf1da Fix the shape of the outcome in the example code.
The toy language uses element-wise multiplication. Transposing and multiplying
two tensors with shape <2, 3> gives a tensor with shape <3, 2>.

Closes tensorflow/mlir#227

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/227 from ombre5733:toy-ch1-docu-fix d79e5d3f9e3d5150a7ac8aac28b899df5a0d10a0
PiperOrigin-RevId: 281221671
2019-11-18 21:52:22 -08:00
Hanhan Wang c614c92fdc Support SPIR-V constant op to take DenseElementsAttr as input.
Iterates over each element to build the array. This includes a small refactoring to
combine the bool/int/float cases into one function, since they are similar; the only
difference is which function is called at the end.

PiperOrigin-RevId: 281210288
2019-11-18 20:02:05 -08:00
Tian Jin d8563c0e3a Use SmallVectorImpl instead of SmallVector for function parameters (NFC)
Closes tensorflow/mlir#247

PiperOrigin-RevId: 281185661
2019-11-18 16:59:03 -08:00
Alexander Belyaev 8c6a5233d5 Lower linalg.indexed_generic to loops.
PiperOrigin-RevId: 281169885
2019-11-18 16:55:15 -08:00
Uday Bondhugula 613ace94f2 Drop unnecessary dependences from mlir-translate
Closes tensorflow/mlir#243

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/243 from bondhugula:patch-2 fb682996efde001189414a4c7aa59ce42ace7831
PiperOrigin-RevId: 281167834
2019-11-18 16:44:43 -08:00
Andy Davis a6a287335d Fix SubViewOp stride calculation in constant folding.
Adds unit tests for subview offset and stride argument constant folding.

PiperOrigin-RevId: 281161041
2019-11-18 15:01:08 -08:00
River Riddle 9873a29817 Add a parseAttribute<AttrType> overload for the non-type case.
The variant that accepts a type will check that the parsed attribute is a valid instance of AttrType. The non-type variant would silently fail in this case, leading to garbage attribute values.

PiperOrigin-RevId: 281136528
2019-11-18 13:11:36 -08:00
Lei Zhang 1f475e316c Fix gen_spirv_dialect.py regarding 1D/2D/3D Dim symbol name
PiperOrigin-RevId: 281131561
2019-11-18 12:48:24 -08:00
Denis Khalikov 6c77e59bfd [spirv] Add a canonicalizer for BitcastOp.
Convert chained `spirv::BitcastOp` operations into
one `spirv::BitcastOp` operation.
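A rough before/after sketch of the fold (the particular types are chosen only for illustration):

```
// Before: two chained bitcasts.
%1 = spv.Bitcast %0 : vector<2xf32> to i64
%2 = spv.Bitcast %1 : i64 to f64

// After canonicalization: a single bitcast from the source to the final type.
%2 = spv.Bitcast %0 : vector<2xf32> to f64
```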

Closes tensorflow/mlir#238

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/238 from denis0x0D:sandbox/canon_bitcast 4352ed4f81b959ec92f849c599e733b62a99c010
PiperOrigin-RevId: 281129234
2019-11-18 12:37:00 -08:00
Jing Pu 563b5910a8 Also elide large array attribute in OpGraph Dump
PiperOrigin-RevId: 281114034
2019-11-18 11:27:43 -08:00
Alex Zinenko 062dd406b1 ConvertStandardToLLVM: replace assertion with graceful failure
The assertion was introduced in the early days of dialect conversion
infrastructure when we had the matching function separate from the rewriting
function. The infrastructure evolved to have a common matchAndRewrite function
and the separate matching function was dropped without changing the rewriting
that became matchAndRewrite. This has led to the assertion being triggered. Return
a matchFailure instead of failing an assertion on unsupported types.

Closes tensorflow/mlir#230

PiperOrigin-RevId: 281113741
2019-11-18 11:26:24 -08:00
Andy Davis 68a8da4a93 Fix Affine Loop Fusion test case reported on github.
This CL utilizes the more robust fusion feasibility analysis being built out in LoopFusionUtils, which will eventually be used to replace the current affine loop fusion pass.

PiperOrigin-RevId: 281112340
2019-11-18 11:20:37 -08:00
Nicolas Vasilache 9732bb533c Standardize all VectorOps class names to be prefixed by Vector - NFC
This improves consistency and will concretely avoid collisions between VectorExtractElementOp and ExtractElementOp when they are included in the same transforms / rewrites.

PiperOrigin-RevId: 281101588
2019-11-18 10:39:07 -08:00
Stephan Herhut f0f3b71d67 Implement folding of pattern dim(subview(_)[...][s1, ..., sn][...], i) -> si.
PiperOrigin-RevId: 281042016
2019-11-18 04:31:33 -08:00
Alex Zinenko b8dc3fd812 Rename CLI flags -lower-gpu-ops-to-*-ops to -convert-gpu-to-*
This makes the flags consistent with the naming scheme used elsewhere in the
codebase for dialect conversions.

PiperOrigin-RevId: 281027517
2019-11-18 02:43:10 -08:00
Jacques Pienaar 8ec002cbec Fix mismatched-tags warning
PiperOrigin-RevId: 280888290
2019-11-16 21:46:10 -08:00
Logan Chien 0fbac09473 Fix attribute dict syntax in the docs
This commit fixes several attribute dict syntax errors in the documentation.

PiperOrigin-RevId: 280726269
2019-11-15 13:41:05 -08:00
Denis Khalikov 68e48ba111 [spirv] Add bit ops
This CL added op definitions for a few bit operations:

* OpBitFieldInsert
* OpBitFieldSExtract
* OpBitFieldUExtract

Closes tensorflow/mlir#233

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/233 from denis0x0D:sandbox/bit_field_ops e7fd85b00d72d483d7992dc42b9cc4d673903455
PiperOrigin-RevId: 280691816
2019-11-15 11:03:19 -08:00
Alex Zinenko f90d5d703a Clarify that identity maps are discarded from the MemRef type
Update LangRef to explicitly mention the type canonicalization rule applied to
MemRef types: identity maps do not contribute to type identification.
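For instance (an illustrative sketch using a standalone map alias), the two parameter types below denote the same canonical type, since the identity layout map is discarded:

```
#id = (d0, d1) -> (d0, d1)

func @with_identity_map(%m: memref<4x4xf32, #id>)
func @without_map(%m: memref<4x4xf32>)
```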

PiperOrigin-RevId: 280684904
2019-11-15 10:28:58 -08:00
Lei Zhang a0986bf43d NFC: Convert CmpIPredicate in StandardOps to use EnumAttr
This turns several hand-written functions to auto-generated ones.

PiperOrigin-RevId: 280684326
2019-11-15 10:17:31 -08:00
Lucy Fox 9d7039b001 Modify tutorial and other documentation for consistency, clarity, and correctness.
PiperOrigin-RevId: 280678392
2019-11-15 09:49:25 -08:00
Jacques Pienaar b9fa45864d Use simpler highlighting textmate syntax
Changes from:
https://github-lightshow.herokuapp.com/?utf8=%E2%9C%93&scope=from-url&grammar_format=auto&grammar_url=https%3A%2F%2Fraw.githubusercontent.com%2Fjpienaar%2Fmlir-grammar%2Fmaster%2Fgrammars%2Fmlir.json&grammar_text=&code_source=from-url&code_url=https%3A%2F%2Fraw.githubusercontent.com%2Fjpienaar%2Fmlir-grammar%2Fmaster%2Fsample.mlir&code=

To:
https://github-lightshow.herokuapp.com/?utf8=%E2%9C%93&scope=from-url&grammar_format=auto&grammar_url=https%3A%2F%2Fraw.githubusercontent.com%2Fjpienaar%2Fmlir-grammar%2Fsimpler%2Fgrammars%2Fmlir.json&grammar_text=&code_source=from-url&code_url=https%3A%2F%2Fraw.githubusercontent.com%2Fjpienaar%2Fmlir-grammar%2Fmaster%2Fsample.mlir&code=

Which I think is an improvement.

PiperOrigin-RevId: 280674770
2019-11-15 09:29:40 -08:00
Nicolas Vasilache 615b9ccdf0 Fix build warnings
Delete unused constexpr ints in LowerToLLVMDialect.
Add (void)toStringRef for non-debug builds.

Fixes tensorflow/mlir#232.

PiperOrigin-RevId: 280671014
2019-11-15 09:06:08 -08:00
Lei Zhang 88843ae37c Use aggregate-parameter builder for ops having autogen type-deduction builder
Thus far DRR always invokes the separate-parameter builder (i.e., requiring
a separate parameter for each result-type/operand/attribute) for creating
ops, no matter whether we can auto-generate a builder with type-deduction
ability or not.

This CL changes the path for ops that we can auto-generate type-deduction
builders, i.e., with SameOperandsAndResultType/FirstAttrDerivedResultType
traits. Now they are going through an aggregate-parameter builder (i.e.,
requiring one parameter for all result-types/operands/attributes).

It is expected this approach will be more friendly for future shape inference
function autogeneration, and for calling those autogenerated shape inference functions without
excessive packing and repacking of operand/attribute lists.
Also, it would enable better support for creating ops with optional attributes,
because we are no longer required to provide an Attribute() as a placeholder for
an optional attribute.

PiperOrigin-RevId: 280654800
2019-11-15 07:33:54 -08:00
Nicolas Vasilache 264a4635c8 Templatize linalg::LowerToLoops - NFC
This modification will make it easy to plug in lowering of linalg ops to different types of loops (affine, loop.for and other future constructs).
This is purely NFC for now.

PiperOrigin-RevId: 280652186
2019-11-15 07:12:51 -08:00
Stephan Herhut 57bafc674e Mark std.view as no-sideeffect.
The same reasoning as for std.subview applies.

PiperOrigin-RevId: 280639308
2019-11-15 05:28:31 -08:00
Stephan Herhut 9c7bceb4fe Mark std.subview as no-sideeffect.
In essence, std.subview is just an abstract indexing transformation (somewhat
akin to a gep in llvm) and by itself has no effect. From a practical perspective
this helps, as it allows dead subview operations to be removed.

PiperOrigin-RevId: 280630046
2019-11-15 04:00:31 -08:00
MLIR Team 95d5d35958 Add more navigation to the MLIR toy tutorial.
This comes in the form of:
1. Missing links to next chapters.
2. Table of contents for each page.

PiperOrigin-RevId: 280619053
2019-11-15 02:17:38 -08:00
Lucy Fox 682b9b2b83 Expand on operation definition to clarify the difference between operation and op.
PiperOrigin-RevId: 280555742
2019-11-14 18:16:01 -08:00
Nicolas Vasilache 0b271b7dfe Refactor the LowerVectorTransfers pass to use the RewritePattern infra - NFC
This is step 1/n in refactoring infrastructure along the Vector dialect to make it ready for retargetability and composable progressive lowering.

PiperOrigin-RevId: 280529784
2019-11-14 15:40:07 -08:00
Mahesh Ravishankar a78bd84cf8 NFC: Refactor Dialect Conversion targeting SPIR-V.
Refactoring the conversion from StandardOps/GPU dialect to SPIR-V
dialect:
1) Move the SPIRVTypeConversion and SPIRVOpLowering class into SPIR-V
   dialect.
2) Add header files that expose functions to add patterns for the
   dialects to SPIR-V lowering, as well as a pass that does the
   dialect to SPIR-V lowering.
3) Make SPIRVOpLowering derive from OpLowering class.
PiperOrigin-RevId: 280486871
2019-11-14 12:34:54 -08:00
Andy Davis a4669cd3b4 Adds canonicalizer to SubViewOp which folds constants from base memref and operands into the subview result memref type.
Changes SubViewOp to support the zero-operands case, when offsets, strides and sizes are all constant.

PiperOrigin-RevId: 280485075
2019-11-14 12:23:04 -08:00
Alex Zinenko e0a0ac4b00 Add CMakeLists.txt for AffineToStandard conversion
PiperOrigin-RevId: 280470142
2019-11-14 11:28:29 -08:00
Lei Zhang 796ca609eb [ODS] Fix operation argument population to avoid crash
The `Operator` class keeps an `arguments` field, which contains pointers
to `operands` and `attributes` elements. Thus it must be populated after
`operands` and `attributes` are finalized so as to have stable pointers.
SmallVector may reallocate while new elements are still being added, which
would invalidate those pointers.

PiperOrigin-RevId: 280466896
2019-11-14 11:03:29 -08:00
Alex Zinenko 971b8dd4d8 Move Affine to Standard conversion to lib/Conversion
This is essentially a dialect conversion and conceptually belongs to
conversions.

PiperOrigin-RevId: 280460034
2019-11-14 10:35:21 -08:00
Alex Zinenko b34a861d5a Make positions of elements in MemRef descriptor private
Previous commits removed all uses of LLVMTypeConverter::k*PosInMemRefDescriptor
outside of the MemRefDescriptor class. These numbers are an implementation
detail and can be hidden under a layer of more semantic APIs.

PiperOrigin-RevId: 280442444
2019-11-14 09:17:38 -08:00
Alex Zinenko bf5916e7a4 Use MemRefDescriptor in Vector-to-LLVM conversion
Following up on the consolidation of MemRef descriptor conversion, update
Vector-to-LLVM conversion to use the helper class that abstracts away the
implementation details of the MemRef descriptor. This also makes the types of
the attributes in emitted llvm.insert/extractelement operations consistently
i64 instead of a mix of index and i64.

PiperOrigin-RevId: 280441451
2019-11-14 09:05:42 -08:00
MLIR Team 62d5b1de45 Adapt code to LLVM API updates.
PiperOrigin-RevId: 280431812
2019-11-14 08:27:19 -08:00
Nicolas Vasilache f2b6ae9991 Move VectorOps to Tablegen - (almost) NFC
This CL moves VectorOps to Tablegen and cleans up the implementation.

This is almost NFC but 2 changes occur:
  1. an interface change occurs in the padding value specification in vector_transfer_read:
     the value becomes non-optional. As a shortcut we currently use %f0 for all paddings.
     This should become an OpInterface for vectorization in the future.
  2. the return type of vector.type_cast is trivial and simplified to `memref<vector<...>>`

Relevant roundtrip and invalid tests that used to sit in core are moved to the vector dialect.

The op documentation is moved to the .td file.

PiperOrigin-RevId: 280430869
2019-11-14 08:15:23 -08:00
Alex Zinenko 7c28de4aef Use MemRefDescriptor in Linalg-to-LLVM conversion
Following up on the consolidation of MemRef descriptor conversion, update
Linalg-to-LLVM conversion to use the helper class that abstracts away the
implementation details of the MemRef descriptor. This required MemRefDescriptor
to become publicly visible. Since this conversion is heavily EDSC-based,
introduce locally an additional wrapper that uses builder and location pointed
to by the EDSC context while emitting descriptor manipulation operations.

PiperOrigin-RevId: 280429228
2019-11-14 08:04:10 -08:00
Lei Zhang a007d4395a [doc] Add debugging tips in ODS and DRR doc regarding mlir-tblgen
PiperOrigin-RevId: 280398956
2019-11-14 04:26:30 -08:00
Alex Zinenko ee5c2256ef Concentrate memref descriptor manipulation logic in one place
Memref descriptor is becoming increasingly complex. Memrefs are manipulated by
multiple standard instructions, each of which has a non-trivial lowering to the
LLVM dialect. This leads to verbose code that manipulates the descriptors
exposing the internals of insert/extractelement operations. Implement a wrapper
class that contains a memref descriptor and provides semantically named methods
that build the primitive IR operations instead.

PiperOrigin-RevId: 280371225
2019-11-14 00:49:12 -08:00
Jacques Pienaar d1c99e10d0 Do not emit aliases when printing local form
Expand local scope printing to skip printing aliases, as aliases are printed out at the top of a module and may not be part of the output generated by a local scope print.

PiperOrigin-RevId: 280278617
2019-11-13 14:21:49 -08:00
Nicolas Vasilache 8abda15b3f Replace explicit concatenation by llvm::concat
PiperOrigin-RevId: 280258938
2019-11-13 12:54:29 -08:00
Nicolas Vasilache 0bd6390b54 Deprecate linalg.subview in favor of std.subview
This CL uses the now standard std.subview in linalg.
Two shortcuts are currently taken to allow this port:
1. the type resulting from a view is currently degraded to fully dynamic to pass the SubViewOp verifier.
2. indexing into SubViewOp may access out of bounds since lowering to LLVM does not currently enforce it by construction.

These will be fixed in subsequent commits after discussions.

PiperOrigin-RevId: 280250129
2019-11-13 12:10:09 -08:00
Lucy Fox 40f0c76ee2 Fix glossary formatting.
PiperOrigin-RevId: 280236761
2019-11-13 11:09:42 -08:00
Sean Silva 486f2122cd Add FuncOp::eraseArgument
This is a quite complex operation that users are likely to attempt to write
themselves and get wrong (citation: users=me).

Ideally, we could pull this into FunctionLike, but for now, the
FunctionType rewriting makes it FuncOp specific. We would need some hook
for rewriting the function type (which for LLVM's func op, would need to
rewrite the underlying LLVM type).

PiperOrigin-RevId: 280234164
2019-11-13 10:59:55 -08:00
River Riddle d985c74883 NFC: Refactor block signature conversion to not erase the original arguments.
This refactors the implementation of block signature(type) conversion to not insert fake cast operations to perform the type conversion, but to instead create a new block containing the proper signature. This has the benefit of enabling the use of pre-computed analyses that rely on mapping values. It also leads to a much cleaner implementation overall. The major user facing change is that applySignatureConversion will now replace the entry block of the region, meaning that blocks generally shouldn't be cached over calls to applySignatureConversion.

PiperOrigin-RevId: 280226936
2019-11-13 10:27:53 -08:00
Lucy Fox f45852be6c Create and begin writing glossary.
This creates a central place in the documentation where MLIR-specific terminology is defined. See discussion on the MLIR forum (https://groups.google.com/a/tensorflow.org/g/mlir/c/5YXDSdu76Hk).

PiperOrigin-RevId: 280220365
2019-11-13 09:59:27 -08:00
River Riddle 6df8369941 Rename the current parseSymbolName to parseOptionalSymbolName
The current implementation silently fails if the '@' identifier isn't present, making it similar to the 'optional' parse methods. This change renames the current implementation to 'Optional' and adds a new 'parseSymbolName' that emits an error.

PiperOrigin-RevId: 280214610
2019-11-13 09:32:20 -08:00
Hanhan Wang 85d7fb3324 Make VariableOp instructions be in the first block in the function.
Since VariableOp is serialized during processBlock, we add two more fields,
`functionHeader` and `functionBody`, to collect instructions for a function.
After all the blocks have been processed, we append them to the `functions`.

Also, fix a bug in processGlobalVariableOp. The global variables should be
encoded into `typesGlobalValues`.

PiperOrigin-RevId: 280105366
2019-11-12 18:59:15 -08:00
Mahesh Ravishankar 2be53603e9 Add operations needed to support lowering of AffineExpr to SPIR-V.
Lowering of CmpIOp, DivISOp, RemISOp, SubIOp and SelectOp to SPIR-V
dialect enables the lowering of operations generated by AffineExpr ->
StandardOps conversion into the SPIR-V dialect.

PiperOrigin-RevId: 280039204
2019-11-12 13:20:06 -08:00
River Riddle 8082e3a687 NFC: Change DictionaryAttr::get(StringRef) to use binary search instead of a linear scan.
The elements of a DictionaryAttr are guaranteed to be sorted by name, so we can use a more efficient lookup when searching for an attribute.

PiperOrigin-RevId: 280035488
2019-11-12 13:04:14 -08:00
Mahesh Ravishankar 9d985141ef Make legality check in GPU->SPIR-V lowering of FuncOp kernel specific.
The existing check that sets FuncOp to be dynamically legal was just
checking that the types of the arguments are SPIR-V compatible. Since
the current conversion from GPU to SPIR-V does not handle lowering
non-kernel functions, change the legality check to verify that the
FuncOp has the gpu.kernel attribute and has void(void) return type.

PiperOrigin-RevId: 280032782
2019-11-12 12:52:53 -08:00
Lei Zhang b259c26eb0 Add support for OpPhi in loop header block
During deserialization, the loop header block will be moved into the
spv.loop's region. If the loop header block has block arguments,
we need to make sure they are correctly carried over to the block where
the new spv.loop resides.

During serialization, we need to make sure block arguments from the
spv.loop's entry block are not silently dropped.

PiperOrigin-RevId: 280021777
2019-11-12 12:00:28 -08:00
River Riddle 626e1fd95e Add an option to print an operation if a diagnostic is emitted on it
It is often helpful to inspect the operation that the error/warning/remark/etc. originated from, especially in the context of debugging or in the case of a verifier failure. This change adds an option 'mlir-print-op-on-diagnostic' that attaches the operation as a note to any diagnostic that is emitted on it via Operation::emit(Error|Warning|Remark). In the case of an error, the operation is printed in the generic form.

PiperOrigin-RevId: 280021438
2019-11-12 11:59:19 -08:00
Lei Zhang aa9dc9446e Expose an isSubclassOf() method on AttrConstraint
PiperOrigin-RevId: 280021408
2019-11-12 11:58:10 -08:00
Mahesh Ravishankar 104af84f4c Add Conversion to lower loop::ForOp to spirv::LoopOp.
loop::ForOp can be lowered to the structured control flow represented
by spirv::LoopOp by making the continue block of the spirv::LoopOp the
loop latch and the merge block the exit block. The resulting
spirv::LoopOp has a single back edge from the continue to header
block, and a single exit from header to merge.
PiperOrigin-RevId: 280015614
2019-11-12 11:33:27 -08:00
Lei Zhang f4aca03232 [spirv] Properly return when finding error in serialization
PiperOrigin-RevId: 280001339
2019-11-12 10:45:13 -08:00
River Riddle c4a0883a92 Add a printer flag to use local scope when printing IR.
This causes the AsmPrinter to use a local value numbering when printing the IR, allowing for the printer to be used safely in a local context, e.g. to ensure thread-safety when printing the IR. This means that the IR printing instrumentation can also be used during multi-threading when module-scope is disabled. Operation::dump and DiagnosticArgument(Operation*) are also updated to always print local scope, as this is the most common use case when debugging.

PiperOrigin-RevId: 279988203
2019-11-12 09:37:11 -08:00
Jacques Pienaar a6fac0aa29 Update textmate syntax file
Allow comments in more places and fix function params.

PiperOrigin-RevId: 279986797
2019-11-12 09:30:41 -08:00
Lei Zhang 0e2affdf59 Update outdated comment for NativeCodeCall
PiperOrigin-RevId: 279986050
2019-11-12 09:26:08 -08:00
Nicolas Vasilache 51de3f688e Add LLVM lowering of std.subview
A followup CL will replace usage of linalg.subview by std.subview.

PiperOrigin-RevId: 279961981
2019-11-12 07:23:18 -08:00