Commit Graph

2482 Commits

Alexander Belyaev eee9cbdeb7 Add IndexedGenericOp to Linalg.
PiperOrigin-RevId: 279013404
2019-11-06 22:36:25 -08:00
Sean Silva f6188b5b07 Replace some remnant uses of "inst" with "op".
PiperOrigin-RevId: 278961676
2019-11-06 16:09:23 -08:00
Nicolas Vasilache 7f6c6084b5 Add lowering of std.view to LLVM
This CL ports the lowering of linalg.view to the newly introduced std.view.
Differences in implementation relate to std.view having slightly different semantics:
1. a static or dynamic offset can be specified.
2. the size of the (contiguous) shape is passed instead of a range.
3. static size and stride information is extracted from the memref type rather than the range.

Besides these differences, lowering behaves the same.
A future CL will update Linalg to use this unified infrastructure.

PiperOrigin-RevId: 278948853
2019-11-06 15:06:16 -08:00
Ben Vanik 68bd355505 Adding an m_NonZero constant integer matcher.
This is useful for making it more readable to match cases where a non-zero value is required, such as the results of a constant comparison that are expected to be equal.
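
For illustration, a minimal sketch of how such a matcher might be used (assuming it lives next to m_Zero in mlir/IR/Matchers.h; Value is still used through pointers in this era of the codebase):

```c++
#include "mlir/IR/Matchers.h"
#include "mlir/IR/Value.h"

// Returns true if `value` is defined by a constant integer whose value is
// known to be non-zero.
static bool isKnownNonZero(mlir::Value *value) {
  return mlir::matchPattern(value, mlir::m_NonZero());
}
```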

PiperOrigin-RevId: 278932874
2019-11-06 14:01:55 -08:00
Andy Davis b5654d1311 Add ViewOp verification for dynamic strides, and address some comments from previous change.
PiperOrigin-RevId: 278903187
2019-11-06 11:25:54 -08:00
Andy Davis c38dca7f4b Add ViewOp to the StandardOps dialect, which casts a 1D/i8 element type memref type to an N-D memref type.
Proposed in RFC: https://groups.google.com/a/tensorflow.org/forum/#!searchin/mlir/std.view%7Csort:date/mlir/-wKHANzDNTg/4K6nUAp8AAAJ

Supports creating the N-D memref type with dynamic sizes and at a dynamic offset within the 1D base memref.
This change contains op definition/parsing/printing and tests. Follow up changes will handle constant shape/layout map folding and llvm lowering.

PiperOrigin-RevId: 278869990
2019-11-06 08:54:12 -08:00
River Riddle 146f7de50d NFC: Remove an extra space when printing the 'attributes' prefix before a dictionary.
PiperOrigin-RevId: 278795313
2019-11-05 23:39:52 -08:00
River Riddle 8e0f4860cd Add (parse|print)OptionalAttrDictWithKeyword hooks to simplify parsing attribute dictionaries with regions.
Many operations with regions add an additional 'attributes' prefix when printing the attribute dictionary to differentiate it from the region body. This leads to duplicated logic for detecting when to actually print the attribute dictionary.
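
As a rough sketch (the op being parsed is hypothetical), a custom parser can now delegate the keyword handling to the new hook:

```c++
static mlir::ParseResult parseMyRegionOp(mlir::OpAsmParser &parser,
                                         mlir::OperationState &result) {
  mlir::Region *body = result.addRegion();
  if (parser.parseRegion(*body, /*arguments=*/{}, /*argTypes=*/{}))
    return mlir::failure();
  // Parses an optional `attributes {...}` clause after the region body.
  return parser.parseOptionalAttrDictWithKeyword(result.attributes);
}
```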

PiperOrigin-RevId: 278747681
2019-11-05 17:58:48 -08:00
James Molloy 250a11ae0f [llvm] Allow GlobalOp to take a region for complex initializers
This allows GlobalOp to either take a value attribute (for simple constants) or a region that can
contain IR instructions (that must be constant-foldable) to create a ConstantExpr initializer.

Example:
  // A complex initializer is constructed with an initializer region.
  llvm.mlir.global constant @int_gep() : !llvm<"i32*"> {
    %0 = llvm.mlir.addressof @g2 : !llvm<"i32*">
    %1 = llvm.mlir.constant(2 : i32) : !llvm.i32
    %2 = llvm.getelementptr %0[%1] : (!llvm<"i32*">, !llvm.i32) -> !llvm<"i32*">
    llvm.return %2 : !llvm<"i32*">
  }
PiperOrigin-RevId: 278717836
2019-11-05 15:11:01 -08:00
James Molloy 6b534ecbcb [llvm] Add initial import of LLVM modules to mlir-translate
This adds an importer from LLVM IR or bitcode to the LLVM dialect. The importer is registered with mlir-translate.

Known issues exposed by this patch but not yet fixed:
  * Globals' initializers are attributes, which makes it impossible to represent a ConstantExpr. This will be fixed in a followup.
  * icmp returns i32 rather than i1.
  * select and a couple of other instructions aren't implemented.
  * llvm.cond_br takes its successors in a weird order.

The testing here is known to be non-exhaustive.

I'd appreciate feedback on where this functionality should live. It looks like the translator *from MLIR to LLVM* lives in Target/, but the SPIR-V deserializer lives in Dialect/ which is why I've put this here too.

PiperOrigin-RevId: 278711683
2019-11-05 14:41:38 -08:00
River Riddle 8fa9d82606 NFC: Rename parseOptionalAttributeDict -> parseOptionalAttrDict to match the name of the print method.
PiperOrigin-RevId: 278696668
2019-11-05 13:32:47 -08:00
River Riddle 2366561a39 Add a PatternRewriter hook to merge blocks, and use it to support for folding branches.
A pattern rewriter hook, mergeBlock, is added that allows for merging the operations of one block into the end of another. This is used to support a canonicalization pattern for branch operations that folds the branch when the successor has a single predecessor (the branch block).

Example:
  ^bb0:
    %c0_i32 = constant 0 : i32
    br ^bb1(%c0_i32 : i32)
  ^bb1(%x : i32):
    return %x : i32

becomes:
  ^bb0:
    %c0_i32 = constant 0 : i32
    return %c0_i32 : i32
PiperOrigin-RevId: 278677825
2019-11-05 11:57:38 -08:00
MLIR Team 1f43d0d000 [NVVM] Add mma.sync operation.
PiperOrigin-RevId: 278440547
2019-11-04 12:36:37 -08:00
River Riddle e4a912eb5a Update the SPV dialect type parser to use the methods on DialectAsmParser directly.
This simplifies the implementation quite a bit, and removes the need for explicit string munging. One change is made to some of the enum elements of SPV_DimAttr to ensure that they are proper identifiers; the string form is now prefixed with 'Dim'.

PiperOrigin-RevId: 278027132
2019-11-01 16:55:25 -07:00
Nicolas Vasilache 9fc1772776 Drop spurious debug spew.
PiperOrigin-RevId: 278023371
2019-11-01 16:32:02 -07:00
River Riddle 68cfc89a0d Refactor LinalgDialect::parseType to use the DialectAsmParser methods directly.
This simplifies the implementation, and removes the need to do explicit string manipulation. A utility method 'parseDimensionList' is added to the DialectAsmParser to simplify defining types and attributes that contain shapes.

PiperOrigin-RevId: 278020604
2019-11-01 16:14:10 -07:00
River Riddle e94a8bfca8 Refactor QuantOps TypeParser to use the DialectAsmParser methods directly.
This greatly simplifies the implementation and removes custom parser functionality. The necessary methods are added to the DialectAsmParser.

PiperOrigin-RevId: 278015983
2019-11-01 15:47:03 -07:00
River Riddle 2ba4d802e0 Remove the need for passing a location to parseAttribute/parseType.
Now that a proper parser is passed to these methods, there isn't a need to explicitly pass a source location. The source location can be recovered from the parser as necessary. This removes the need to explicitly decode an SMLoc when it isn't required, which can be expensive.

This requires adding some basic nesting support to the parser for supporting nested parsers to allow for remapping source locations of the nested parsers to the top level parser for accurate diagnostics. This is due to the fact that the attribute and type parsers use different source buffers than the top level parser, as they may be represented in string form.

PiperOrigin-RevId: 278014858
2019-11-01 15:40:16 -07:00
River Riddle 445cc3f6dd Add DialectAsmParser/Printer classes to simplify dialect attribute and type parsing.
These classes are functionally similar to the OpAsmParser/Printer classes and provide hooks for parsing attributes/tokens/types/etc. This change merely sets up the base infrastructure and updates the parser hooks, followups will add hooks as needed to simplify existing handrolled dialect parsers.

This has various different benefits:
*) Attribute/Type parsing is much simpler to define.
*) Dialect attributes/types that contain other attributes/types can now use aliases.
*) It provides a 'spec' that we may use in the future to auto-generate parsers/printers.
*) Error messages emitted by attribute/type parsers can provide character exact locations rather than "beginning of the string".
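
For illustration, a rough sketch of a dialect type parser written against the new hooks (the `buffer<4x?xf32>` syntax and BufferType class are hypothetical, and the hook names follow the ones mentioned in these commits):

```c++
mlir::Type MyDialect::parseType(mlir::DialectAsmParser &parser) const {
  // Parses a hypothetical type of the form `buffer<4x?xf32>`.
  llvm::SmallVector<int64_t, 4> shape;
  mlir::Type elementType;
  if (parser.parseKeyword("buffer") || parser.parseLess() ||
      parser.parseDimensionList(shape) || parser.parseType(elementType) ||
      parser.parseGreater())
    return mlir::Type();
  return BufferType::get(getContext(), shape, elementType);
}
```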

PiperOrigin-RevId: 278005322
2019-11-01 14:48:16 -07:00
Lei Zhang 2fa865719b Move BitEnumAttr from SPIRVBase.td to OpBase.td
BitEnumAttr is a mechanism for modelling attributes whose value is
a bitfield. It should not be scoped to the SPIR-V dialect and can
be used by other dialects too.

This CL is mostly shuffling code around and adding tests and docs.
Functionality changes are:

* Fixed to use `getZExtValue()` instead of `getSExtValue()` when
  getting the value from the underlying IntegerAttr for a case.
* Changed to auto-detect whether there is a case whose value is
  all bits unset (i.e., zero). If so, handle it specially in all
  helper methods.

PiperOrigin-RevId: 277964926
2019-11-01 11:18:19 -07:00
Mahesh Ravishankar 9cbbd8f4df Support lowering of imperfectly nested loops into GPU dialect.
The current lowering of loops to GPU only supports lowering of loop
nests where the loops mapped to workgroups and workitems are perfectly
nested. Here a new lowering is added to handle imperfectly nested loop
bodies with the following properties:
1) The loops partitioned to workgroups are perfectly nested.
2) The loop body of the innermost loop partitioned to workgroups can
contain one or more loop nests that are to be partitioned across
workitems. Each individual loop nest partitioned to workitems should
also be perfectly nested.
3) The number of workgroups and workitems is not deduced from the
loop bounds but is passed in by the caller of the lowering as values.
4) For statements within the perfectly nested loop nest partitioned
across workgroups that are not loops, it is valid to have all threads
execute that statement. This is NOT verified.

PiperOrigin-RevId: 277958868
2019-11-01 10:52:06 -07:00
Nicolas Vasilache bd94a10c02 Add Linalg pattern for producer-consumer fusion
This CL adds a simple pattern for specifying producer-consumer fusion on Linalg operations.

Implementing such an extension reveals some interesting properties.
Since Linalg operates on a buffer abstraction, the output buffers are specified as in/out parameters to the ops. As a consequence, there are no SSA use-def chains and one cannot specify complex dag input patterns with the current infrastructure.

Instead this CL uses constraints based on the existing linalg dependence analysis to focus the pattern and refine patterns based on the type of op that last wrote in a buffer.

This is a very local property and is less powerful than the generic dag specification based on SSA use-def chains.

This will be generalized in the future.

PiperOrigin-RevId: 277931503
2019-11-01 08:30:38 -07:00
Lei Zhang 7432234f3c NFC: Use #ifndef in various .td files instead of #ifdef and #else
Upstream LLVM gained support for #ifndef with https://reviews.llvm.org/D61888

This is changed mechanically via the following command:

find . -name "*.td" -exec sed -i -e ':a' -e 'N' -e '$!ba' -e 's/#ifdef \([A-Z_]*\)\n#else/#ifndef \1/g' {} \;

PiperOrigin-RevId: 277789427
2019-10-31 13:29:50 -07:00
Alex Zinenko f9a4d3bdb0 LinalgDependenceGraph: add const modifiers to accessors
MLIR const-correctness policy is to avoid having `const` on IR objects.
LinalgDependenceGraph is not an IR object but an auxiliary data structure.
Furthermore, it is not updated once constructed unlike IR objects. Add const
qualifiers to get* and find* methods of LinalgDependenceGraph since they are
not modifying the graph. This allows transformation functions that require the
dependence graph to take it by const-reference, clearly indicating that they
are not modifying it (and that the graph may have to be recomputed after the
transformation).

PiperOrigin-RevId: 277731608
2019-10-31 08:59:12 -07:00
Denis Khalikov d423d4a338 [spirv] Add cast operations
This CL added op definitions for a few cast operations:

* OpConvertFToU
* OpConvertFToS
* OpConvertSToF
* OpConvertUToF
* OpUConvert
* OpSConvert
* OpFConvert

Also moved the definition of spv.Bitcast to the new file.

Closes tensorflow/mlir#208 and tensorflow/mlir#174

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/208 from denis0x0D:sandbox/cast_ops 79bc9b37398aafddee6cf6beb301807988fe67f9
PiperOrigin-RevId: 277587891
2019-10-30 14:53:04 -07:00
Jing Pu 736ad2061c Dump op location in createPrintOpGraphPass for easier debugging.
PiperOrigin-RevId: 277546527
2019-10-30 11:22:22 -07:00
River Riddle a32f0dcb5d Add support to GreedyPatternRewriter for erasing unreachable blocks.
Rewrite patterns may make modifications to the CFG, including dropping edges between blocks. This change adds a simple unreachable block elimination run at the end of each iteration to ensure that the CFG remains valid.

PiperOrigin-RevId: 277545805
2019-10-30 11:19:24 -07:00
River Riddle 0568e952b6 Add a utility accessor 'has_single_element' for ranges.
This provides an easy way to check whether a range contains exactly one element.
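
A minimal sketch of the kind of check this enables (assuming the helper lives alongside MLIR's other range utilities in mlir/Support/STLExtras.h):

```c++
#include "mlir/IR/Block.h"
#include "mlir/Support/STLExtras.h"
#include "llvm/ADT/iterator_range.h"

// True when `block` has exactly one predecessor.
static bool hasSinglePredecessor(mlir::Block &block) {
  return mlir::has_single_element(
      llvm::make_range(block.pred_begin(), block.pred_end()));
}
```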

PiperOrigin-RevId: 277544647
2019-10-30 11:14:30 -07:00
Lei Zhang cb40e36d3b Fix segfault when no symbol is given to a constraint operand
This fixed the segfault when we see the following pattern:
  Pat<(...), (...), [(... 1, 2, 3), ...]>

PiperOrigin-RevId: 277544300
2019-10-30 11:12:57 -07:00
Nicolas Vasilache 05a5a41416 Add basic support for declarative Linalg transformations
Linalg ops provide a good anchor for pattern matching/rewriting transformations.
This CL adds a simple example of how multi-level tiling may be specified by attaching a simple StringAttr to ops as they are transformed so we can easily specify partial lowering to control transformation application.

This is a first stab at taking advantage of higher-level information contained in Linalg ops and will evolve in the future.

PiperOrigin-RevId: 277497958
2019-10-30 07:12:33 -07:00
Lei Zhang 80213ba5f0 [spirv] Fix gen_spirv_dialect.py and add spv.Unreachable
This CL fixed gen_spirv_dialect.py to support nested delimiters when
chunking existing ODS entries in .td files and to allow ops without
correspondence in the spec. This is needed to pull in the definition
of OpUnreachable.

PiperOrigin-RevId: 277486465
2019-10-30 05:41:18 -07:00
Diego Caballero c87c7f5732 Bugfix: Keep worklistMap in sync with worklist in GreedyPatternRewriter
When we removed a pattern, we removed it from the worklist but not from the
worklistMap. Then, when we tried to add a new pattern on the same Operation
again, the pattern wasn't added since it already existed in the
worklistMap (but not in the worklist).

Closes tensorflow/mlir#211

PiperOrigin-RevId: 277319669
2019-10-29 10:58:31 -07:00
Lei Zhang 8656af1e82 [spirv] Use LLVM graph traversal utility for PrettyBlockOrderVisitor
This removes a bunch of special tailored DFS code in favor of the common
LLVM utility. Besides, we avoid recursion on the system stack given that
llvm::depth_first_ext is iterator based and maintains its own stack.
PiperOrigin-RevId: 277272961
2019-10-29 07:04:00 -07:00
Mahesh Ravishankar 61225d678e Add a convenient operation build method for spirv::SelectOp
The SelectOp always has the same result type as its true/false
value. Add a builder method that uses the operand type to get the
result type.

PiperOrigin-RevId: 277217978
2019-10-28 23:04:36 -07:00
Lei Zhang ca2538e9a7 [spirv] Support OpPhi using block arguments
This CL adds another control flow instruction in SPIR-V: OpPhi.
It is modelled as block arguments to be idiomatic with MLIR.
See the rationale.md doc for "Block Arguments vs PHI nodes".
Serialization and deserialization is updated to convert between
block arguments and SPIR-V OpPhi instructions.

PiperOrigin-RevId: 277161545
2019-10-28 15:58:42 -07:00
Sean Silva 66ec24d833 Parse locations in parseGenericOperation
For ops that recursively re-enter the parser to parse an operation (such as
ops with a "wraps" pretty form), this ensures that the wrapped op will parse
its location, which can then be used for the locations of the wrapping op
and any other implicit ops.

PiperOrigin-RevId: 277152636
2019-10-28 15:11:26 -07:00
Nicolas Vasilache 98226e62ec Standardize Linalg transformations to take an OpBuilder and an OperationFolder - NFC
This will be used to specify declarative Linalg transformations in a followup CL. In particular, the PatternRewrite mechanism does not allow folding and has its own way of tracking erasure.

PiperOrigin-RevId: 277149158
2019-10-28 14:56:20 -07:00
River Riddle 2f4d0c085a Add support for marking an operation as recursively legal.
In some cases, it may be desirable to mark entire regions of operations as legal. This provides an additional granularity of context to the concept of "legal". The `ConversionTarget` supports marking operations, that were previously added as `Legal` or `Dynamic`, as `recursively` legal. Recursive legality means that if an operation instance is legal, either statically or dynamically, all of the operations nested within are also considered legal. An operation can be marked via `markOpRecursivelyLegal<>`:

```c++
ConversionTarget &target = ...;

/// The operation must first be marked as `Legal` or `Dynamic`.
target.addLegalOp<MyOp>(...);
target.addDynamicallyLegalOp<MySecondOp>(...);

/// Mark the operation as always recursively legal.
target.markOpRecursivelyLegal<MyOp>();
/// Mark optionally with a callback to allow selective marking.
target.markOpRecursivelyLegal<MyOp, MySecondOp>([](Operation *op) { ... });
/// Mark optionally with a callback to allow selective marking.
target.markOpRecursivelyLegal<MyOp>([](MyOp op) { ... });
```

PiperOrigin-RevId: 277086382
2019-10-28 10:04:34 -07:00
Christian Sigg e38fe4a7af Print reason why dynamic library could not be loaded during execution.
PiperOrigin-RevId: 277037138
2019-10-28 04:25:15 -07:00
Alexander Belyaev 663f9e0c9f Lookup function declaration in SymbolTable not ModuleOp.
PiperOrigin-RevId: 277033167
2019-10-28 03:45:53 -07:00
Alexander Belyaev 780a108d31 Fix include guards and add tests for OpToFuncCallLowering.
PiperOrigin-RevId: 276859463
2019-10-26 08:21:36 -07:00
River Riddle b69e8ee049 Add support for parsing multiple result name groups.
This allows for parsing things like:

%name_1, %name_2:5, %name_3:2 = "my.op" ...

This is useful for operations that have groups of variadic result values. The
total number of results is expected to match the number of results defined by
the operation.

PiperOrigin-RevId: 276703280
2019-10-25 09:34:02 -07:00
Denis Khalikov dd2e444325 [spirv] AccessChainOp canonicalization.
Combine chained `spirv::AccessChainOp` operations into one
`spirv::AccessChainOp` operation.

Closes tensorflow/mlir#198

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/198 from denis0x0D:sandbox/canon_access_chain 0cb87955a85511071143d62637ff939d0dabc2bd
PiperOrigin-RevId: 276609345
2019-10-24 18:41:34 -07:00
River Riddle 2b61b7979e Convert the Canonicalize and CSE passes to generic Operation Passes.
This allows for them to be used on other non-function, or even other function-like, operations. The algorithms are already generic, so this is simply changing the derived pass type. The majority of this change is just ensuring that the nesting of these passes remains the same, as the pass manager won't auto-nest them anymore.

PiperOrigin-RevId: 276573038
2019-10-24 15:01:09 -07:00
River Riddle ef43b56538 Add support for replacing all uses of a symbol.
This requires reconstructing the attribute dictionary of each operation containing a use.

PiperOrigin-RevId: 276520544
2019-10-24 10:47:27 -07:00
Mehdi Amini f56d8187fa Add missing dependency on MLIRIR on MLIREDSCInterface
MLIRIR includes generated headers for interfaces; including these headers
requires an extra dependency to ensure they are generated before we attempt
to build MLIREDSCInterface.

PiperOrigin-RevId: 276518255
2019-10-24 10:37:56 -07:00
Alexander Belyaev d2ce435dba Add custom lowering of ExpOp for NVVM and ROCM.
PiperOrigin-RevId: 276440911
2019-10-24 01:41:57 -07:00
Alexander Belyaev 9a18ff3d62 Wrap ODS to 80 lines and remove const qualifier for local `int` variable (NFC)
This addresses post-submit comments on 00d2a37e32

PiperOrigin-RevId: 276419770
2019-10-23 22:30:33 -07:00
River Riddle 21ee4e987f Add @below and @above directives to verify-diagnostics.
This simplifies defining expected-* directives when there are multiple that apply to the next or previous line. @below applies the directive to the next non-designator line, i.e. the next line that does not contain an expected-* designator. @above applies to the previous non-designator line.

Examples:

// Expect an error on the next line that does not contain a designator.
// expected-remark@below {{remark on function below}}
// expected-remark@below {{another remark on function below}}
func @bar(%a : f32)

// Expect an error on the previous line that does not contain a designator.
func @baz(%a : f32)
// expected-remark@above {{remark on function above}}
// expected-remark@above {{another remark on function above}}

PiperOrigin-RevId: 276369085
2019-10-23 15:56:29 -07:00
Alex Zinenko edffbbcdae Fix "set-but-unused" warning in DialectConversion
The variable in question is only used in an assertion,
leading to a warning in opt builds.

PiperOrigin-RevId: 276352259
2019-10-23 14:32:13 -07:00
Alex Zinenko 0d33703f2a Drop MemRefUtils from the ExecutionEngine
The ExecutionEngine was updated recently to only take the LLVM dialect as
input. Memrefs are no longer expected in the signature of the entry point
function by the executor so there is no need to allocate and free them. The
code in MemRefUtils is therefore dead and furthermore out of sync with the
recent evolution of memref type to support strides. Drop it.

PiperOrigin-RevId: 276272302
2019-10-23 07:43:06 -07:00
Uday Bondhugula ad6925f479 Update loop.for verifier message
fix: nonnegative -> positive

Closes tensorflow/mlir#206

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/206 from bondhugula:bondhugula-patch-1 9a47ca7dfd230180a9df33e9a64b33d02252d30a
PiperOrigin-RevId: 276060885
2019-10-22 07:34:56 -07:00
River Riddle 057ee97c73 NFC: Add support for parsing attributes programmatically via mlir::parseAttribute.
This matches the behavior of the public mlir::parseType, and even uses the internal implementation.
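
A minimal sketch, assuming the new entry point mirrors the (StringRef, MLIRContext *) form of mlir::parseType:

```c++
#include "mlir/IR/Attributes.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/Parser.h"

int main() {
  mlir::MLIRContext context;
  // Parses a standard integer attribute from its textual form; returns a null
  // Attribute on failure.
  mlir::Attribute attr = mlir::parseAttribute("42 : i32", &context);
  return attr ? 0 : 1;
}
```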

PiperOrigin-RevId: 275989777
2019-10-21 21:34:51 -07:00
Lei Zhang 020f9eb68c [DRR] Allow interleaved operands and attributes
Previously DRR assumed attributes appear after operands. This was the
previous requirement in ODS, but that changed some time ago. Fix
DRR to also support interleaved operands and attributes.

PiperOrigin-RevId: 275983485
2019-10-21 20:48:17 -07:00
Lei Zhang d9fe892e42 [spirv] Allow block arguments on spv.Branch(Conditional)
We will use block arguments as the way to model SPIR-V OpPhi in
the SPIR-V dialect.

This CL also adds a few useful helper methods to both ops to
get the block arguments.

Also added tests for branch weight (de)serialization.

PiperOrigin-RevId: 275960797
2019-10-21 17:32:00 -07:00
Christian Sigg b74af4aa5c Unify GPU op definition names with other dialects.
Rename GPU op names from gpu_Foo to GPU_FooOp.

PiperOrigin-RevId: 275882232
2019-10-21 11:10:56 -07:00
River Riddle 03d7be2aca NFC: Elide the value of a UnitAttr within nested attribute dictionaries.
This matches the behavior of the top level attribute dictionary.

PiperOrigin-RevId: 275879828
2019-10-21 11:02:07 -07:00
River Riddle 9ac459e871 Add a Symbol trait to simplify defining operations that represent symbols.
This trait provides accessors for the name, symbol use list methods, and verification, with more to be added.

PiperOrigin-RevId: 275864554
2019-10-21 09:58:59 -07:00
Kazuaki Ishizaki 8bfedb3ca5 Fix minor spelling tweaks (NFC)
Closes tensorflow/mlir#177

PiperOrigin-RevId: 275692653
2019-10-20 00:11:34 -07:00
Jacques Pienaar 305dafd3b1 Add missing include to StringMap in Verifier and DialectConversion.
PiperOrigin-RevId: 275656416
2019-10-19 13:39:02 -07:00
Mehdi Amini 5b1345ff76 Add missing include to llvm Allocator.h
This header is not self-contained otherwise.

PiperOrigin-RevId: 275651582
2019-10-19 12:11:01 -07:00
Geoffrey Martin-Noble bc577eaf44 Use new eraseOp instead of replaceOp with empty values
PiperOrigin-RevId: 275631166
2019-10-19 06:04:18 -07:00
Christian Sigg c3e56cd12c Get active source lane predicate from shuffle instruction.
nvvm.shfl.sync.bfly optionally returns a predicate indicating whether the source lane was active. Support for this was added to clang in https://reviews.llvm.org/D68892.

Add an optional 'pred' unit attribute to the instruction to return this predicate. Specify this attribute in the partial warp reduction so we don't need to manually compute the predicate.

PiperOrigin-RevId: 275616564
2019-10-19 01:53:25 -07:00
River Riddle 5f6bdd144a NFC: Cleanup the implementation of walkSymbolUses.
Refactor the implementation to be much cleaner by adding a `make_second_range` utility to walk the `second` value of a range of pairs.
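
A minimal sketch of the idea behind the utility (assuming it sits with MLIR's other range helpers and simply maps each pair to its `second`):

```c++
#include "mlir/Support/STLExtras.h"
#include <map>
#include <string>
#include <vector>

// Collects only the mapped values of a string-keyed map.
std::vector<int> collectValues(const std::map<std::string, int> &m) {
  std::vector<int> values;
  for (int v : mlir::make_second_range(m))
    values.push_back(v);
  return values;
}
```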

PiperOrigin-RevId: 275598985
2019-10-18 21:29:15 -07:00
Lei Zhang c5b9fefddc NFC: Rename SPIR-V serializer find*ID() to get*ID() to be consistent
We use get*() in the deserializer and other places across the codebase.

PiperOrigin-RevId: 275582390
2019-10-18 18:16:05 -07:00
Sean Silva 9c9a7e9268 Add support for function result attributes.
This allows dialect-specific attributes to be attached to func results (or, more specifically, to the results of FunctionLike ops).

For example:

```
func @f() -> (i32 {my_dialect.some_attr = 3})
```

This attaches my_dialect.some_attr with value 3 to the first result of func @f.

Another more complex example:

```
func @g() -> (i32, f32 {my_dialect.some_attr = "foo", other_dialect.some_other_attr = [1,2,3]}, i1)
```

Here, the second result has two attributes attached.

PiperOrigin-RevId: 275564165
2019-10-18 16:03:28 -07:00
Nicolas Vasilache 9e7e297da3 Lower vector transfer ops to loop.for operations.
This allows mixing linalg operations with vector transfer operations (with additional modifications to affine ops) and is a step towards solving tensorflow/mlir#189.

PiperOrigin-RevId: 275543361
2019-10-18 14:10:10 -07:00
Nicolas Vasilache 2823b68580 Implement lowering of VectorTypeCastOp to LLVM
A VectorTypeCastOp can only be used to lower between statically sized contiguous memrefs of scalar and matching vector type. The sizes and strides are thus fully static and easy to determine.

A relevant test is added.

This is a step towards solving tensorflow/mlir#189.

PiperOrigin-RevId: 275538981
2019-10-18 14:00:06 -07:00
Nicolas Vasilache 151e7e61e8 Automated rollback of commit 575405f4d6
PiperOrigin-RevId: 275461067
2019-10-18 06:45:06 -07:00
Nicolas Vasilache 3e3ab38021 Fix OSS target name GPUtoNVVMTransforms -> MLIRGPUtoNVVMTransforms
This unbreaks the `cmake -G Ninja ../llvm -DLLVM_BUILD_EXAMPLES=ON -DLLVM_TARGETS_TO_BUILD="host"` in my local OSS build.

PiperOrigin-RevId: 275452330
2019-10-18 05:22:38 -07:00
Stephan Herhut 3622e1833f Use StrEnumAttr for gpu.allreduce op instead of StringAttr to better encode constraints.
PiperOrigin-RevId: 275448372
2019-10-18 04:44:48 -07:00
Christian Sigg fe0ee32da5 Add gpu.barrier op to synchronize invocations of a local workgroup.
Adding gen table for rewrite patterns from GPU to NVVM dialect.

Copy missing op documentation from GPUOps.td to GPU.md.

PiperOrigin-RevId: 275419588
2019-10-18 00:30:44 -07:00
River Riddle 2acc220f17 NFC: Remove trivial builder get methods.
These don't add any value, and some are even more restrictive than the respective static 'get' method.

PiperOrigin-RevId: 275391240
2019-10-17 20:08:34 -07:00
River Riddle 575405f4d6 Automated rollback of commit b65c8bb5d6
PiperOrigin-RevId: 275370861
2019-10-17 17:11:39 -07:00
Geoffrey Martin-Noble 6090643877 Introduce a wrapper around ConversionPattern that operates on the derived class
Analogous to OpRewritePattern, this makes writing conversion patterns more convenient.
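
A rough sketch of a pattern built on the wrapper, under the API of this era where converted operands arrive as ArrayRef<Value *> and patterns return PatternMatchResult (the FooOp/BarOp names are hypothetical):

```c++
struct FooToBarLowering : public mlir::OpConversionPattern<FooOp> {
  using mlir::OpConversionPattern<FooOp>::OpConversionPattern;

  mlir::PatternMatchResult
  matchAndRewrite(FooOp op, llvm::ArrayRef<mlir::Value *> operands,
                  mlir::ConversionPatternRewriter &rewriter) const override {
    // Use the already-converted operands rather than the op's original ones.
    rewriter.replaceOpWithNewOp<BarOp>(op, op.getType(), operands);
    return matchSuccess();
  }
};
```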

PiperOrigin-RevId: 275349854
2019-10-17 15:30:38 -07:00
Nicolas Vasilache b65c8bb5d6 Add EDSC support for loop.for operations
This CL adds support for loop.for operations in EDSC and adds a test.
This will be used in a followup commit to implement lowering of vector_transfer ops so that it works more generally and is not subject to affine constraints.

PiperOrigin-RevId: 275349796
2019-10-17 15:18:34 -07:00
Nicolas Vasilache 5b03e692f6 Decouple Linalg promotion from Linalg tiling - NFC
This CL creates a new Linalg promotion pass that operates on SubViewOp and decouples it from Linalg tiling. This is mostly moving code around.

PiperOrigin-RevId: 275329213
2019-10-17 13:41:17 -07:00
Denis Khalikov a560505d1a [spirv] Add a canonicalization pattern for spv.selection.
Add a canonicalization pattern for spv.selection operation.
Convert spv.selection operation to spv.Select based on
simple pattern.

Closes tensorflow/mlir#183

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/183 from denis0x0D:sandbox/canon_select 43d04d923272dd60b9da39f70bdbc51a5168db62
PiperOrigin-RevId: 275312748
2019-10-17 12:36:47 -07:00
Lei Zhang 057dc41bf6 Allow '_' when pretty printing dialect symbols
'_' is used frequently enough as the separator of words in symbols.
We should allow it in dialect symbols when considering pretty printing.

Also updated LangRef.md regarding pretty form.

PiperOrigin-RevId: 275312494
2019-10-17 12:24:18 -07:00
Nicolas Vasilache 10039d04e2 Rename LoopNestBuilder to AffineLoopNestBuilder - NFC
PiperOrigin-RevId: 275310747
2019-10-17 12:13:59 -07:00
Mehdi Amini 6ebc7318b0 Use a SmallVector instead of an ArrayRef to materialize a temporary local array
This pattern is error-prone and unfortunately none of the sanitizers catches
it at the moment.
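
For illustration (not the code from this patch), the pitfall and the fix look roughly like this:

```c++
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"

// Pitfall: the ArrayRef binds to the temporary array backing the brace-init
// list; that array is destroyed at the end of the declaration, so any later
// read through `view` is a dangling access.
int sumDangling() {
  llvm::ArrayRef<int> view = {1, 2, 3}; // dangling right after this line
  return view[0] + view[1] + view[2];   // undefined behavior
}

// Fix: materialize the values in owning storage first.
int sumOwned() {
  llvm::SmallVector<int, 3> storage = {1, 2, 3};
  llvm::ArrayRef<int> view(storage);    // valid as long as `storage` lives
  return view[0] + view[1] + view[2];
}
```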

Fixes tensorflow/mlir#192

Closes tensorflow/mlir#193

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/193 from joker-eph:fix_array_ref 8092252e64c426c6a8a790b7638f847bea4818b1
PiperOrigin-RevId: 275280201
2019-10-17 10:10:46 -07:00
Lei Zhang 23d21af65c [DRR] Allow capturing and referencing no-result ops
Previously, when we bound a symbol to an op in DRR, it meant capturing
the op's result(s), and later references were expanded to those result(s).
For ops without results, this meant replacing the symbol with nothing.
This CL treats capturing and referencing a no-result op as a special case
that means the op itself.

PiperOrigin-RevId: 275269702
2019-10-17 09:02:31 -07:00
Lei Zhang 1358df19ca Add LLVM_DEBUG in RewritersGen.cpp and Pattern.cpp
It's usually hard to understand what went wrong if mlir-tblgen
crashes on some input. This CL adds a few useful LLVM_DEBUG
statements so that we can use mlir-tblgen -debug to figure
out the culprit for a crash.

PiperOrigin-RevId: 275253532
2019-10-17 07:26:22 -07:00
Lei Zhang 0e3efb32c6 [spirv] Implement inliner interface
We just need to implement a few interface hooks to DialectInlinerInterface
and CallOpInterface to gain the benefits of an inliner. :)

Right now only supports some trivial cases:
* Inlining single block with spv.Return/spv.ReturnValue
* Inlining multi block with spv.Return
* Inlining spv.selection/spv.loop without return ops

More advanced cases will require block argument and Phi support.

PiperOrigin-RevId: 275151132
2019-10-16 17:46:19 -07:00
Sana Damani 3940b90d84 Update Chapter 4 of the Toy tutorial
This Chapter now introduces and makes use of the Interface concept
in MLIR to demonstrate ShapeInference.

Closes tensorflow/mlir#191

PiperOrigin-RevId: 275085151
2019-10-16 12:19:39 -07:00
Geoffrey Martin-Noble a3726a13f7 NFC: Update VectorOrTensor -> Shaped
This was missed when the type was renamed.

PiperOrigin-RevId: 275082588
2019-10-16 11:58:26 -07:00
Mahesh Ravishankar 54a8473470 Makes spv.module generated by GPU->SPIRV conversion spec compliant
Makes the spv.module generated by the GPU to SPIR-V conversion SPIR-V
spec compliant (validated using spirv-val from Vulkan tools).

1) Separate out the VulkanLayoutUtils from
DecorateSPIRVCompositeTypeLayoutPass to make it reusable within the
Type converter in SPIR-V lowering infrastructure. This is used to
compute the layout of the !spv.struct used in global variable type
description.
2) Set the capabilities of the spv.module to Shader (needed for use of
Logical Memory Model, and the extensions to
SPV_KHR_storage_buffer_storage_class for use of Storage Buffer)

PiperOrigin-RevId: 275081486
2019-10-16 11:53:07 -07:00
Christian Sigg d2f0f847af Support custom accumulator provided as region to gpu.all_reduce.
In addition to specifying the type of accumulation through the 'op' attribute, the accumulation can now also be specified as arbitrary code region.

Adds a gpu.yield op to specify the result of the accumulation.

Also support more types (integers) and accumulations (mul).

PiperOrigin-RevId: 275065447
2019-10-16 10:43:44 -07:00
Mahesh Ravishankar e7b49eef1d Allow for remapping argument to a Value in SignatureConversion.
The current SignatureConversion framework (part of DialectConversion)
allows remapping input arguments to a function from 1->0, 1->1 or
1->many arguments during conversion. Another case is where the
argument itself is dropped, but its uses are remapped to another
Value*.

An example of this is: The Vulkan/SPIR-V spec requires entry functions
to be of type void(void). The GPU -> SPIR-V conversion implemented
this without having the DialectConversion framework track the
remapping, which led to some undefined behavior. The changes here
address that.

PiperOrigin-RevId: 275059656
2019-10-16 10:21:03 -07:00
River Riddle dfe09cc621 Add support for PatternRewriter::eraseOp.
This hook is useful when an operation is known to be dead, and no replacement values make sense.
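
A minimal sketch of a pattern using the hook (the op and its 'dead' marker attribute are hypothetical; patterns in this era return PatternMatchResult):

```c++
struct EraseMarkedDead : public mlir::OpRewritePattern<FooOp> {
  using mlir::OpRewritePattern<FooOp>::OpRewritePattern;

  mlir::PatternMatchResult
  matchAndRewrite(FooOp op, mlir::PatternRewriter &rewriter) const override {
    // Only ops explicitly marked as dead are erased; there are no replacement
    // values to provide.
    if (!op.getAttr("dead"))
      return matchFailure();
    rewriter.eraseOp(op);
    return matchSuccess();
  }
};
```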

PiperOrigin-RevId: 275052756
2019-10-16 09:50:57 -07:00
Mehdi Amini f1f9e3b8d1 Fix CMake configuration after introduction of LICM and LoopLikeInterface
b843cc5d5a introduced a new op LICM transformation and a LoopLike interface,
but missed the CMake aspects of it. This should fix the build.

PiperOrigin-RevId: 275038533
2019-10-16 08:37:39 -07:00
Stephan Herhut b843cc5d5a Implement simple loop-invariant-code-motion based on dialect interfaces.
PiperOrigin-RevId: 275004258
2019-10-16 04:28:38 -07:00
Lei Zhang e03e151983 [spirv] Add support for SpecId decoration on spv.specConstant
The SpecId decoration is the handle for providing external specialization.
Similar to descriptor set and binding on global variables, we directly
bake it into assembly parsing and printing.

PiperOrigin-RevId: 274893879
2019-10-15 14:53:30 -07:00
Nicolas Vasilache abf5c60af9 Add conversion for splat of vectors of 2+D
This CL adds a missing lowering for splat of multi-dimensional vectors.
Additional support is also added to the runtime utils library to allow printing memrefs with such vectors.

PiperOrigin-RevId: 274794723
2019-10-15 06:53:08 -07:00
Alex Zinenko c50e53c109 Expose mlir::parseType to bindings
Python bindings currently provide a makeScalarType function that
constructs one of the predefined types. It was implemented in the bindings
directly to circumvent the absence of standalone type parsing function. Now
that mlir::parseType has been made available, rely on the core parsing
procedure to construct types from strings in the bindings.

This change includes a library reshuffling that splits out "CoreAPIs"
implementing the binding helper APIs into a separate library and makes that
dependent on the Parser library.

PiperOrigin-RevId: 274794516
2019-10-15 06:52:04 -07:00
Alex Zinenko 98815cfdd9 AsmPrinter: avoid unused-variable warning
The value defined in a loop was not being used and the function producing it
was re-evaluated instead. Use the value to avoid both the warning and the
re-evaluation.

PiperOrigin-RevId: 274794459
2019-10-15 06:51:01 -07:00
River Riddle f29731d17f NFC: Replace usages of Value::getKind with explicit isa/casts.
It is more idiomatic to use the llvm::cast infrastructure for checking the type of a value.

PiperOrigin-RevId: 274684945
2019-10-14 16:21:51 -07:00
River Riddle 96de7091bc Allowing replacing non-root operations in DialectConversion.
When dealing with regions, or other patterns that need to generate temporary operations, it is useful to be able to replace other operations than the root op being matched. Before this PR, these operations would still be considered for legalization meaning that the conversion would either fail, erroneously need to mark these ops as legal, or add unnecessary patterns.

PiperOrigin-RevId: 274598513
2019-10-14 10:01:59 -07:00
Nicolas Vasilache 5c5d83afb4 Fix linalg.subview behavior in (partially) static cases.
When the implementation of the strided memref [RFC](https://groups.google.com/a/tensorflow.org/forum/#!msg/mlir/MaL8m2nXuio/1scRqZa6AQAJ) landed, linalg started using this type instead of the now retired !linalg.view.

As static and partially static cases appear, the stride information needs to be maintained properly. In particular, the result type of the subview op was generally incorrect.

This CL fixes the issue by computing a return type that:
1. always has dynamic sizes, which is generally the only correct way to construct a subview in the absence of data padding and/or code versioning.
2. has the same strides as the base strided memref.

Point 1. above can be further refined but will need further analysis and canonicalization to optimize the particular case where:
1. The base memref has static size along a given dimension.
2. The subview size can be statically derived (e.g. after canonicalization).
3. *And* the subview size is an even divisor of the base memref.

This 3rd constraint is well-known in the case of tiled layouts that don't assume implicit padding: the boundary tile may be only partial and has size given by `problem_size % tile_size`.

Tests are updated as appropriate.

PiperOrigin-RevId: 274578624
2019-10-14 08:43:53 -07:00
Nicolas Vasilache c2285b619d Add lowering of VectorOps dialect to LLVM to the Linalg LLVM lowering pass
This fixes an omission that prevents Linalg from lowering generic ops whose regions operate on ops in the VectorOps dialect.
To achieve this we simply need to `populateVectorToLLVMConversionPatterns` in the conversion.

Relevant tests are added.

PiperOrigin-RevId: 274577325
2019-10-14 08:43:26 -07:00
Eric Schweitz a3d084848d Add LLVM IR dialect hooks for FP128 and X86_FP80 types
Closes tensorflow/mlir#184

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/184 from schweitzpgi:more-float-types ca27d00510a86ffc9c79c65fb3a0193b5ea097a0
PiperOrigin-RevId: 274288813
2019-10-11 18:35:33 -07:00
Alex Zinenko 8c2ea32072 Emit LLVM IR equivalent of sizeof when lowering alloc operations
Originally, the lowering of `alloc` operations computed the number of
bytes to allocate based on the properties of the MLIR type. This does
not take into account type legalization that happens when compiling LLVM IR
down to target assembly. This legalization can widen the type, potentially
leading to out-of-bounds accesses to `alloc`ed data due to mismatches between
address computation that takes the widening into account and allocation that
does not. Use the LLVM IR's equivalent of `sizeof` to compute the number of
bytes to be allocated:
  %0 = getelementptr %type* null, %indexType 0
  %1 = ptrtoint %type* %0 to %indexType
adapted from
http://nondot.org/sabre/LLVMNotes/SizeOf-OffsetOf-VariableSizedStructs.txt

PiperOrigin-RevId: 274159900
2019-10-11 06:33:26 -07:00
Alex Zinenko 71b82bcbf6 LLVM Dialect: introduce llvm.mlir.null operation
Similarly to `llvm.mlir.undef`, this auxiliary operation creates an SSA value
that corresponds to `null` in LLVM IR.  This operation is necessary to model
sizeof(<...>) behavior when allocating memory.

PiperOrigin-RevId: 274158760
2019-10-11 06:32:24 -07:00
Uday Bondhugula 47596f5345 Drop obsolete code from std to llvm memref lowering
- dropping what looks like outdated code left over from some of the previous
  updates

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#179

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/179 from bondhugula:llfix 2a72ea441fe1b3924802273ffbe9870afeb90f91
PiperOrigin-RevId: 274158273
2019-10-11 06:31:18 -07:00
Alexander Belyaev 7301ac72bc Rename LLVM::exp and LLVM::fmuladd to LLVM::ExpOP and LLVM::FMulAddOp.
PiperOrigin-RevId: 274154655
2019-10-11 05:38:37 -07:00
Alexander Belyaev 00d2a37e32 Add unary ops and ExpOp to Standard Dialect.
PiperOrigin-RevId: 274152154
2019-10-11 05:13:55 -07:00
River Riddle 978b209d38 NFC: Print the generic op form after pass failure.
On failure, the IR is likely to be in an invalid state, meaning the custom printer for some operations may now crash. Using the generic op form prevents this from happening.

PiperOrigin-RevId: 274104146
2019-10-10 21:57:50 -07:00
River Riddle 7a7dcc171d Add support for generating reproducers on pass crash and failure.
This CL adds support for generating a .mlir file containing a reproducer for crashes and failures that happen during pass execution. The reproducer contains a comment detailing the configuration of the pass manager (e.g. the textual description of the pass pipeline that the pass manager was executing), along with the original input module.

Example Output:

// configuration: -pass-pipeline='func(cse, canonicalize), inline'
// note: verifyPasses=false

module {
  ...
}

PiperOrigin-RevId: 274088134
2019-10-10 19:36:54 -07:00
River Riddle b245e9519c NFC: Initialize pass manager option fields inline instead of the class constructor.
PiperOrigin-RevId: 274087577
2019-10-10 19:35:55 -07:00
Alex Zinenko 08a2ce8a14 Standard-to-LLVM conversion: check that operands have LLVM types
In Standard to LLVM dialect conversion, the binary op conversion pattern
implicitly assumed some operands were of LLVM IR dialect type. This is not
necessarily true, for example if the Ops that produce those operands did not
match the existing conversion patterns. Check if all operands are of LLVM IR
dialect type and, if not, fail to match the binary op pattern.

Closes tensorflow/mlir#168

PiperOrigin-RevId: 274063207
2019-10-10 17:19:57 -07:00
Alex Zinenko 4dde19f024 Translation to LLVM: check the validity of module-level Ops
Translation to LLVM expects the entry module to have only specific types of ops
that correspond to LLVM IR entities allowed in a module. Currently those are
restricted to functions and globals. Introduce an additional check at the
module level. Inside individual functions, the check for supported Ops is
already performed, but it accepts all LLVM dialect Ops and wouldn't be
immediately applicable at the module level.

PiperOrigin-RevId: 274058651
2019-10-10 17:19:57 -07:00
Mahesh Ravishankar 28d7f9c052 Add lowering of constant ops to SPIR-V.
The lowering is specified as a pattern and is done only if the result
is a SPIR-V scalar type or vector type.
Handling ConstantOp with index return type needs special handling
since SPIR-V dialect does not have index types. Based on the bitwidth
of the attribute value, either i32 or i64 is chosen.
Other constant lowerings are left as a TODO.

PiperOrigin-RevId: 274056805
2019-10-10 17:19:57 -07:00
River Riddle 6b1cc3c6ea Add support for canonicalizing callable regions during inlining.
This will allow for inlining newly devirtualized calls, as well as give a more accurate cost model (when we have one). Currently canonicalization will only run for nodes that have no child edges, as the child nodes may be erased during canonicalization. We can support this in the future, but it requires more intricate deletion tracking.

PiperOrigin-RevId: 274011386
2019-10-10 17:06:33 -07:00
River Riddle 438dc176b1 Remove the need to convert operations in regions of operations that have been replaced.
When an operation with regions gets replaced, we currently require that all of the remaining nested operations are still converted even though they are going to be replaced when the rewrite is finished. This CL adds tracking for a minimal set of operations that are known to be "dead". This allows for ignoring the legalization of operations that won't survive after conversion.

PiperOrigin-RevId: 274009003
2019-10-10 17:06:25 -07:00
Christian Sigg 82dc6c4492 Mark GPU dialect as illegal when lowering to NVVM.
PiperOrigin-RevId: 273948293
2019-10-10 06:32:12 -07:00
Alex Zinenko 5e7959a353 Use llvm.func to define functions with wrapped LLVM IR function type
This function-like operation allows one to define functions that have wrapped
LLVM IR function type, in particular variadic functions. The operation was
added in parallel to the existing lowering flow, this commit only switches the
flow to use it.

Using a custom function type makes the LLVM IR dialect type system more
consistent and avoids complex conversion rules for functions that previously
had to use the built-in function type instead of a wrapped LLVM IR dialect type
and perform conversions during the analysis.

PiperOrigin-RevId: 273910855
2019-10-10 01:34:06 -07:00
Kazuaki Ishizaki f5813ff8e1 Fix typo in QuantizedType method names
Closes tensorflow/mlir#172

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/172 from kiszk:quantops e27b57eac8f4c6ef7ee6a6f7b497d3e2f56f6798
PiperOrigin-RevId: 273879164
2019-10-09 20:32:47 -07:00
MLIR Team 221e661e91 Pre-allocate space for results from a regex match that uses 3 match strings.
That space is 4 StringRefs, not 3, because element 0 of the match always
contains the entire source string.

PiperOrigin-RevId: 273875606
2019-10-09 20:07:46 -07:00
MLIR Team ae6946ec11 Add ::printAsTextualPipeline to Pass and OpPassManager.
Allow printing out pipelines in a format that is as close as possible to the
textual pass pipeline format. Individual passes can override the print function
in order to format any options that may have been used to construct that pass.

PiperOrigin-RevId: 273813627
2019-10-09 13:49:17 -07:00
Christian Sigg 35bb732032 Guard rewriter insertion point during signature conversion.
Avoid unexpected side effect in rewriter insertion point.

PiperOrigin-RevId: 273785794
2019-10-09 11:33:28 -07:00
Mahesh Ravishankar e2ed25bc43 Make SPIR-V lowering infrastructure follow Vulkan SPIR-V validation.
The lowering infrastructure needs to be enhanced to lower into a
spv.Module that is consistent with the SPIR-V spec. The following
changes are needed:
1) The Vulkan/SPIR-V validation rules dictate that entry functions have a
signature of void(void). This requires changes to the function
signature conversion infrastructure within the dialect conversion
framework. When an argument is dropped from the original function
signature, a function can be specified that when invoked will return
the value to use as a replacement for the argument from the original
function.
2) Some changes to the type converter to make the converted type
consistent with the Vulkan/SPIR-V validation rules,
   a) Add support for converting dynamically shaped tensors to
   spv.rtarray type.
   b) Make the global variable of type !spv.ptr<!spv.struct<...>>
3) Generate the entry point operation for the kernel functions and
automatically compute all the interface variables needed

PiperOrigin-RevId: 273784229
2019-10-09 11:25:58 -07:00
Diego Caballero 3451055614 Add support for some multi-store cases in affine fusion
This PR is a stepping stone towards supporting generic multi-store
source loop nests in affine loop fusion. It extends the algorithm to
support fusion of multi-store loop nests that:
 1. have only one store that writes to a function-local live out, and
 2. the remaining stores are involved in loop nest self dependences
    or no dependences within the function.

Closes tensorflow/mlir#162

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/162 from dcaballe:dcaballe/multi-output-fusion 7fb7dec6fe8b45f5ce176f018bfe37b256420c45
PiperOrigin-RevId: 273773907
2019-10-09 10:37:30 -07:00
Christian Sigg 48f819c113 Change to doxygen comments. NFC.
PiperOrigin-RevId: 273707610
2019-10-09 02:46:37 -07:00
Christian Sigg 7c67ec0f03 Assert that region is not cloned into itself.
PiperOrigin-RevId: 273707291
2019-10-09 02:43:52 -07:00
Smit Hinsu 85b46314c0 Allow dynamic but ranked types in ops with SameOperandsAndResultShape and SameOperandsAndResultType traits
Currently SameOperandsAndResultShape trait allows operands to have tensor<*xf32> and tensor<2xf32> but doesn't allow tensor<?xf32> and tensor<10xf32>.

Also, use the updated shape compatibility helper function in TensorCastOp::areCastCompatible method.

PiperOrigin-RevId: 273658336
2019-10-08 19:37:11 -07:00
River Riddle b3a6ae8363 Update the symbol utility methods to handle the case of unknown operations.
This enhances the symbol table utility methods to handle the case where an unknown operation may define a symbol table. When walking symbols, we now collect all symbol uses before allowing the user to iterate. This prevents the user from assuming that all symbols are actually known before performing a transformation.

PiperOrigin-RevId: 273651963
2019-10-08 18:38:37 -07:00
MLIR Team 7446151236 Add Instance Specific Pass Options.
This allows individual passes to define options structs and have these options parsed per pass instance while building the pass pipeline from the textual specification provided on the command line.

The user can specify these per-instance pipeline options like so:
```
struct MyPassOptions : public PassOptions<MyPassOptions> {
  Option<int> exampleOption{*this, "flag-name", llvm::cl::desc("...")};
  List<int> exampleListOption{*this, "list-flag-name", llvm::cl::desc("...")};
};

static PassRegistration<MyPass, MyPassOptions> pass("my-pass", "description");
```

PiperOrigin-RevId: 273650140
2019-10-08 18:23:43 -07:00
River Riddle 71c7962201 Add support for parsing/printing non bare-identifier SymbolRefs.
The restriction that symbols can only have identifier names is arbitrary, and artificially limits the names that a symbol may have. This change adds support for parsing and printing symbols that don't fit in the 'bare-identifier' grammar by printing the reference in quotes, e.g. @"0_my_reference" can now be used as a symbol name.

PiperOrigin-RevId: 273644768
2019-10-08 17:45:07 -07:00
Deven Desai 956a831130 [ROCm] Fix the return type for the device function calls from i32 to i64.
This is matching what the runtime library is expecting.

Closes tensorflow/mlir#171

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/171 from deven-amd:deven-rocdl-device-func-i64 80762629a8c34e844ebdc542b34dd783990db9db
PiperOrigin-RevId: 273640767
2019-10-08 17:41:42 -07:00
Denis Khalikov d21ba951de [spirv] Add a pass to decorate the composite types with layout info.
Add a pass to decorate the composite types used by
composite objects in the StorageBuffer, PhysicalStorageBuffer,
Uniform, and PushConstant storage classes with layout information.

Closes tensorflow/mlir#156

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/156 from denis0x0D:sandbox/layout_info_decoration 7c50840fd38ca169a2da7ce9886b52b50c868b84
PiperOrigin-RevId: 273634140
2019-10-08 16:54:11 -07:00
River Riddle 49b29dd186 Add a PatternRewriter hook for cloning a region into another.
This is similar to the `inlineRegionBefore` hook, except the original blocks are unchanged. The region to be cloned *must* not have been modified during the conversion process at the point of cloning, i.e. it must belong to an operation that has yet to be converted, or to the operation that is currently being converted.

PiperOrigin-RevId: 273622533
2019-10-08 15:45:08 -07:00
Uday Bondhugula 6136f33d59 unroll and jam: fix order of jammed bodies
- bodies would earlier appear in the order (i, i+3, i+2, i+1) instead of
  (i, i+1, i+2, i+3) for example for factor 4.

- clean up hardcoded test cases

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#170

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/170 from bondhugula:ujam b66b405b2b1894a03b376952e32a9d0292042665
PiperOrigin-RevId: 273613131
2019-10-08 15:13:11 -07:00
River Riddle ac91e67375 Add support for walking the uses of a symbol.
MLIR uses symbol references to model references to many global entities, such as functions/variables/etc. Before this change, there is no way to actually reason about the uses of such entities. This change provides a walker for symbol references(via SymbolTable::walkSymbolUses), as well as 'use_empty' support(via SymbolTable::symbol_use_empty). It also resolves some deficiencies in the LangRef definition of SymbolRefAttr, namely the restrictions on where a SymbolRefAttr can be stored, ArrayAttr and DictionaryAttr, and the relationship with operations containing the SymbolTable trait.

PiperOrigin-RevId: 273549331
2019-10-08 10:21:59 -07:00
River Riddle 0dd404e4e1 NFC: Remove unused default cl::opt value.
The default value is never used as the value of the elide option is only used if it has an occurrence.

PiperOrigin-RevId: 273545143
2019-10-08 10:04:28 -07:00
Alex Zinenko 0cdc53a762 Linalg to LLVM lowering: decrease the reliance on symbol lookup in a module
During the conversion, both the original and the converted function may coexist
in the module and have the same symbol name. There is no guarantee which of the
two will be found by the symbol lookup. Avoid returning the result of the
library function lookup when lowering Linalg to Standard or LLVM. Use the
symbol reference instead. After the conversion completes, only one symbol will
remain and the Ops using SymbolRefAttrs will be referring to the correct one.

PiperOrigin-RevId: 273510079
2019-10-08 06:55:25 -07:00
Alex Zinenko 11d12670da GPUToCUDA: attach CUBIN to the nested module rather than to the function
Originally, we were attaching attributes containing CUBIN blobs to the kernel
function called by `gpu.launch_func`. This kernel is now contained in a nested
module that is used as a compilation unit. Attach compiled CUBIN blobs to the
module rather than to the function since we were compiling the module. This
also avoids duplication of the attribute on multiple kernels within the same
module.

PiperOrigin-RevId: 273497303
2019-10-08 05:11:26 -07:00
Alex Zinenko 52e082b6ed GPUToCUDA: emit addressof directly instead of wrapping it into a getter function
Originally, the CUBIN getter function was introduced as a mechanism to
circumvent the absence of globals in the LLVM dialect. It would allocate memory
and populate it with the CUBIN data. LLVM dialect now supports globals and they
are already used to store CUBIN data, making the getter function a trivial
address computation of a global. Emit the address computation directly at the
place of `gpu.launch_func` instead of putting it in a function and calling it.
This simplifies the conversion flow and prepares it for using the
DialectConversion infrastructure.

PiperOrigin-RevId: 273496221
2019-10-08 05:03:42 -07:00
Alex Zinenko 16af5924cb Fuse GenerateCubinAccessors pass into LaunchFunctToCuda
Now that the accessor function is a trivial getter of the global variable, it
makes less sense to have the getter generation as a separate pass. Move the
getter generation into the lowering of `gpu.launch_func` to CUDA calls. This
change is mostly code motion, but the process can be simplified further by
generating the addressof in place instead of using a call. This will be done
in a follow-up.

PiperOrigin-RevId: 273492517
2019-10-08 04:35:33 -07:00
Alex Zinenko 90d65d32d6 Use named modules for gpu.launch_func
The kernel function called by gpu.launch_func is now placed into an isolated
nested module during the outlining stage to simplify separate compilation.
Until recently, modules did not have names and could not be referenced. This
limitation was circumvented by introducing a stub kernel with the same name at
the same nesting level as the module containing the actual kernel. This
relation is only effective in one direction: from actual kernel function to its
launch_func "caller".

Leverage the recently introduced symbol name attributes on modules to refer to
a specific nested module from `gpu.launch_func`. This removes the implicit
connection between the identically named stub and kernel functions. It also
enables support for `gpu.launch_func`s to call different kernels located in the
same module.

PiperOrigin-RevId: 273491891
2019-10-08 04:30:32 -07:00
Jing Pu 780f107a57 Upgrade some uses of the mlir::interleave API to take the container argument directly.
PiperOrigin-RevId: 273446814
2019-10-07 21:53:11 -07:00
River Riddle a8a73f0640 Add a flag to the AsmPrinter for eliding large ElementsAttrs.
Some modules may have extremely large ElementsAttrs, which makes debugging involving IR dumping extremely slow and painful. This change adds a flag that will elide ElementsAttrs with a "large"(as defined by the user) number of elements by printing "..." instead of the element data.

PiperOrigin-RevId: 273413100
2019-10-07 17:19:20 -07:00
Jing Pu 17606a108b Print result types when dumping graphviz.
PiperOrigin-RevId: 273406833
2019-10-07 16:45:53 -07:00
MLIR Team 6b3462a77b Expose `fuseProducerOf` in Linalg/Utils/Utils.h.
PiperOrigin-RevId: 273384063
2019-10-07 15:01:07 -07:00
Mahesh Ravishankar 37e0e8cf16 Do not add spirv::BitcastOp for cast from signed to unsigned type.
Since MLIR integer types don't make a distinction between signed vs
unsigned integers, during deserialization of SPIR-V binaries, the
OpBitcast might result in a cast from/to the same type. Do not add a
spv.Bitcast operation to the spv.module in these cases.

PiperOrigin-RevId: 273381887
2019-10-07 14:52:00 -07:00
River Riddle aeada290b8 Add a new class, OpPrintingFlags, to enable programmatic control of Operation::print behavior.
This allows for controlling the behavior of the AsmPrinter programmatically, instead of relying exclusively on cl::opt flags. This will also allow for more fine-tuned control of printing behavior per callsite, instead of being applied globally.

PiperOrigin-RevId: 273368361
2019-10-07 13:54:49 -07:00
Mahesh Ravishankar 9e9c3a009a Update UndefOp (de)serialization to generate OpUndef at module level.
The SPIR-V spec recommends all OpUndef instructions be generated at
module level. For the SPIR-V dialect it's better for UndefOp to produce
an SSA value for use with other instructions. If UndefOp is to be used
at module level, it cannot produce an SSA value (use of this SSA value
within FuncOp would need implicit capture). To satisfy the needs of the
SPIR-V spec while making it simpler to represent UndefOp in the SPIR-V
dialect, the serialization is updated to create the OpUndef instruction
at module scope.

PiperOrigin-RevId: 273355526
2019-10-07 12:56:38 -07:00
Lei Zhang ebf584b813 [spirv] Fix function entry block erase after moving to spv.selection
The structured selection/loop's entry block does not have arguments.
If the function's header block is also part of the structured control
flow, we cannot simply erase it because it may contain arguments
matching the function signature and used by the cloned blocks. Instead,
turn it into a block only containing a spv.Branch op.

Also, we can directly emit instructions for the spv.selection header
block to the block containing the spv.selection op. This eliminates
unnecessary branches in the SPIR-V blob.

Added a test for nested spv.loop.

PiperOrigin-RevId: 273351424
2019-10-07 12:37:13 -07:00
Uday Bondhugula 89e7a76a1c fix simplify-affine-structures bug
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#157

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/157 from bondhugula:quickfix bd1fcd79825fc0bd5b4a3e688153fa0993ab703d
PiperOrigin-RevId: 273316498
2019-10-07 10:04:50 -07:00
Christian Sigg 9f11b0e12f Change Block::getParent() to be a const function.
This is only necessary because ilist_node_with_parent specifically requires a 'getParent() const' method. If/when ilist_node removes this constraint, we should drop the const to fit the rest of the MLIR const model.
PiperOrigin-RevId: 273316153
2019-10-07 10:03:28 -07:00
Nicolas Vasilache 9f98bcda47 Support AllocOp terminal in Linalg::AliasAnalysis.
Now that linalg.view and strided memrefs are unified, there is no reason to
disallow AllocOp in alias analysis. This CL adds support for AllocOp, which
allows writing shorter tests that do not require explicitly creating a view
for each operation.

PiperOrigin-RevId: 273303060
2019-10-07 09:01:18 -07:00
Jacques Pienaar 27e8efedf8 Add DialectType and generate docs for dialect types
Add a new `typeDescription` field (`description` was already used by the base constraint class) to type definitions to allow writing longer descriptions about a type being defined. This allows for providing additional information/rationale for a defined type. This currently uses `description` as the heading/name for the type in the generated documentation.

PiperOrigin-RevId: 273299332
2019-10-07 08:41:13 -07:00
MLIR Team da984166df Add OpaqueLoc to MLIR locations.
See RFC: https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/xE2IzfhE3Wg.

Opaque location stores two pointers: one of them points to some data structure that is external to MLIR, and the other one is unique for each type and represents the type id of that data structure. OpaqueLoc also stores an optional location that can be used if the first one is not suitable.
OpaqueLoc is managed similarly to FileLineColLoc. It is passed around by MLIR transformations and can be used in compound locations like CallSiteLoc.

PiperOrigin-RevId: 273266510
2019-10-07 05:05:42 -07:00
Christian Sigg 7c765d97f9 Support reduction of partial warps.
gpu.all_reduce now supports block sizes that are not a multiple of 32.

PiperOrigin-RevId: 273255204
2019-10-07 03:31:00 -07:00
Jacques Pienaar 77672c9777 Enable emitting dialect summary & description during op generation
Sort ops per dialect and emit summary & description (if provided) of each dialect before emitting the ops of the dialect.

PiperOrigin-RevId: 273077138
2019-10-05 12:21:51 -07:00
Geoffrey Martin-Noble 18db4ce493 Allow element type traits to operate on scalars
This allows confirming that a scalar argument has the same element type as a shaped one. It's easy to validate a type is shaped on its own if that's desirable, so this shouldn't make that use case harder. This matches the behavior of other traits that operate on element type (e.g. AllElementTypesMatch). Also this makes the code simpler because now we just use getElementTypeOrSelf.

Verified that all uses in core already check the type is shaped in another way.

PiperOrigin-RevId: 273068507
2019-10-05 10:06:06 -07:00
Lei Zhang c020480fc6 [spirv] Allow return ops to be in control flow ops
Use `getParentOfType<FunctionOp>()` instead of `cast<FuncOp>(getParentOp())`
to avoid a crash when return ops are used inside spv.selection/spv.loop.

PiperOrigin-RevId: 273006041
2019-10-04 20:08:52 -07:00
Mahesh Ravishankar 3f8bde40cb Add spv.Undef op to support OpUndef instruction in SPIR-V.
Adding support for OpUndef instruction. Updating the dialect
generation script to fix a few bugs in the instruction spec
generation.

PiperOrigin-RevId: 272975685
2019-10-04 16:00:22 -07:00
Mahesh Ravishankar 77a809d7a1 Add some utility builder functions for SPIR-V operations.
Add builder functions for spv._address_of, spv.EntryPoint,
spv.ExecutionMode and spv.Load to make it easier to create these
operations.
Fix a minor bug in the printing of spv.EntryPoint.
Add a utility function to get the attribute name associated with a
decoration.

PiperOrigin-RevId: 272952846
2019-10-04 14:02:48 -07:00
Nicolas Vasilache 754ea72794 Replace constexpr MemRefType::kDynamicStrideOrOffset by a MemRefType::getDynamicStrideOrOffset() method - NFC
This fixes global ODR-use issues, some of which manifest in Parser.cpp.

Fixes tensorflow/mlir#167.

PiperOrigin-RevId: 272886347
2019-10-04 08:58:09 -07:00
Nicolas Vasilache 516f6a3477 Add missing Linalg lowerings to allow roundtrip.mlir to lower to LLVM
Certain lowering patterns were reported as [missing](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/dkdmHa77sSQ).

This CL adds them and allows Linalg/roundtrip.mlir and Linalg/loops.mlir to lower to LLVM directly. Those 2 tests are updated to additionally check that the direct lowering to LLVM does not crash.

The following points, left as TODOs, still need to be addressed for correct end-to-end execution:
1. the lowering for ConvOp needs to pass attributes such as strides and dilations; the external library call needs to support it.
2. the lowering for GenericOp needs to support lowering to loops as a DialectConversion pattern. This is blocked on the DialectConversion infrastructure accepting an OperationFolder.

PiperOrigin-RevId: 272878131
2019-10-04 08:07:54 -07:00
Deven Desai d064469f6f Moving the GPUIndexIntrinsicOpLowering template to a common location
The GPUIndexIntrinsicOpLowering template is currently used by the code in both the GPUToNVVM and GPUToROCDL dirs.
Moving it to a common location to remove code duplication.

Closes tensorflow/mlir#163

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/163 from deven-amd:deven-refactor-gpu-index-ops-lowering b8dc2a5f5353df196039b6ff2ad42106028693ed
PiperOrigin-RevId: 272863297
2019-10-04 06:20:05 -07:00
Christian Sigg 85dcaf19c7 Fix typos, NFC.
PiperOrigin-RevId: 272851237
2019-10-04 04:37:53 -07:00
River Riddle 5830f71a45 Add support for inlining calls with different arg/result types from the callable.
Some dialects have implicit conversions inherent in their modeling, meaning that a call may have a different type than the type that the callable expects. To support this, a hook is added to the dialect interface that allows for materializing conversion operations during inlining when there is a mismatch. A hook is also added to the callable interface to allow for introspecting the expected result types.

PiperOrigin-RevId: 272814379
2019-10-03 23:10:51 -07:00
River Riddle a20d96e436 Update the Inliner pass to work on SCCs of the CallGraph.
This allows for the inliner to work on arbitrary call operations. The updated inliner will also work bottom-up through the callgraph, enabling support for multiple levels of inlining.

PiperOrigin-RevId: 272813876
2019-10-03 23:05:21 -07:00
Feng Liu 8c95223e3c Add `axis` attribute to the quant.stats op
The first dimension length of the axisStats attribute should equal the slice size
of the input argument when split along the axis dimension.

PiperOrigin-RevId: 272798042
2019-10-03 20:29:08 -07:00
MLIR Team 0dfa7fc908 Add fpext and fptrunc to the Standard dialect and include conversion to LLVM
PiperOrigin-RevId: 272768027
2019-10-03 16:37:24 -07:00
Christian Sigg 496f4590a1 Generalize parse/printBinaryOp to parse/printOneResultOp.
PiperOrigin-RevId: 272722539
2019-10-03 13:00:12 -07:00
Nicolas Vasilache 218f0e611a Add syntactic sugar for strided memref parsing.
This CL implements the last remaining bit of the [strided memref proposal](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).

The syntax is a bit more explicit than what was originally proposed and resembles:
  `memref<?x?xf32, offset: 0 strides: [?, 1]>`

Nonnegative strides and offsets are currently supported. Future extensions will include negative strides.

This also gives a concrete example of syntactic sugar for the [[RFC] Proposed Changes to MemRef and Tensor MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/-wKHANzDNTg).

The underlying implementation still uses AffineMap layout.
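As a rough illustration (this correspondence is an assumption for exposition; the exact map is produced by the implementation), the sugared form above maps to a layout along the lines of:
```
// Sketch: the dynamic stride of d0 becomes a symbol, the unit stride of d1
// stays literal, and the zero offset is folded away.
memref<?x?xf32, (d0, d1)[s0] -> (d0 * s0 + d1)>
```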

PiperOrigin-RevId: 272717437
2019-10-03 12:34:36 -07:00
Alex Zinenko 0b93c092b6 Make Module::getName return Optional<StringRef>
Module names are optional, so it makes more sense to take and return an optional
any time the name is involved. Also update the language reference to reflect
optional module names.

PiperOrigin-RevId: 272684698
2019-10-03 10:04:48 -07:00
Alex Zinenko 8633b6bc8e Give modules a name
Modules are now Ops and, as such, can be nested. They do not produce an SSA
value so there is no possibility to refer to them in the IR. Introduce support
for symbol names attached to the module Op so that it can be referred to using
SymbolRefAttrs. The name is optional, for example the implicit top-level module
does not have a name.
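For illustration, a minimal sketch of a named module and the symbol it introduces (names are hypothetical):
```
// The symbol @kernels can now be referred to via a SymbolRefAttr.
module @kernels {
  func @work() {
    return
  }
}
```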

PiperOrigin-RevId: 272671600
2019-10-03 08:56:38 -07:00
Alex Zinenko bd4762502c Add parentheses around boolean operators in assert
This removes a warning and is generally a good practice.

PiperOrigin-RevId: 272613597
2019-10-03 01:47:14 -07:00
Alex Zinenko e0d78eac23 NFC: rename Conversion/ControlFlowToCFG to Conversion/LoopToStandard
This makes the name of the conversion pass more consistent with the naming
scheme, since it actually converts from the Loop dialect to the Standard
dialect rather than working with arbitrary control flow operations.

PiperOrigin-RevId: 272612112
2019-10-03 01:35:03 -07:00
Alex Zinenko 44ef5e5525 Disallow index types in memrefs.
As specified in the MLIR language reference and rationale documents, `memref`
types should not be allowed to have `index` as element types. As observed in
https://groups.google.com/a/tensorflow.org/forum/#!msg/mlir/P49hVWqTMNc/nW89a4i_AgAJ
this restriction was lifted when canonicalization unit tests for affine
operations were introduced, without sufficient motivation to lift the
restriction itself.  The test in question can be trivially rewritten (return
the value from a function instead of storing it to prevent DCE from removing
the producer operation) and the restriction put back in place.
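For reference, a sketch of the kind of type that is rejected again after this change (example only):
```
// Invalid: 'index' may not be used as a memref element type.
%0 = alloc() : memref<4 x index>
```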

If `memref<...x index>` is relevant for some use cases, the relaxation of the
type system can be implemented separately with appropriate modifications to the
documentation.

PiperOrigin-RevId: 272607043
2019-10-03 00:58:29 -07:00
Nicolas Vasilache 9604bb6269 Extract MemRefType::getStridesAndOffset as a free function and fix dynamic offset determination.
This also adds coverage with a missing test, which uncovered a bug in the conditional for testing whether an offset is dynamic or not.

PiperOrigin-RevId: 272505798
2019-10-02 13:25:05 -07:00
Lei Zhang f294e0e513 [spirv] Add support for spv.selection
Similar to spv.loop, spv.selection is another op for modelling
SPIR-V structured control flow. It covers both OpBranchConditional
and OpSwitch with OpSelectionMerge.

Instead of having a `spv.SelectionMerge` op to directly model the
selection merge instruction for indicating the merge target, we use
regions to delimit the boundary of the selection: the merge target is
the next op following the `spv.selection` op. This way it's easier to
discover all blocks belonging to the selection, and it plays more
nicely with the MLIR system.
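A rough structural sketch of a spv.selection region (branch and terminator spellings assumed from this change; details may differ):
```
spv.selection {
  // Header block: branch either to the true block or straight to the merge block.
  spv.BranchConditional %cond, ^then, ^merge
^then:
  // ... ops executed when %cond is true ...
  spv.Branch ^merge
^merge:
  // Region terminator; the op following spv.selection is the actual merge point.
  spv._merge
}
```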

PiperOrigin-RevId: 272475006
2019-10-02 11:01:57 -07:00
Deven Desai e81b3129b4 [ROCm] Adding pass to lower GPU Dialect to ROCDL Dialect.
This is a follow-up to PR tensorflow/mlir#146, which introduced the ROCDL Dialect. This PR introduces a pass to lower the GPU Dialect to the ROCDL Dialect. As with the previous PR, this one builds on the work done by @whchung, and addresses most of the review comments in the original PR.

Closes tensorflow/mlir#154

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/154 from deven-amd:deven-lower-gpu-to-rocdl 809893e08236da5ab6a38e3459692fa04247773d
PiperOrigin-RevId: 272390729
2019-10-02 01:50:30 -07:00
Jacques Pienaar 2b86e27dbd Show type even if elementsattr is elided in graph
The type is quite useful for debugging and shouldn't be too large.

PiperOrigin-RevId: 272390311
2019-10-02 01:46:12 -07:00
Eric Schweitz 9e6dde3977 Add a pair of hooks to DominanceInfo.
This exposes hooks for accessing internal dominance nodes, and updating the internal DFS numbers.

Closes tensorflow/mlir#151

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/151 from schweitzpgi:dominance_hooks 69d14214a423b816cbd59feffcacdd02f3b5f921
PiperOrigin-RevId: 272287352
2019-10-01 14:06:50 -07:00
Alex Zinenko c760f233b3 Fix and simplify CallOp/CallIndirectOp to LLVM::CallOp conversion
A recent ABI compatibility change affected the conversion from standard
CallOp/CallIndirectOp to LLVM::CallOp by changing its signature. In order to
analyze the signature, the code was looking up the callee symbol in the module.
This is incorrect since, during the conversion, the module may contain both the
original and the converted function op that have the same symbol name. There is
no strict guarantee on which of the two symbols will be found by the lookup.
The conversion was not failing because the type legalizer converts the LLVM
types to themselves making the original and the converted function signatures
ultimately produce the same type.

Instead of looking up the function signature to get the list of result types,
use the types of the CallOp/CallIndirectOp results which must match those of
the function in valid IR. These types are guaranteed to be the original,
unconverted types when converting the operation. Furthermore, this avoids the
need to perform a lookup of a symbol name in the module which may be expensive.

Finally, propagate attributes as-is from the original op to the converted op
since they share the attribute name for the callee of direct calls and the rest
of attributes are not affected by the conversion. This removes the need for
additional contortions between direct and indirect calls to extract the name of
the optional callee attribute only to insert it back. This also prevents the
conversion from unintentionally dropping the other attributes of the op.

PiperOrigin-RevId: 272218871
2019-10-01 08:41:50 -07:00
Nicolas Vasilache e36337a998 Unify Linalg types by using strided memrefs
This CL finishes the implementation of the Linalg + Affine type unification of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
As a consequence, the !linalg.view type, linalg::DimOp, linalg::LoadOp and linalg::StoreOp can now disappear and Linalg can use standard types everywhere.

PiperOrigin-RevId: 272187165
2019-10-01 05:23:21 -07:00
Christian Sigg 1129931a62 Change all_reduce lowering to support 2D and 3D blocks.
Perform the second reduce only with the first warp. This requires an additional __syncthreads(), but doesn't need special handling when the last warp is small. This simplifies support for block sizes that are not a multiple of 32.

Supporting partial warp reduce will be done in a separate CL.

PiperOrigin-RevId: 272168917
2019-10-01 02:51:15 -07:00
Christian Sigg 8503ffbe3a Add verification error message for ops that require at least one operand or result.
PiperOrigin-RevId: 272153634
2019-10-01 00:57:18 -07:00
River Riddle 1c649d5785 Pass the pointer of the parent pipeline collection pass to PassInstrumentation::run*Pipeline.
For the cases where there are multiple levels of nested pass managers, the parent thread ID is not enough to distinguish the parent of a given pass pipeline. Passing in the parent pass gives an exact anchor point.

PiperOrigin-RevId: 272105461
2019-09-30 17:44:55 -07:00
Denis Khalikov 219421ece7 [spirv] Add array length check.
According to the SPIR-V spec:
"Length is the number of elements in the array. It must be at least 1."

Closes tensorflow/mlir#160

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/160 from denis0x0D:sandbox/array_len 0840dc0986ad0088a3aa7d5d8d3e97d489377ed9
PiperOrigin-RevId: 272094669
2019-09-30 16:43:26 -07:00
Jacques Pienaar f015b020f3 Add missing file from cmakelist
PiperOrigin-RevId: 272054623
2019-09-30 13:37:54 -07:00
Jacques Pienaar 0b81eb928b Enable autogenerating OpInterface method declarations
Add DeclareOpInterfaceFunctions to enable specifying whether OpInterfaceMethods
for an OpInterface should be generated automatically. This avoids needing to
declare the extra methods, while still allowing function declarations to be added by way of trait/inheritance.

Most of this change is mechanical/extracting classes to be reusable.

PiperOrigin-RevId: 272042739
2019-09-30 12:42:58 -07:00
Nicolas Vasilache 923b33ea16 Normalize MemRefType lowering to LLVM as strided MemRef descriptor
This CL finishes the implementation of the lowering part of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).

Strided memrefs correspond conceptually to the following templated C++ struct:
```
template <typename Elem, size_t Rank>
struct {
  Elem *ptr;
  int64_t offset;
  int64_t sizes[Rank];
  int64_t strides[Rank];
};
```
The linearization procedure for address calculation for strided memrefs is the same as for linalg views:
`base_offset + SUM_i index_i * stride_i`.

The following CL will unify Linalg and Standard by removing !linalg.view in favor of strided memrefs.

PiperOrigin-RevId: 272033399
2019-09-30 11:58:54 -07:00
Mahesh Ravishankar 2f7bb1e25f Add support for Logical Ops in SPIR-V dialect
Add operations corresponding to OpLogicalAnd, OpLogicalNot,
OpLogicalEqual, OpLogicalNotEqual and OpLogicalOr instructions in
SPIR-V dialect. This needs changes to class hierarchy in SPIR-V
TableGen files to split SPIRVLogicalOp into SPIRVLogicalUnaryOp and
SPIRVLogicalBinaryOp. All derived classes of SPIRVLogicalOp are
updated accordingly.

Update the spirv dialect generation script to
1) Allow specifying the base class to use for instruction spec generation
and, separately, the file name to generate the specification in.
2) Use the existing descriptions for operations.
3) Update define_inst.sh to also invoke define_opcode.sh so that the
corresponding SPIR-V instruction opcode enum is defined.

PiperOrigin-RevId: 272014876
2019-09-30 10:40:36 -07:00
Nicolas Vasilache 1ce524623c Fix MemRefType::getStrides corner case
MemRefType::getStrides uses AffineExpr::walk, which operates in post-order from the leaves. In order to compute strides properly, it needs to escape on terminal nodes and analyze binary ops only. This did not work for an AffineExpr that consists of a single term (i.e. without a binary op).

This CL fixes the corner case and adds relevant tests.

PiperOrigin-RevId: 271975746
2019-09-30 07:27:39 -07:00
Christian Sigg 3d9679bde4 Switch comments from GPU dialect terms to CUDA terms (NFC).
local workgroup -> block, subgroup -> warp, invocation -> thread.

PiperOrigin-RevId: 271946342
2019-09-30 03:19:45 -07:00
Jacques Pienaar e5a43186d3 Add InferTypeOpTrait & enable generating its member function definition
Use OpInterfaces to add an interface for ops defining a return type function.

This change does not use this trait in any meaningful way; I'll use it in a
follow-up to generalize and unify some of the op type traits/constraints. Also,
currently the infer type function can only be manually specified in C++; that should rather be the fallback in the future.

PiperOrigin-RevId: 271883746
2019-09-29 17:29:00 -07:00
Jacques Pienaar c57f202c8c Switch explicit create methods to match generated build's order
The generated build methods have the result type before the arguments (operands and attributes, which are also now adjacent in the explicit create method). This also results in changing the create method's ordering to match most build methods' ordering.

PiperOrigin-RevId: 271755054
2019-09-28 09:35:58 -07:00
Yanan Cao 5f8dff936b Append a newline when dumping a Value.
This is more consistent with other dump methods. Otherwise successive Value dumps are concatenated on the same line, hurting readability.

PiperOrigin-RevId: 271669846
2019-09-27 16:20:46 -07:00
Nicolas Vasilache bc4984e4f7 Add TODO to revisit coupling of CallOp to MemRefType lowering
PiperOrigin-RevId: 271619132
2019-09-27 12:03:00 -07:00
Uday Bondhugula 74eabdd14e NFC - clean up op accessor usage, std.load/store op verify, other stale info
- also remove stale terminology/references in docs

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#148

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/148 from bondhugula:cleanup e846b641a3c2936e874138aff480a23cdbf66591
PiperOrigin-RevId: 271618279
2019-09-27 11:58:24 -07:00
Nicolas Vasilache ddf737c5da Promote MemRefDescriptor to a pointer to struct when passing function boundaries in LLVMLowering.
The strided MemRef RFC discusses a normalized descriptor and interaction with library calls (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
Lowering of nested LLVM structs as value types does not play nicely with externally compiled C/C++ functions due to ABI issues.
Solving the ABI problem in general is very complex and most likely involves taking
a dependence on clang that we do not want at the moment.

A simple workaround is to pass pointers to memref descriptors at function boundaries, which this CL implements.

PiperOrigin-RevId: 271591708
2019-09-27 09:57:36 -07:00
Nicolas Vasilache 6543e99fe5 Fix JitRunner.cpp Error creation pattern and reactivate tests.
linalg_integration_test.mlir and simple.mlir were temporarily disabled due to an OSS-only failure.

The issue is that, once created, an llvm::Error must be explicitly checked before it can be discarded or overwritten.

This CL fixes the issue and re-enables the tests.

PiperOrigin-RevId: 271589651
2019-09-27 09:56:40 -07:00
Deven Desai fee40fef5c [ROCm] Adding ROCDL Dialect.
This commit introduces the ROCDL Dialect (i.e. the ROCDL ops + the code to lower those ROCDL ops to LLVM intrinsics/functions). Think of the ROCDL Dialect as analogous to the NVVM Dialect, but for AMD GPUs. This patch contains just the essentials needed to get a simple example up and running. We expect to make further additions to the ROCDL Dialect.

This is the first of 3 commits; the follow-ups will be:
 * add a pass that lowers GPU Dialect to ROCDL Dialect
 * add a "mlir-rocm-runner" utility

Closes tensorflow/mlir#146

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/146 from deven-amd:deven-rocdl-dialect e78e8005c75a78912631116c78dc844fcc4b0de9
PiperOrigin-RevId: 271511259
2019-09-27 00:22:32 -07:00
Nicolas Vasilache 445232df0b Decouple tiling from fusion in Linalg.
This CL modifies the linalg-fusion pass such that it does not tile anymore as part of the pass. Tiling is a separate concern that enables linalg fusion but should happen before.
This makes fusion more composable with other decisions.
In particular the fusion pass now becomes greedy and only applies the transformation on a best-effort basis.

This should also let fusion work in a multi-hop fashion with chains of producer/consumers.

Since the fusion pass does not perform tiling anymore, tests are rewritten to be in pretiled form and make the intent of the test clearer (albeit more verbose).

PiperOrigin-RevId: 271357741
2019-09-26 08:44:31 -07:00
Alex Zinenko 99be3351b8 Drop support for memrefs from JitRunner
The support for functions taking and returning memrefs of floats was introduced
in the first version of the runner, created before MLIR had reliable lowering
of allocation/deallocation to library calls.  It forcibly runs an MLIR
transformation converting the affine, loop and standard dialects into the LLVM
dialect, unlike the other runner flows that accept the LLVM dialect directly.
Memref support leads to more complex layering and is generally fragile.  Drop
it in favor of functions returning a scalar, or library-based function calls to
print memrefs and other data structures.

PiperOrigin-RevId: 271330839
2019-09-26 05:42:01 -07:00
Christian Sigg 116dac00ba Add AllReduceOp to GPU dialect with lowering to NVVM.
The reduction operation is currently fixed to "add", and the scope is fixed to "workgroup".

The implementation is currently limited to sizes that are multiples of 32 (the warp size) and no larger than 1024.

PiperOrigin-RevId: 271290265
2019-09-26 00:17:50 -07:00
Lei Zhang 94298cea93 Remove unused variables and methods to address compiler warnings
PiperOrigin-RevId: 271256784
2019-09-25 19:05:30 -07:00
Mahesh Ravishankar 6f0e65441c Add spv.Bitcast operation to SPIR-V dialect
Support the OpBitcast instruction of SPIR-V using the spv.Bitcast
operation. The semantics implemented in the dialect differ from the
SPIR-V spec in that the dialect does not allow conversion between
pointer types and non-pointer types.

PiperOrigin-RevId: 271255957
2019-09-25 19:01:53 -07:00
Jing Pu 47a7021cc3 Change the return type of createPrintCFGGraphPass to match other passes.
PiperOrigin-RevId: 271252404
2019-09-25 18:33:47 -07:00
Lei Zhang ae13c28f3f [spirv] Add SPV_UnaryOp and spv.FNegate
This CL also moves common parsers and printers to the
same section in SPIRVOps.cpp.

PiperOrigin-RevId: 271233546
2019-09-25 16:35:08 -07:00
Mahesh Ravishankar 3a4bee0fe1 Miscellaneous fixes to SPIR-V Deserializer (details below).
1) Process and ignore the following debug instructions: OpSource,
OpSourceContinued, OpSourceExtension, OpString, OpModuleProcessed.
2) While processing the OpTypeInt instruction, ignore the signedness
specification. Currently MLIR doesn't make a distinction between signed
and unsigned integer types.
3) Process and ignore the BufferBlock decoration (similar to the Buffer
decoration). StructType needs to be enhanced to track this attribute
since it's needed for proper validation checks.
4) Report better error for unhandled instruction during
deserialization.

PiperOrigin-RevId: 271057060
2019-09-24 22:51:02 -07:00
Lei Zhang cf00feed03 [spirv] Replace bitwiseCast with llvm::bit_cast
PiperOrigin-RevId: 271035618
2019-09-24 19:25:02 -07:00
Uday Bondhugula 458ede8775 Introduce splat op + provide its LLVM lowering
- introduce splat op in standard dialect (currently for int/float/index input
  type, output type can be vector or statically shaped tensor); see the sketch
  after this list
- implement LLVM lowering (when result type is 1-d vector)
- add constant folding hook for it
- while on Ops.cpp, fix some stale names
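A minimal usage sketch (the custom assembly form is assumed to take the scalar operand followed by the result type):
```
%s = constant 1.0 : f32
// Broadcast the scalar into every lane of a 1-D vector.
%v = splat %s : vector<4xf32>
```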

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#141

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/141 from bondhugula:splat 48976a6aa0a75be6d91187db6418de989e03eb51
PiperOrigin-RevId: 270965304
2019-09-24 12:44:58 -07:00
Nicolas Vasilache 42d8fa667b Normalize lowering of MemRef types
The RFC for unifying Linalg and Affine compilation passes into an end-to-end flow with a predictable ABI and linkage to external function calls raised the question of why we have variable sized descriptors for memrefs depending on whether they have static or dynamic dimensions  (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).

This CL standardizes the ABI on the rank of the memrefs.
The LLVM struct for a memref becomes equivalent to:
```
template <typename Elem, size_t Rank>
struct {
  Elem *ptr;
  int64_t sizes[Rank];
};
```

PiperOrigin-RevId: 270947276
2019-09-24 11:21:49 -07:00
Christian Sigg 74cdbf5909 Clone called functions into nested GPU module.
PiperOrigin-RevId: 270891190
2019-09-24 06:29:54 -07:00
Christian Sigg eba6014cdc Allow null Attribute for value when building GlobalOp.
PiperOrigin-RevId: 270853596
2019-09-24 01:19:53 -07:00
Mahesh Ravishankar 69af468754 Make spirv::RuntimeArrayType part of spirv::CompositeType.
According to the SPIR-V spec, spirv::CompositeType includes
spirv::RuntimeArrayType. This allows using objects of
spirv::RuntimeArrayType with spirv::AccessChainOp.
PiperOrigin-RevId: 270809492
2019-09-23 18:50:47 -07:00
Lei Zhang 0e7edcfe7e Let mlir-translate support -split-input-file
Similar to mlir-opt, having a -split-input-file mode is quite useful
in mlir-translate. It allows to put logically related tests in the
same test file for better organization.

PiperOrigin-RevId: 270805467
2019-09-23 18:18:23 -07:00
Mahesh Ravishankar 75906bd565 Handle OpMemberName instruction in SPIR-V deserializer.
Add support in the deserializer for the OpMemberName instruction. For now
the name is just processed and not associated with the
spirv::StructType being built. That needs an enhancement to
spirv::StructType itself.
Add tests to check for errors reported during deserialization with
some refactoring to common out some utility functions.
PiperOrigin-RevId: 270794524
2019-09-23 17:11:18 -07:00
Jacques Pienaar 4a862fbd63 Use constant's location for reporting errors in parsing of hex constant
Before this change, the line following the error would be reported in some cases.

PiperOrigin-RevId: 270778722
2019-09-23 15:51:42 -07:00
Mahesh Ravishankar 98d1d3fc43 Simplify the way spirv::StructTypes are parsed.
The existing logic to parse spirv::StructTypes is very brittle. This
change simplifies the parsing logic a lot. The simplification also
allows for member decorations to be separated by commas instead of
spaces (which was an artifact of the existing parsing logic). The
change also needs a modification to mlir::parseType to return the
number of chars parsed. Adding a new parseType method to do so.

Also allow specification of spirv::StructType with no members.

PiperOrigin-RevId: 270739672
2019-09-23 12:53:06 -07:00
Mehdi Amini 5583252173 Add convenience methods to set an OpBuilder insertion point after an Operation (NFC)
PiperOrigin-RevId: 270727180
2019-09-23 11:54:55 -07:00
River Riddle 8cb405a8be Add initial callgraph support.
Using the two call interfaces, CallOpInterface and CallableOpInterface, this change adds support for an initial multi-level CallGraph. This call graph builds a set of nodes for each callable region, and connects them via edges. An edge may be any of the following types:
* Abstract
  - An edge not produced by a call operation, used for connecting to internal nodes from external nodes.
* Call
  - A call edge is an edge defined via a call-like operation.
* Child
  - This is an artificial edge connecting nested callgraph nodes.

This callgraph will be used, and improved upon, to begin supporting more interesting interprocedural analyses and transformation. In a followup, this callgraph will be used to support more complex inlining support.

PiperOrigin-RevId: 270724968
2019-09-23 11:44:13 -07:00
River Riddle 8965011fad Add interfaces for call-like/callable operations.
These two operation interfaces will be used in a followup to support building a callgraph:
* CallOpInterface
  - Operations providing this interface are call-like, and have a "call" target. A call target may be a symbol reference, via SymbolRefAttr, or an SSA value.

* CallableOpInterface
  - Operations providing this interface define destinations for call-like operations, e.g. FuncOp. These operations may define any number of callable regions.

PiperOrigin-RevId: 270723300
2019-09-23 11:37:06 -07:00
River Riddle c61991ef01 Refactor DiagnosticEngine to support multiple registered diagnostic handlers.
This fixes a problem with the current save-restore pattern of diagnostic handlers, as there may be a thread race around when the previous handler is destroyed. For example, this occurs when using multiple ParallelDiagnosticHandlers asynchronously:

Handler A
Handler B | - LifeTime - |    Restore A here.
Handler C | --- LifeTime ---| Restore B after it has been destroyed.

The new design allows for multiple handlers to be registered in a stack like fashion. Handlers can return success() to signal that they have fully processed a diagnostic, or failure to propagate otherwise.

PiperOrigin-RevId: 270720625
2019-09-23 11:25:14 -07:00
River Riddle 4b6b58ec0f NFC: Fix warning for uninitialized field.
PiperOrigin-RevId: 270704572
2019-09-23 10:20:13 -07:00
Jacques Pienaar 59e3b30af0 Add variants of interleave that take separator
Make the common case of string separator easier to specify.

PiperOrigin-RevId: 270697581
2019-09-23 09:50:10 -07:00
Christian Sigg b8676da1fc Outline GPU kernel function into a nested module.
Roll forward of commit 5684a12.

When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.

PiperOrigin-RevId: 270639748
2019-09-23 03:17:01 -07:00
Christian Sigg c900d4994e Fix a number of Clang-Tidy warnings.
PiperOrigin-RevId: 270632324
2019-09-23 02:34:27 -07:00
Denis Khalikov 6414c08556 Fix undefined reference to mlir::getElementTypeOrSelf(mlir::Type)
Fix undefined reference:
mlir/lib/Dialect/StandardOps/Ops.cpp:2029:
undefined reference to `mlir::getElementTypeOrSelf(mlir::Type)'

Closes tensorflow/mlir#144

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/144 from denis0x0D:sandbox/fix_undef 494d4f7fa2e98ba21954d2b2f7ec1776b9397e08
PiperOrigin-RevId: 270545190
2019-09-22 09:08:56 -07:00
Manuel Freiberger 2c11997d48 Add integer sign- and zero-extension and truncation to standard.
This adds sign- and zero-extension and truncation of integer types to the
standard dialects. This allows to perform integer type conversions without
having to go to the LLVM dialect and introduce custom type casts (between
standard and LLVM integer types).
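A hedged sketch of what uses of the new ops might look like (op names from this change; the exact assembly form is assumed):
```
%w = sexti %b : i8 to i32    // sign-extend
%u = zexti %b : i8 to i32    // zero-extend
%t = trunci %w : i32 to i8   // truncate
```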

Closes tensorflow/mlir#134

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/134 from ombre5733:sext-zext-trunc-in-std c7657bc84c0ca66b304e53ec03797e09152e4d31
PiperOrigin-RevId: 270479722
2019-09-21 16:14:56 -07:00
Denis Khalikov 2ec8e2be1f [spirv] Add OpControlBarrier and OpMemoryBarrier.
Add OpControlBarrier and OpMemoryBarrier (de)serialization.

Closes tensorflow/mlir#130

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/130 from denis0x0D:sandbox/memory_barrier 2e3fff16bca44904dc1039592cb9a65d526faea8
PiperOrigin-RevId: 270457478
2019-09-21 10:18:34 -07:00
Uday Bondhugula f559c38c28 Upgrade/fix/simplify store to load forwarding
- fix store to load forwarding for a certain set of cases (where
  forwarding shouldn't have happened); use AffineValueMap difference
  based MemRefAccess equality checking; utility logic is also greatly
  simplified

- add missing equality/inequality operators (==/!=) between AffineExpr and ints

- add ==/!= operators on MemRefAccess

Closes tensorflow/mlir#136

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/136 from bondhugula:store-load-forwarding d79fd1add8bcfbd9fa71d841a6a9905340dcd792
PiperOrigin-RevId: 270457011
2019-09-21 10:08:56 -07:00
Christian Sigg 33a3a91ba2 Make GlobalOp's value attribute optional.
Make GlobalOp's value attribute an OptionalAttr. Change code that uses the value to handle 'nullopt'. Translate an uninitialized value attribute to llvm::UndefValue.

PiperOrigin-RevId: 270423646
2019-09-21 01:20:28 -07:00
River Riddle 3a643de92b NFC: Pass OpAsmPrinter by reference instead of by pointer.
MLIR follows the LLVM style of pass-by-reference.

PiperOrigin-RevId: 270401378
2019-09-20 20:43:35 -07:00
River Riddle 729727ebc7 NFC: Pass OperationState by reference instead of by pointer.
MLIR follows the LLVM convention of passing by reference instead of by pointer.

PiperOrigin-RevId: 270396945
2019-09-20 19:47:32 -07:00
River Riddle 91125d33ed Avoid iterator invalidation when recursively computing pattern depth.
computeDepth calls itself recursively, which may insert into minPatternDepth. minPatternDepth is a DenseMap, which invalidates iterators on insertion, so this may lead to asan failures.

PiperOrigin-RevId: 270374203
2019-09-20 16:30:29 -07:00
River Riddle 2797517ecf NFC: Pass OpAsmParser by reference instead of by pointer.
MLIR follows the LLVM style of pass-by-reference.

PiperOrigin-RevId: 270315612
2019-09-20 11:37:21 -07:00
Nicolas Vasilache d8fda38cea Use SmallVectorImpl in getStrides
No need to force a particular size on the user of the API.

PiperOrigin-RevId: 270310570
2019-09-20 11:13:58 -07:00
Nicolas Vasilache a00b568277 Add utility to extract strides from layout map in MemRefType.
The RFC for unifying Linalg and Affine compilation passes into an end-to-end flow discusses the notion of a strided MemRef (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).

This CL adds helper functions to extract strides from the layout map which in turn will allow converting between a strided form of the type and a layout map.

For now strides are only computed on a single affine map with a single result (i.e. the closed subset of linearization maps that are compatible with striding semantics). This restriction will be reevaluated / lifted in the future based on concrete use cases.
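A worked illustration of the intended extraction (the concrete numbers are chosen for the example):
```
// Layout map:        (d0, d1)[s0] -> (d0 * 128 + d1 + s0)
// Extracted strides: [128, 1]
// Extracted offset:  s0 (dynamic, provided by a symbol)
```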

PiperOrigin-RevId: 270284686
2019-09-20 09:26:21 -07:00
Mahesh Ravishankar 9a4f5d2ee3 Allow specification of decorations on SPIR-V StructType members.
Allow specification of decorations on SPIR-V StructType members. If the
struct has layout information, these decorations are to be specified
after the offset specification of the member. These decorations are
emitted as OpMemberDecorate instructions on the struct <id>. Update
(de)serialization to handle these decorations.

PiperOrigin-RevId: 270130136
2019-09-19 14:50:05 -07:00
George Karpenkov 2df646bef6 Automated rollback of commit 5684a12434
PiperOrigin-RevId: 270126672
2019-09-19 14:34:30 -07:00
Feng Liu c8961d408e Quantize attribute values by per axis quantization parameters
A new converter with per axis quantization parameters is added to quantize a
dense elements attribute. For each slice along the quantization axis, it
creates a uniform quantized value converter, with different scale and zero
point, and quantizes the values in the slice.

The current implementation doesn't handle sparse elements attributes.

PiperOrigin-RevId: 270121986
2019-09-19 14:12:08 -07:00
MLIR Team e79bfefb89 Add address space attribute to LLVMIR's GlobalOp.
PiperOrigin-RevId: 270012505
2019-09-19 04:50:46 -07:00
MLIR Team 5684a12434 Outline GPU kernel function into a nested module.
When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.

PiperOrigin-RevId: 269987720
2019-09-19 01:51:28 -07:00
River Riddle 25f0f769aa NFC: Remove stray logging from ~Block().
PiperOrigin-RevId: 269941815
2019-09-18 19:21:05 -07:00
River Riddle 35df51086a Fix nested dominance relationship between parent results and child operations.
This modifies DominanceInfo::properlyDominates(Value *value, Operation *op) to return false if the value is defined by a parent operation of 'op'. This prevents using values defined by the parent operation from within any child regions.

PiperOrigin-RevId: 269934920
2019-09-18 18:23:41 -07:00
Uday Bondhugula 727a50ae2d Support symbolic operands for memref replacement; fix memrefNormalize
- allow symbols in index remapping provided for memref replacement
- fix memref normalize crash on cases with layout maps with symbols

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Reported by: Alex Zinenko

Closes tensorflow/mlir#139

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/139 from bondhugula:memref-rep-symbols 2f48c1fdb5d4c58915bbddbd9f07b18541819233
PiperOrigin-RevId: 269851182
2019-09-18 11:26:11 -07:00
MLIR Team 1c73be76d8 Unify error messages to start with lower-case.
PiperOrigin-RevId: 269803466
2019-09-18 07:45:17 -07:00
Alex Zinenko 5709aeb993 SDBM: support sum expressions on the LHS of stripe expressions
Introduce support for applying the stripe operator to sum expressions, as in
  (x + A) # B = x + A - (x + A) mod B.
This is required to represent a combination of tiling and padding in the SDBM
framework, and is a valid SDBM construct that was not originally supported.

PiperOrigin-RevId: 269758807
2019-09-18 02:17:34 -07:00
Alex Zinenko a15e0ce1ba Simplify SDBM expressions more aggressively in operators and conversions
Extend SDBM simplification patterns to support more cases where the addition of
two expressions each involving one or two variables would result in a sum
expression that only contains one variable and thus remains in the SDBM domain.
This is made possible by the new canonical structure of SDBM where the constant
term appears once.  This simplification will be necessary to support
round-tripping of stripe expressions containing constant terms on the LHS
through affine expressions.

PiperOrigin-RevId: 269757732
2019-09-18 02:09:08 -07:00
River Riddle b58d9aee11 Add support to OpAsmParser for parsing unknown keywords.
This is useful in several cases, for example a user may want to sugar the syntax of a string (as we do with custom operation syntax), or avoid many nested ifs for parsing a set of known keywords.

PiperOrigin-RevId: 269695451
2019-09-17 17:55:34 -07:00
Mahesh Ravishankar 9330c1b9a1 Add (de)serialization support for OpRuntimeArray.
Update the SPIR-V (de)serialization to handle RuntimeArrayType.

PiperOrigin-RevId: 269667196
2019-09-17 15:21:57 -07:00
Lei Zhang af45ca844f Register a -test-spirv-roundtrip hook to mlir-translate
This CL registers a new mlir-translate hook, -test-spirv-roundtrip,
for testing SPIR-V serialization and deserialization round-trip.

This CL also moves the existing -serialize-spirv and
-deserialize-spirv hooks to one source file.

PiperOrigin-RevId: 269659528
2019-09-17 14:48:24 -07:00
Lei Zhang b991e8b1e4 Support file-to-file translation in mlir-translate
Existing translations are either from MLIR or to MLIR. To support
cases like round-tripping some external format via MLIR, one must
chain two mlir-translate invocations together using pipes. This
makes it problematic to support -split-input-file in mlir-translate,
given that it won't work across pipes.

Motivated by the above, this CL adds another translation category
that allows file-to-file translation. This gives users more freedom.

PiperOrigin-RevId: 269636438
2019-09-17 13:04:34 -07:00
Lei Zhang b00a522b80 Change MLIR translation functions signature
This CL changes translation functions to take MemoryBuffer
as input and raw_ostream as output. It is generally better to
avoid handling files directly in a library (unless the library
is specifically for file manipulation) and we can unify all
file handling to the mlir-translate binary itself.

PiperOrigin-RevId: 269625911
2019-09-17 12:16:45 -07:00
Uday Bondhugula bd7de6d4df Add rewrite pattern to compose maps into affine load/stores
- add canonicalization pattern to compose maps into affine loads/stores;
  templatize the pattern and reuse it for affine.apply as well (see the
  sketch after this list)

- rename getIndices -> getMapOperands() (getIndices is confusing since
  these are no longer the indices themselves but operands to the map
  whose results are the indices). This also makes the accessor uniform
  across affine.apply/load/store. Change arg names on the affine
  load/store builder to avoid confusion. Drop an unused confusing build
  method on AffineStoreOp.

- update incomplete doc comment for canonicalizeMapAndOperands (this was
  missed from a previous update).
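A rough before/after sketch of the kind of composition this enables (illustrative only):
```
// Before: an affine.apply feeding an affine.load.
%idx = affine.apply (d0) -> (d0 + 1)(%i)
%v = affine.load %A[%idx] : memref<100xf32>

// After canonicalization: the map is composed into the load.
%v = affine.load %A[%i + 1] : memref<100xf32>
```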

Addresses issue tensorflow/mlir#121

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#122

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/122 from bondhugula:compose-load-store e71de1771e56a85c4282c10cb43f30cef0701c4f
PiperOrigin-RevId: 269619540
2019-09-17 11:49:45 -07:00
Mehdi Amini 62e1faa6f6 Add missing CMake dependency from libAnalysis to the Vector dialect
Fixes tensorflow/mlir#138

PiperOrigin-RevId: 269509668
2019-09-17 00:39:00 -07:00
Mahesh Ravishankar 2d86ad79f0 Autogenerate (de)serialization for Extended Instruction Sets
A generic mechanism for (de)serialization of extended instruction sets
is added with this CL. To facilitate this, a new class
"SPV_ExtendedInstSetOp" is added which is a base class for all
operations corresponding to extended instruction sets. The methods to
(de)serialize such ops, as well as their dispatch, are generated
automatically.

The behavior controlled by autogenSerialization and hasOpcode is also
slightly modified to enable this. They are now decoupled.
1) Setting hasOpcode=1 means the operation has a corresponding
   opcode in SPIR-V binary format, and its dispatch for
   (de)serialization is automatically generated.
2) Setting autogenSerialization=1 generates the function for
   (de)serialization automatically.
So now it is possible to have hasOpcode=0 and autogenSerialization=1
(for example SPV_ExtendedInstSetOp).

Since the dispatch functions are also auto-generated, the input file
needs to contain all operations. To this effect, SPIRVGLSLOps.td is
included into SPIRVOps.td. This makes the previously added
SPIRVGLSLOps.h and SPIRVGLSLOps.cpp unnecessary, and are deleted.

The SPIRVUtilsGen.cpp is also changed to make better use of
formatv, making the code more readable.

PiperOrigin-RevId: 269456263
2019-09-16 17:12:33 -07:00
Denis Khalikov 8a34d5d18c [spirv] Add support for function calls.
Add spv.FunctionCall operation and (de)serialization.
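A minimal call sketch (the assembly form shown here is an assumption; callee names are hypothetical):
```
// Call with a result, and a call to a void function.
%r = spv.FunctionCall @compute(%arg0) : (i32) -> i32
spv.FunctionCall @side_effect() : () -> ()
```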

Closes tensorflow/mlir#137

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/137 from denis0x0D:sandbox/function_call_op e2e6f07d21e7f23e8b44c7df8a8ab784f3356ce4
PiperOrigin-RevId: 269437167
2019-09-16 15:39:54 -07:00
River Riddle 9619ba10d4 Add support for multi-level value mapping to DialectConversion.
When performing A->B->C conversion, an operation may still refer to an operand of A. This makes it necessary to unmap through multiple levels of replacement for a specific value.

PiperOrigin-RevId: 269367859
2019-09-16 10:38:19 -07:00
Lei Zhang 6934a337f0 [spirv] Add support for BitEnumAttr
Certain enum classes in SPIR-V, like function/loop control and memory
access, are bitmasks. This CL introduces a BitEnumAttr to properly
model this and drive auto-generation of verification code and utility
functions. We still store the attribute using a 32-bit IntegerAttr
for minimal memory footprint and easy (de)serialization. But utility
conversion functions are adjusted to inspect each bit and generate
"|"-concatenated strings for the bits, and vice versa.

Each such enum class has a "None" case that means no bit is set. We
need special handling for "None". Because of this, the logic is not
general anymore. So right now the definition is placed in the SPIR-V
dialect. If later this turns out to be useful for other dialects,
then we can see how to properly adjust it and move to OpBase.td.

Added tests for SPV_MemoryAccess to check and demonstrate.

PiperOrigin-RevId: 269350620
2019-09-16 09:23:22 -07:00
Alex Zinenko cb3ecb5291 Overhaul the SDBM expression kind hierarchy
Swap the allowed nesting of sum and diff expressions: now a diff expression can
contain a sum expression, but only on the left hand side.  A difference of two
expressions sum must be canonicalized by grouping their constant terms in a
single expression.  This change of sturcture became possible thanks to the
introduction of the "direct" super-kind.  It is necessary to enable support of
sum expressions on the left hand side of the stripe expression.

SDBM expressions are now grouped into the following structure
- expression
  - varying
    - direct
      - sum <- (term, constant)
      - term
        - symbol
        - dimension
        - stripe <- (term, constant)
    - negation <- (direct)
    - difference <- (direct, term)
  - constant
The notation <- (...) denotes the types of subexpressions a compound
expression can combine.

PiperOrigin-RevId: 269337222
2019-09-16 08:16:06 -07:00
MLIR Team 0ce64b0bf3 Unify how errors are emitted in LaunchFuncOp verification.
PiperOrigin-RevId: 269331869
2019-09-16 07:45:59 -07:00
MLIR Team 1da0290c4b Error out when kernel function is not found while translating GPU calls.
PiperOrigin-RevId: 269327909
2019-09-16 07:19:36 -07:00
Alex Zinenko 6755dfdec9 Drop makePositionAttr and the like in favor of Builder::getI64ArrayAttr
The helper functions makePositionAttr() and positionAttr() were originally
introduced in the lowering-to-LLVM-dialect pass to construct integer array
attributes that are used for static positions in extract/insertelement.
Constructing an integer array attribute being fairly common, a utility function
Builder::getI64ArrayAttr was later introduced into the Builder API.  Drop
makePositionAttr and similar homegrown functions and use that API instead.
PiperOrigin-RevId: 269295836
2019-09-16 03:31:09 -07:00
Mahesh Ravishankar 9814b3fa0d Add mechanism to specify extended instruction sets in SPIR-V.
Add support for specifying extended instruction sets. The operations
in the SPIR-V dialect are named as 'spv.<extension-name>.<op-name>'. Use
this mechanism to define a 'Exp' operation from GLSL(450)
instructions.
Later CLs will add support for (de)serialization of these operations,
and update the dialect generation scripts to auto-generate the
specification using the spec directly.
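Under this naming scheme, the GLSL(450) exponential defined here would appear roughly as (assembly form assumed):
```
%1 = spv.GLSL.Exp %0 : f32
```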

Additional changes:
Add a Type Constraint to OpBase.td to check for vectors of specified
lengths. This is used to check that the vector types used in the SPIR-V
dialect are of length 2, 3, or 4.
Update SPIRVBase.td to use this Type constraints for vectors.

PiperOrigin-RevId: 269234377
2019-09-15 19:40:07 -07:00
River Riddle d37777c440 Update the IRPrinter instrumentation to work on non function/module operations.
This is necessary now that the pass manager may work on different types of operations.

PiperOrigin-RevId: 269139669
2019-09-14 21:56:38 -07:00
River Riddle bbe65b46f5 NFC: Pass PassInstrumentations by unique_ptr instead of raw pointer.
This makes the ownership model explicit, and removes potential user errors.

PiperOrigin-RevId: 269122834
2019-09-14 17:44:50 -07:00
River Riddle cb1bcba69b NFC: Merge OpPass with OperationPass into just OperationPass.
OperationPass' are defined exactly the same way as they are now:
   class DerivedPass :  public OperationPass<DerivedPass>;

OpPass' are now defined as OperationPass, but with an additional template parameter for the operation type:
   class DerivedPass :  public OperationPass<DerivedPass, FuncOp>;

PiperOrigin-RevId: 269122410
2019-09-14 17:37:43 -07:00
Jing Pu 38e7226606 Add convenience methods to create i8 and i16 attributes in Builder.
PiperOrigin-RevId: 269120226
2019-09-14 17:02:54 -07:00
Uday Bondhugula 4f32ae61b4 NFC - Move explicit copy/dma generation utility out of pass and into LoopUtils
- turn copy/dma generation method into a utility in LoopUtils, allowing
  it to be reused elsewhere.

- no functional/logic change to the pass/utility

- trim down header includes in files affected

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#124

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/124 from bondhugula:datacopy 9f346e62e5bd9dd1986720a30a35f302eb4d3252
PiperOrigin-RevId: 269106088
2019-09-14 13:23:48 -07:00
Uday Bondhugula 1366467a3b update normalizeMemRef utility; handle missing failure check + add more tests
- take care of symbolic operands with alloc
- add missing check for compose map failure and a test case
- add test cases on strides
- drop incorrect check for one-to-one'ness

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#132

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/132 from bondhugula:normalize-memrefs 8aebf285fb0d7c19269d85255aed644657e327b7
PiperOrigin-RevId: 269105947
2019-09-14 13:21:35 -07:00
Uday Bondhugula 018cfa94d9 Clean up build trip count analysis method - avoid mutating IR
- NFC - on any pass/utility logic/output.

- Resolve TODO; the method building loop trip count maps was
  creating and deleting affine.apply ops (transforming IR from under
  analysis!, strictly speaking). Introduce AffineValueMap::difference to
  do this correctly (without the need to create any IR).

- Move AffineApplyNormalizer out so that its methods are reusable from
  AffineStructures.cpp; add a helper method 'normalize' to it. Fix
  AffineApplyNormalizer::renumberOneDim (Issue tensorflow/mlir#89).

- Trim includes on files touched.

- add test case on a scenario previously not covered

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#133

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/133 from bondhugula:trip-count-build 7fc34d857f7788f98b641792cafad6f5bd50e47b
PiperOrigin-RevId: 269101118
2019-09-14 12:10:55 -07:00
River Riddle 2de18fb84d NFC: Fix stray character in error message: 1 -> '
PiperOrigin-RevId: 269091468
2019-09-14 09:44:23 -07:00
Uday Bondhugula f2eb0f02fa Add pattern to canonicalize for loop bounds
- add pattern to canonicalize affine.for loop bounds (using
  canonicalizeMapAndOperands)
- rename AffineForLoopBoundFolder -> AffineForLoopBoundFolder for
  consistency

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#111

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/111 from bondhugula:bound-canonicalize ee8fb7f43a7ffd45f6df3f53c95098d8b7e494c7
PiperOrigin-RevId: 269041220
2019-09-13 22:11:56 -07:00
River Riddle 4e48beadbb Verify that ModuleOps only contain dialect specific attributes.
ModuleOp has no expected operations, so only dialect-specific attributes are valid.

PiperOrigin-RevId: 269020062
2019-09-13 18:19:33 -07:00
Uday Bondhugula 1e6a93b7ca add missing memref cast fold pattern for dim op
- add missing canonicalization pattern to fold memref_cast + dim to
  dim (needed to propagate constant when folding a dynamic shape to
  a static one)

- also fix an outdated/inconsistent comment in StandardOps/Ops.td

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#126

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/126 from bondhugula:quickfix 4566e75e49685c532faffff91d64c5d83d4da524
PiperOrigin-RevId: 269020058
2019-09-13 18:18:48 -07:00
River Riddle d780bdef20 Publicly expose the functionality to parse a textual pass pipeline.
This allows for users other than those on the command line to apply a textual description of a pipeline to a given pass manager.

PiperOrigin-RevId: 269017028
2019-09-13 17:54:00 -07:00
Lei Zhang 113aadddf9 Update SPIR-V symbols and use GLSL450 instead of VulkanKHR
SPIR-V recently published v1.5, which brings a bunch of symbols
into core. So the suffix "KHR"/"EXT"/etc. is removed from the
symbols. We use a script to pull information from the spec
directly.

Also changed conversion and tests to use GLSL450 instead of
VulkanKHR memory model. GLSL450 is still the main memory model
supported by Vulkan shaders and it does not require extra
capability to enable.

PiperOrigin-RevId: 268992661
2019-09-13 15:26:32 -07:00
River Riddle f1b100c77b NFC: Finish replacing FunctionPassBase/ModulePassBase with OpPassBase.
These directives were temporary during the generalization of FunctionPass/ModulePass to OpPass.

PiperOrigin-RevId: 268970259
2019-09-13 13:34:27 -07:00
River Riddle 8a1cdeb31b Forward diagnostics from untracked threads in ParallelDiagnosticHandler.
This allows for the use of multiple ParallelDiagnosticHandlers without having them conflict with each other.

PiperOrigin-RevId: 268967407
2019-09-13 13:19:19 -07:00
River Riddle 9274ed66ef Refactor pass pipeline command line parsing to support explicit pipeline strings.
This allows for explicitly specifying the pipeline to add to the pass manager. This includes the nesting structure, as well as the passes/pipelines to run. A textual pipeline string is defined as a series of names, each of which may in itself recursively contain a nested pipeline description. A name is either the name of a registered pass or pass pipeline (e.g. "cse"), or the name of an operation type (e.g. "func").

For example, the following pipeline:
$ mlir-opt foo.mlir -cse -canonicalize -lower-to-llvm

Could now be specified as:
$ mlir-opt foo.mlir -pass-pipeline='func(cse, canonicalize), lower-to-llvm'

This will allow for running pipelines on nested operations, such as SPIR-V modules. This does not remove any of the current functionality, and in fact the two can be used in unison. The new option is available via 'pass-pipeline'.

PiperOrigin-RevId: 268954279
2019-09-13 12:10:31 -07:00
Smit Hinsu 1854c64c7c Log the name of the generated illegal operation in DialectConversion debug mode
PiperOrigin-RevId: 268859399
2019-09-13 01:37:38 -07:00
Geoffrey Martin-Noble 2ccbb3f1ce Cmpf constant folding for nan and inf
PiperOrigin-RevId: 268783645
2019-09-12 15:43:59 -07:00
Lei Zhang a84bc68acc [spirv] Add support for spv.loop (de)serialization
This CL adds support for serializing and deserializing spv.loop ops.
This adds support for spv.Branch and spv.BranchConditional op
(de)serialization, too, because they are needed for spv.loop.

PiperOrigin-RevId: 268536962
2019-09-11 14:02:59 -07:00
Alex Zinenko e15356f8ed Rename SDBMPositiveExpr to SDBMTermExpr
This better reflects how this kind of expression is used and avoids
potential confusion, since the expression can take negative values.  Term
expressions comprise dimensions, symbols and stripe expressions.  In an SDBM
domain, a stripe expression always corresponds to a variable, input or
temporary.  This expression can appear anywhere an input variable can,
including on the LHS of other stripe expressions.

PiperOrigin-RevId: 268486066
2019-09-11 10:18:29 -07:00
MLIR Team d732aaf2cb Don't leak TargetMachine in ExecutionEngine::setupTargetTriple
PiperOrigin-RevId: 268361054
2019-09-10 19:03:21 -07:00
Lei Zhang ee8cbccacf Add folding rule for spv.CompositeExtract
If the composite is a constant, we can fold it away. This only
supports vector and array constants for now, given that struct
constant is not supported in spv.constant yet.

PiperOrigin-RevId: 268350340
2019-09-10 17:48:24 -07:00
Feng Liu cf0a782339 Remove the constraint that min / max should straddle zero
Since we apply nudging to the zero point to make sure the nudged zero points
can be in the range of [qmin, qmax], the constraint that rmin / rmax should
straddle zero isn't necessary.

This also matches the documentation of tensorflow's FakeQuantWithMinMaxArgs op,
where min and max don't need to straddle zero:
https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args

PiperOrigin-RevId: 268296285
2019-09-10 13:26:46 -07:00
Feng Liu c68d5467d6 Convert ConstFakeQuantPerAxis to qcast and dcast pair
This also adds a test for fakeQuantAttrsToType for per-channel fake quant.

PiperOrigin-RevId: 268260032
2019-09-10 10:50:57 -07:00
Jacques Pienaar 277b6136ee Remove unused variable
PiperOrigin-RevId: 268173638
2019-09-10 01:31:17 -07:00
Jacques Pienaar a23f69a37b Remove redundant qualification
Address GCC error: extra qualification not allowed [-fpermissive]

PiperOrigin-RevId: 268133737
2019-09-09 19:50:53 -07:00
Jacques Pienaar 2660623a88 Add a pass that generates, per block in a function, a GraphViz DOT graph with ops as nodes
* Add GraphTraits that treat a block as a graph, Operation* as nodes, and use-relationships as edges;
  - Just basic graph output;
* Add use iterator to iterate over all uses of an Operation;
* Add testing pass to generate op graph;

This does not yet support arbitrary operations other than functions, nor nested regions.

PiperOrigin-RevId: 268121782
2019-09-09 18:12:41 -07:00
Feng Liu d3a6dbc0b8 [NFC] Rename ExpressedToUniformQuantizedType to ExpressedToQuantizedType
PiperOrigin-RevId: 268090906
2019-09-09 15:29:59 -07:00
Feng Liu 27d776fa6d Convert per channel fake quant attributes to type
For per channel fake quant attributes, the returned type should be
UniformQuantizedPerAxisType. Currently, this method isn't under test because we
haven't added the quant_ConstFakeQuantPerAxis op and the convert method.

PiperOrigin-RevId: 268084017
2019-09-09 14:57:59 -07:00
River Riddle 893c86fff7 Explicitly declare the OpPassManager move constructor to avoid undefined errors.
Some compilers will try to auto-generate the destructor, instead of using the user-provided one, when creating a default move constructor.

PiperOrigin-RevId: 268067367
2019-09-09 13:44:24 -07:00
MLIR Team 36508528c7 Overload LLVM::TerminatorOp::build() for empty operands list.
PiperOrigin-RevId: 268041584
2019-09-09 11:39:03 -07:00
River Riddle e702875d16 Add support for coalescing adjacent nested pass pipelines.
This allows for parallelizing across pipelines of multiple operation types. AdaptorPasses can now hold pass managers for multiple operation types and will dispatch based upon the operation being operated on.

PiperOrigin-RevId: 268017344
2019-09-09 09:52:25 -07:00
Stephan Herhut 318ff019cf Addressing some late review comments on kernel inlining.
Just formatting and better lit tests, no functional change.

PiperOrigin-RevId: 267942907
2019-09-09 01:15:47 -07:00
Mehdi Amini 42b60d34fc Add `parseGenericOperation()` to the OpAsmParser
This method parses an operation in its generic form, from the current parser
state. This is the symmetric of OpAsmPrinter::printGenericOp(). An immediate
use case is illustrated in the test dialect, where an operation wraps another
one in its region and makes use of a single-line pretty-print form.

PiperOrigin-RevId: 267930869
2019-09-08 23:40:12 -07:00
River Riddle 120509a6b2 Refactor PassTiming to support nested pipelines.
This is done via a new set of instrumentation hooks runBeforePipeline/runAfterPipeline, that signal the lifetime of a pass pipeline on a specific operation type. These hooks also provide the parent thread of the pipeline, allowing for accurate merging of timers running on different threads.
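
A rough sketch of an instrumentation built on the new hooks; only the hook names come from the description above, and the parameter types are assumptions:

```
// Sketch only: parameter types are assumptions.
struct PipelineTimer : public PassInstrumentation {
  void runBeforePipeline(const OperationName &name,
                         const PipelineParentInfo &parentInfo) override {
    // Start (or resume) a timer keyed on (op type, parent thread).
  }
  void runAfterPipeline(const OperationName &name,
                        const PipelineParentInfo &parentInfo) override {
    // Stop the timer and merge it into the parent thread's timer tree.
  }
};
```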

PiperOrigin-RevId: 267909193
2019-09-08 19:58:13 -07:00
Mehdi Amini 6443583bfd Refactor getUsedValuesDefinedAbove to expose a variant taking a callback (NFC)
This will allow clients to implement a different collection strategy on these
values, including collecting each uses within the region for example.

PiperOrigin-RevId: 267803978
2019-09-07 17:03:01 -07:00
Uday Bondhugula 713ab0dde7 Set mlir-cpu-runner JIT codegen opt level correctly
- the JIT codegen was being run at the default -O0 level; instead,
  propagate the opt level from the cmd line.

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#123

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/123 from bondhugula:jit-runner 3b055e47f94c9a48bf487f6400787478738cda02
PiperOrigin-RevId: 267778586
2019-09-07 10:00:25 -07:00
Mehdi Amini 53bb528b19 Wrap debug dump in LLVM_DEBUG
PiperOrigin-RevId: 267774506
2019-09-07 08:53:52 -07:00
River Riddle b78410fd81 Restrict affine inlining to just Function operations.
The current restrictions on dim/symbols require a top-level symbol for the conservative case of a non-affine region. This should be relaxed in the future.

PiperOrigin-RevId: 267641838
2019-09-06 11:44:19 -07:00
Nagy Mostafa 8154370b49 Add custom builder for AffineIfOp
Closes tensorflow/mlir#109

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/109 from nmostafa:nmostafa/AffineIfOp 7dbf2115f0092ffab26381ea8704aa05a0253971
PiperOrigin-RevId: 267633077
2019-09-06 11:03:03 -07:00
Nicolas Vasilache 1b8eff8fcd Simplify Linalg ABI integration with external function calls.
View descriptors are converted to *pointer to* LLVM struct to avoid ABI issues related to C struct packing. This creates unnecessary complexity and hampers unification with memrefs.
Instead, this CL makes view descriptors convert to an LLVM struct (as was originally the case) and promotes all structs to pointers right before calling an external function.

PiperOrigin-RevId: 267602693
2019-09-06 08:31:19 -07:00
Uday Bondhugula 854a384f50 Integer set + operands / affine if op canonicalization
- turn canonicalizeMapAndOperands into a template that works on both
  sets and maps, and use it to introduce a utility to canonicalize an
  affine integer set and its operands
- add pattern to canonicalize affine if op's.
- rename IntegerSet::getNumOperands -> IntegerSet::getNumInputs to be
  consistent with AffineMap
- add missing accessors for IntegerSet

Doesn't need extensive testing since canonicalizeSetAndOperands just
reuses canonicalizeMapAndOperands' logic, and the latter is tested on
affine.apply map + operands; the new method works the same way on an
integer set + operands of an affine if op for example.
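
For illustration, rough usage on an affine.if op might look like the sketch below; `ifOp` is a placeholder and the exact signatures are assumptions:

```
// Sketch only: signatures are assumptions; `ifOp` is an affine.if op.
IntegerSet set = ifOp.getIntegerSet();
SmallVector<Value *, 4> operands(ifOp.getOperands().begin(),
                                 ifOp.getOperands().end());
// Simplify the set and drop duplicate/unused/constant operands.
canonicalizeSetAndOperands(&set, &operands);
```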

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#112

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/112 from bondhugula:set-canonicalize eff72f23250b96fa7d9f5caff3877440f5de2cec
PiperOrigin-RevId: 267532876
2019-09-05 23:12:35 -07:00
River Riddle 85bc4889b3 Add support for conservatively inlining Affine operations.
This commit defines an initial implementation of the DialectInlinerInterface for the AffineOps dialect. This change allows for affine operations to be inlined into any region that is not an affine region. Inlining into affine regions requires special handling for dimension/symbol identifiers that will be added in followups.

PiperOrigin-RevId: 267467078
2019-09-05 15:20:25 -07:00
Lei Zhang 916eb980b0 [spirv] Add spv.loop
SPIR-V can explicitly declare structured control-flow constructs using merge
instructions. These explicitly declare a header block before the control
flow diverges and a merge block where control flow subsequently converges.
These blocks delimit constructs that must nest, and can only be entered
and exited in structured ways.

Instead of having a `spv.LoopMerge` op to directly model loop merge
instruction for indicating the merge and continue target, we use regions
to delimit the boundary of the loop: the merge target is the next op
following the `spv.loop` op and the continue target is the block that
has a back-edge pointing to the entry block inside the `spv.loop`'s region.
This way it's easier to discover all blocks belonging to a construct and
it plays nicer with the MLIR system.

Updated the SPIR-V.md doc.

PiperOrigin-RevId: 267431010
2019-09-05 12:45:53 -07:00
River Riddle 0ba0087887 Add the initial inlining infrastructure.
This defines a set of initial utilities for inlining a region(or a FuncOp), and defines a simple inliner pass for testing purposes.
A new dialect interface is defined, DialectInlinerInterface, that allows for dialects to override hooks controlling inlining legality. The interface currently provides the following hooks, but these are just preliminary and should be changed/added to/modified as necessary:

* isLegalToInline
  - Determine if a region can be inlined into one of this dialect, *or* if an operation of this dialect can be inlined into a given region.

* shouldAnalyzeRecursively
  - Determine if an operation with regions should be analyzed recursively for legality. This allows for child operations to be closed off from the legality checks for operations like lambdas.

* handleTerminator
  - Process a terminator that has been inlined.

This CL adds support for inlining StandardOps, but other dialects will be added in followups as necessary.
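
For illustration, a dialect-side implementation might look roughly like the sketch below; the hook names come from the list above, while the struct name and exact signatures are assumptions:

```
// Sketch only: hook signatures are assumptions.
struct MyDialectInlinerInterface : public DialectInlinerInterface {
  using DialectInlinerInterface::DialectInlinerInterface;

  /// Allow inlining regions/operations of this dialect anywhere.
  bool isLegalToInline(Region *dest, Region *src,
                       BlockAndValueMapping &valueMapping) const final {
    return true;
  }

  /// Also analyze nested operations for legality.
  bool shouldAnalyzeRecursively(Operation *op) const final { return true; }

  /// Process an inlined terminator, e.g. forward its operands to the
  /// values that replace the parent call's results.
  void handleTerminator(Operation *op,
                        ArrayRef<Value *> valuesToReplace) const final {
    // ...
  }
};
```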

PiperOrigin-RevId: 267426759
2019-09-05 12:24:13 -07:00
Nicolas Vasilache cf26e5faf5 Use transform function on llvm::Module in the ExecutionEngine
The refactoring of ExecutionEngine dropped the usage of the irTransform function used to pass -O3 and other options to LLVM. As a consequence, the proper optimizations do not kick in in LLVM-land.
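
For reference, a sketch of how a transform function is typically wired back in; the helper names and parameter lists here are assumptions rather than the exact ones used by the runner:

```
// Sketch only: helper names and parameter lists are assumptions.
auto transformer = mlir::makeOptimizingTransformer(
    /*optLevel=*/3, /*sizeLevel=*/0, /*targetMachine=*/nullptr);
auto expectedEngine = mlir::ExecutionEngine::create(module, transformer);
if (!expectedEngine) {
  // Handle the llvm::Error carried by the expected value.
}
```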

This CL makes use of the transform function and allows producing avx512 instructions, on an internal example, when using:
`mlir-cpu-runner -dump-object-file=1 -object-filename=foo.o` combined with `objdump -D foo.o`.

Assembly produced resembles:
```
    2b2e:       62 72 7d 48 18 04 0e    vbroadcastss (%rsi,%rcx,1),%zmm8
    2b35:       62 71 7c 48 28 ce       vmovaps %zmm6,%zmm9
    2b3b:       62 72 3d 48 a8 c9       vfmadd213ps %zmm1,%zmm8,%zmm9
    2b41:       62 f1 7c 48 28 cf       vmovaps %zmm7,%zmm1
    2b47:       62 f2 3d 48 a8 c8       vfmadd213ps %zmm0,%zmm8,%zmm1
    2b4d:       62 f2 7d 48 18 44 0e    vbroadcastss 0x4(%rsi,%rcx,1),%zmm0
    2b54:       01
    2b55:       62 71 7c 48 28 c6       vmovaps %zmm6,%zmm8
    2b5b:       62 72 7d 48 a8 c3       vfmadd213ps %zmm3,%zmm0,%zmm8
    2b61:       62 f1 7c 48 28 df       vmovaps %zmm7,%zmm3
    2b67:       62 f2 7d 48 a8 da       vfmadd213ps %zmm2,%zmm0,%zmm3
    2b6d:       62 f2 7d 48 18 44 0e    vbroadcastss 0x8(%rsi,%rcx,1),%zmm0
    2b74:       02
    2b75:       62 f2 7d 48 a8 f5       vfmadd213ps %zmm5,%zmm0,%zmm6
    2b7b:       62 f2 7d 48 a8 fc       vfmadd213ps %zmm4,%zmm0,%zmm7
```
etc.

Fixes tensorflow/mlir#120

PiperOrigin-RevId: 267281097
2019-09-04 19:17:16 -07:00
MLIR Team b5652720c1 Retain address space during MLIR > LLVM conversion.
PiperOrigin-RevId: 267206460
2019-09-04 12:26:52 -07:00
Jacques Pienaar 636bcbade0 Make isIsolatedAbove more robust to invalid IR
This function is only called from the verifier.

PiperOrigin-RevId: 267145495
2019-09-04 07:03:07 -07:00
Uday Bondhugula 8c9dc690eb pipeline-data-transfer: remove dead tag alloc's and improve test coverage for replaceMemRefUsesWith / pipeline-data-transfer
- address remaining comments from PR tensorflow/mlir#87 for better test coverage for
  pipeline-data-transfer/replaceAllMemRefUsesWith
- remove dead tag allocs the same way they are removed for the replaced buffers

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#106

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/106 from bondhugula:followup 9e868666d047e8d43e5f82f43e4093b838c710fa
PiperOrigin-RevId: 267144774
2019-09-04 06:59:09 -07:00
Stephan Herhut dfd06af562 Make GPU kernel outlining inline constants.
It is generally beneficial to pass fewer arguments to a kernel, so cloning constants
into the kernel is worthwhile.

PiperOrigin-RevId: 267139084
2019-09-04 06:16:07 -07:00
MLIR Team 2f13df13b0 Add support for array-typed constants.
PiperOrigin-RevId: 267121729
2019-09-04 03:46:06 -07:00
Nicolas Vasilache 0c8ad3aafb Properly clone Linalg ops with regions
This CL adds support for proper cloning of Linalg ops that have regions (i.e. the generic linalg op). This is used to properly implement tiling and fusion for such ops. Adequate tests are added.

PiperOrigin-RevId: 267027176
2019-09-03 15:28:47 -07:00
Uday Bondhugula 54d674f51e Utility to normalize memrefs with non-identity layout maps
- introduce utility to convert memrefs with non-identity layout maps to
  ones with identity layout maps: convert the type and rewrite/remap all
  its uses

- add this utility to -simplify-affine-structures pass for testing
  purposes

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#104

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/104 from bondhugula:memref-normalize f2c914aa1890e8860326c9e33f9aa160b3d65e6d
PiperOrigin-RevId: 266985317
2019-09-03 12:14:28 -07:00
Lei Zhang 5593e005c6 Add folding rule and dialect materialization hook for spv.constant
This will allow us to use MLIR's folding infrastructure to deduplicate
SPIR-V constants.

This CL also changed isValidSPIRVType in SPIRVDialect to a static method.

PiperOrigin-RevId: 266984403
2019-09-03 12:09:58 -07:00
Uday Bondhugula b1ef9dc22c Fix affine data copy generation corner cases/bugs
- the [begin, end) range identified for copying could end in the middle of
  a block, which makes hoisting invalid in some cases. Change the range
  identification to always end at the end of the block.

- add test case to exercise these (with fast mem capacity set to minimal so
  that single element memref buffers are generated at the innermost loop)

- the location of begin/end of the block range for data copying was
  being confused with the insert points for copy in and copy out code.
  In cases where we choose to hoist transfers, these are separate.

- when copy loops are single iteration ones, promote their bodies at
  the end of the pass.

- change default fast mem space to 1 (setting it to zero made it
  generate DMA ops that won't verify in the default case - since the
  DMA ops have a check for src/dest memref spaces being different).

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Co-Authored-By: Mehdi Amini <joker.eph@gmail.com>

Closes tensorflow/mlir#88

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/88 from bondhugula:datacopy 88697267c45e850c3ced87671e16e4a930c02a42
PiperOrigin-RevId: 266980911
2019-09-03 11:53:16 -07:00
River Riddle 61ee7d640c Fix an invalid assert when processing escaped strings.
The assert assumed that the escaped character could not appear at the end of the string.

Fixes tensorflow/mlir#117

PiperOrigin-RevId: 266975471
2019-09-03 11:27:39 -07:00
Alex Torres 6eb910a59c Remove unused variables
Remove unused variables and attributes from BaseViewConversionHelper
in mlir/lib/Dialect/Linalg/Transforms/LowerToLLVMDialect.cpp

Closes tensorflow/mlir#116

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/116 from alexst07:fix-warnings 5f638e4677492cf71a9cc040eeb6b57427d32e06
PiperOrigin-RevId: 266972082
2019-09-03 11:12:58 -07:00
Alex Zinenko c335d9d313 LLVM dialect: prefix auxiliary operations with "mlir."
Some of the operations in the LLVM dialect are required to model the LLVM IR in
MLIR, for example "constant" operations are needed to declare a constant value
since MLIR, unlike LLVM, does not support immediate values as operands.  To
avoid confusion with actual LLVM operations, we prefix such auxiliary
operations with "mlir.".

PiperOrigin-RevId: 266942838
2019-09-03 09:10:56 -07:00
Smit Hinsu da646505c5 Support bf16 in Builder::getZeroAttr
PiperOrigin-RevId: 266863802
2019-09-02 23:44:06 -07:00
Mahesh Ravishankar 2acd0dbf05 Add Select operation to SPIR-V dialect.
The SelectOp models the semantics of OpSelect from SPIR-V spec.

PiperOrigin-RevId: 266849559
2019-09-02 21:07:18 -07:00
River Riddle 5c036e682d Refactor the pass manager to support operations other than FuncOp/ModuleOp.
This change generalizes the structure of the pass manager to allow arbitrary nesting of pass managers for other operations, at any level. The only user-visible change to existing code is that a PassManager must now be constructed with an MLIRContext. A new class `OpPassManager` has been added that represents a pass manager on a specific operation type. `PassManager` will remain the top-level entry point into the pipeline, with OpPassManagers being nested underneath. OpPassManagers will still be implicitly nested if the operation type on the pass differs from that of the pass manager. To explicitly build a pipeline, the 'nest' methods on OpPassManager may be used:

// Pass manager for the top-level module.
PassManager pm(ctx);

// Nest a pipeline operating on FuncOp.
OpPassManager &fpm = pm.nest<FuncOp>();
fpm.addPass(...);

// Nest a pipeline under the FuncOp pipeline that operates on spirv::ModuleOp
OpPassManager &spvModulePM = pm.nest<spirv::ModuleOp>();

// Nest a pipeline on FuncOps inside of the spirv::ModuleOp.
OpPassManager &spvFuncPM = spvModulePM.nest<FuncOp>();

To help accomplish this a new general OperationPass is added that operates on opaque Operations. This pass can be inserted in a pass manager of any type to operate on any operation opaquely. An example of this opaque OperationPass is a VerifierPass, that simply runs the verifier opaquely on the current operation.

/// Pass to verify an operation and signal failure if necessary.
class VerifierPass : public OperationPass<VerifierPass> {
  void runOnOperation() override {
    Operation *op = getOperation();
    if (failed(verify(op)))
      signalPassFailure();
    markAllAnalysesPreserved();
  }
};

PiperOrigin-RevId: 266840344
2019-09-02 19:25:26 -07:00
River Riddle 6563b1c446 Add a new dialect interface for the OperationFolder `OpFolderDialectInterface`.
This interface will allow for providing hooks to interoperate with operation folding. The first hook, 'shouldMaterializeInto', will allow for controlling which region to insert materialized constants into. The folder will generally materialize constants into the top-level isolated region; this hook allows for materializing into a lower-level ancestor region when it is more profitable/correct.
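
A minimal sketch of what a dialect might provide; only the hook name comes from the description above, and the signature is an assumption:

```
// Sketch only: the exact signature is an assumption.
struct MyFolderInterface : public OpFolderDialectInterface {
  using OpFolderDialectInterface::OpFolderDialectInterface;

  /// Returning true materializes constants directly into `region` instead
  /// of hoisting them into the top-level isolated region.
  bool shouldMaterializeInto(Region *region) const final { return true; }
};
```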

PiperOrigin-RevId: 266702972
2019-09-01 20:07:08 -07:00
Mehdi Amini ce702fc8da Add a `getUsedValuesDefinedAbove()` overload that takes an `Operation` pointer (NFC)
This is a convenient utility around the existing `getUsedValuesDefinedAbove()`
that take two regions.

PiperOrigin-RevId: 266686854
2019-09-01 16:32:10 -07:00
Mehdi Amini 765d60fd4d Add missing lowering to CFG in mlir-cpu-runner + related cleanup
- the list of passes run by mlir-cpu-runner included -lower-affine and
  -lower-to-llvm but was missing -lower-to-cfg (because -lower-affine at
  some point used to lower straight to CFG); add -lower-to-cfg in
  between. IR with affine ops can now be run by mlir-cpu-runner.

- update -lower-to-cfg to be consistent with other passes (create*Pass methods
  were changed to return unique ptrs, but -lower-to-cfg appears to have been
  missed).

- mlir-cpu-runner was unable to parse the custom form of affine ops - fix
  link options

- drop unnecessary run options from test/mlir-cpu-runner/simple.mlir
  (none of the test cases had loops)

- -convert-to-llvmir was changed to -lower-to-llvm at some point, but the
  create pass method name wasn't updated (this pass converts/lowers to LLVM
  dialect as opposed to LLVM IR). Fix this.

(If we prefer "convert", the cmd-line options could be changed to
"-convert-to-llvm/cfg" then.)

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#115

PiperOrigin-RevId: 266666909
2019-09-01 11:33:22 -07:00
River Riddle 9c8a8a7d0d Add a canonicalization to erase empty AffineForOps.
AffineForOps are pure and can be removed if they contain no internal operations.

PiperOrigin-RevId: 266481293
2019-08-30 16:49:32 -07:00
River Riddle 1dd9bf4739 Generalize the pass hierarchy by adding a general OpPass<PassT, OpT>.
This pass class generalizes the current functionality between FunctionPass and ModulePass, and allows for operating on any operation type. The pass manager currently only supports OpPasses operating on FuncOp and ModuleOp, but this restriction will be relaxed in follow-up changes. A utility class OpPassBase<OpT> allows for generically referring to operation specific passes: e.g. FunctionPassBase == OpPassBase<FuncOp>.

PiperOrigin-RevId: 266442239
2019-08-30 13:16:37 -07:00
Jacques Pienaar 06e8101034 Add mechanism to dump JIT-compiled objects to files
This commit introduces the bits to be able to dump JIT-compiled
objects to external files by passing an object cache to OrcJit.
The new functionality is tested in mlir-cpu-runner under the flag
`dump-object-file`.

Closes tensorflow/mlir#95

PiperOrigin-RevId: 266439265
2019-08-30 13:02:10 -07:00
Rob Suderman 8f90a442c3 Added a TableGen generator for structured data
Similar to the enum generator, this adds a generator for structured data. It provides a Dictionary that stores a fixed set of values and guarantees the values are valid. It is intended to store a fixed number of values, accessible by a given name.

PiperOrigin-RevId: 266437460
2019-08-30 12:52:13 -07:00
River Riddle 037742cdf2 Add support for early exit walk methods.
This is done by providing a walk callback that returns a WalkResult. This result is either `advance` or `interrupt`. `advance` means that the walk should continue, whereas `interrupt` signals that the walk should stop immediately. An example is shown below:

auto result = op->walk([](Operation *op) {
  if (some_invariant)
    return WalkResult::interrupt();
  return WalkResult::advance();
});

if (result.wasInterrupted())
  ...;

PiperOrigin-RevId: 266436700
2019-08-30 12:47:53 -07:00
Lei Zhang 4f6c29223e Add spv.Branch and spv.BranchConditional
This CL just covers the op definition, its parsing, printing,
and verification. (De)serialization is to be implemented
in a subsequent CL.

PiperOrigin-RevId: 266431077
2019-08-30 12:17:53 -07:00
River Riddle 3ee3710fd1 Change the parseSource* methods to return OwningModuleRef instead of ModuleOp.
This avoids potential memory leaks from misuse of the API.
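
A minimal usage sketch, assuming a parseSourceFile(StringRef, MLIRContext *) overload; `inputFilename` is a placeholder:

```
// Sketch only: the overload used here is an assumption.
MLIRContext context;
OwningModuleRef module = parseSourceFile(inputFilename, &context);
if (!module)
  return 1;  // A parse error was already reported.
// The module is destroyed when `module` goes out of scope, unless
// ownership is explicitly released from the OwningModuleRef.
```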

PiperOrigin-RevId: 266305750
2019-08-29 22:20:10 -07:00
River Riddle 4bfae66d70 Refactor the 'walk' methods for operations.
This change refactors and cleans up the implementation of the operation walk methods. After this refactoring, the explicit template parameter for the operation type is no longer needed for typed op walks. For example:

    op->walk<AffineForOp>([](AffineForOp op) { ... });

is now accomplished via:

    op->walk([](AffineForOp op) { ... });

PiperOrigin-RevId: 266209552
2019-08-29 13:04:50 -07:00
Jacques Pienaar a085700311 Make dumping using the generic form more robust when the IR is ill-formed
PiperOrigin-RevId: 266198057
2019-08-29 12:14:30 -07:00
Feng Liu 6de6c2c138 Add tests to verify 0.0 is quantized correctly
We should consider both signed and narrow_range cases.

PiperOrigin-RevId: 266167366
2019-08-29 10:09:22 -07:00
Uday Bondhugula 4bb6f8ecdb Extend map canonicalization to propagate constant operands
- extend canonicalizeMapAndOperands to propagate constant operands into
  the map's expressions (and thus drop those operands).
- canonicalizeMapAndOperands previously only dropped duplicate and
  unused operands; however, operands that were constants were
  retained.

This change makes IR maps/expressions generated by various
utilities/passes even simpler; also makes some of the test checks more
accurate and simpler -- e.g., 0 instead of symbol(%{{.*}}).

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#107

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/107 from bondhugula:canonicalize-maps c889a51486d14fbf7db489f224f881e7e1ff7d72
PiperOrigin-RevId: 266085289
2019-08-29 01:13:29 -07:00
Uday Bondhugula bc2a543225 fix loop unroll and jam - operand mapping - imperfect nest case
- fix operand mapping while cloning sub-blocks to jam - was incorrect
  for imperfect nests where def/use was across sub-blocks
- strengthen/generalize the first test case to cover the previously
  missed scenario
- clean up the other cases while on this.

Previously, unroll-jamming the following nest
```
    affine.for %arg0 = 0 to 2048 {
      %0 = alloc() : memref<512x10xf32>
      affine.for %arg1 = 0 to 10 {
        %1 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
      }
      dealloc %0 : memref<512x10xf32>
    }
```

would yield

```
      %0 = alloc() : memref<512x10xf32>
      %1 = affine.apply #map0(%arg0)
      %2 = alloc() : memref<512x10xf32>
      affine.for %arg1 = 0 to 10 {
        %4 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
        %5 = affine.apply #map0(%arg0)
        %6 = affine.load %0[%5, %arg1] : memref<512x10xf32>
      }
      dealloc %0 : memref<512x10xf32>
      %3 = affine.apply #map0(%arg0)
      dealloc %0 : memref<512x10xf32>

```

instead of

```

module {
    affine.for %arg0 = 0 to 2048 step 2 {
      %0 = alloc() : memref<512x10xf32>
      %1 = affine.apply #map0(%arg0)
      %2 = alloc() : memref<512x10xf32>
      affine.for %arg1 = 0 to 10 {
        %4 = affine.load %0[%arg0, %arg1] : memref<512x10xf32>
        %5 = affine.apply #map0(%arg0)
        %6 = affine.load %2[%5, %arg1] : memref<512x10xf32>
      }
      dealloc %0 : memref<512x10xf32>
      %3 = affine.apply #map0(%arg0)
      dealloc %2 : memref<512x10xf32>
    }
```

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#98

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/98 from bondhugula:ujam ddbc853f69b5608b3e8ff9b5ac1f6a5a0bb315a4
PiperOrigin-RevId: 266073460
2019-08-28 23:42:50 -07:00
Stephan Herhut e90542c03b Add verification for dimension attribute on GPUDialect index operations.
PiperOrigin-RevId: 266073204
2019-08-28 23:39:57 -07:00
Feng Liu 7dd5efdf2c Fix the equality check of two floating point values
PiperOrigin-RevId: 266022088
2019-08-28 16:39:48 -07:00
River Riddle 29099e03ce Generalize the analysis manager framework to work on any operation at any nesting.
The pass manager is moving towards being able to run on operations at arbitrary levels of nesting. An operation may have both parent and child operations, and the AnalysisManager must be able to handle this generalization. The AnalysisManager class now contains generic 'getCachedParentAnalysis' and 'getChildAnalysis/getCachedChildAnalysis' functions to query analyses on parent/child operations. This removes the hard-coded nesting relationship between Module/Function.
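
A rough sketch of the new queries from within a pass; only the function names come from the description above, and the return types and surrounding plumbing are assumptions:

```
// Sketch only: return types are assumptions; `nestedOp` is a placeholder
// for an operation nested under the current one.
void MyPass::runOnOperation() {
  AnalysisManager am = getAnalysisManager();
  // Reuse an analysis if an ancestor pass manager already computed it.
  auto cachedParent =
      am.getCachedParentAnalysis<DominanceInfo>(getOperation()->getParentOp());
  // Compute (or fetch) an analysis for a nested operation.
  auto &childDomInfo = am.getChildAnalysis<DominanceInfo>(nestedOp);
  (void)cachedParent;
  (void)childDomInfo;
}
```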

PiperOrigin-RevId: 266003636
2019-08-28 15:11:17 -07:00
Eric Schweitz 2225411690 Tweak to the pretty type parser to recognize that `->` is a special token.
Tweak to the pretty type parser to recognize that `->` is a special token that
shouldn't be split into two characters.  This change allows dialect
types to wrap function types as in `!my.ptr_type<(i32) -> i32>`.

Closes tensorflow/mlir#105

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/105 from schweitzpgi:parse-arrow 8b2d768053f419daae5a1a864121a44c4319acbe
PiperOrigin-RevId: 265986240
2019-08-28 13:55:42 -07:00
Stephan Herhut c60c490356 Add implementation for tensor_load and tensor_store operations.
This change adds definitions, parsing and verification for both ops.

PiperOrigin-RevId: 265954051
2019-08-28 11:25:52 -07:00
Stephan Herhut 545c3e489f Port mlir-cuda-runner to use dialect conversion framework.
Instead of lowering the program in two steps (Standard->LLVM followed
by GPU->NVVM), leading to invalid IR in between, the runner now uses
one pattern-based rewrite step to go directly from Standard+GPU to
LLVM+NVVM.

PiperOrigin-RevId: 265861934
2019-08-28 01:50:57 -07:00
Uday Bondhugula aa2cee9cf5 Refactor / improve replaceAllMemRefUsesWith
Refactor replaceAllMemRefUsesWith to split it into two methods: the new
method does the replacement on a single op, and is used by the existing
one.

- make the methods return LogicalResult instead of bool (see the sketch
  after this list)

- Earlier, when replacement failed (due to non-dereferencing uses of the
  memref), the set of ops that had already been processed would have
  been replaced, leaving the IR in an inconsistent state. Now, a
  pass is made over all ops to first check for non-dereferencing
  uses, and then replacement is performed. No test cases were affected
  because all clients of this method were first checking for
  non-dereferencing uses before calling this method (for other reasons).
  This isn't true for a use case in another upcoming PR (scalar
  replacement); clients can now bail out with consistent IR on failure
  of replaceAllMemRefUsesWith. Add test case.

- multiple dereferencing uses of the same memref in a single op are
  possible (we have no such use cases/scenarios), and this has always
  remained unsupported. Add an assertion for this.

- minor fix to another test pipeline-data-transfer case.
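
A sketch of the bail-out pattern the LogicalResult change enables for callers (the optional trailing parameters are omitted here, and the exact parameter list is an assumption):

```
// Sketch only: optional parameters omitted; exact signature is an assumption.
if (failed(replaceAllMemRefUsesWith(oldMemRef, newMemRef)))
  return;  // Nothing was replaced; the IR is left in a consistent state.
```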

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#87

PiperOrigin-RevId: 265808183
2019-08-27 17:56:56 -07:00
MLIR Team 696fcb7520 Add 3 additional intrinsic ops to NVVM dialect, in preparation to implement block-wide reduce.
PiperOrigin-RevId: 265720077
2019-08-27 10:56:18 -07:00
Lei Zhang 3af6b53381 [spirv] Fix the entry block to start with OpLabel
Each basic block in SPIR-V must start with an OpLabel instruction.
We don't support control flow yet, so this CL just makes sure that
the entry block follows this rule and is valid.

PiperOrigin-RevId: 265718841
2019-08-27 10:51:26 -07:00
Mahesh Ravishankar 4ced99c085 Enhance GPU To SPIR-V conversion to support builtins and load/store ops.
To support a conversion of a simple load-compute-store kernel from GPU
dialect to SPIR-V dialect, the conversion of operations like
"gpu.block_dim", "gpu.thread_id" which allow threads to get the launch
conversion is needed. In SPIR-V these are specified as global
variables with builin attributes. This CL adds support to specify
builtin variables in SPIR-V conversion framework. This is used to
convert the relevant operations from GPU dialect to SPIR-V dialect.
Also add support for conversion of load/store operation in Standard
dialect to SPIR-V dialect.
To simplify the conversion, add a method to build a spv.AccessChain
operation that automatically determines the return type based on the
base pointer type and the indices provided.

PiperOrigin-RevId: 265718525
2019-08-27 10:50:23 -07:00
Denis Khalikov 8f2dfb51d4 [spirv] Add Block decoration for spv.struct.
Add Block decoration for top-level spv.struct.

Closes tensorflow/mlir#102

PiperOrigin-RevId: 265716241
2019-08-27 10:41:42 -07:00
River Riddle 2f59f76876 NFC: Remove the explicit context from Operation::create and OperationState.
The context can easily be recovered from the Location in these situations.

PiperOrigin-RevId: 265578574
2019-08-26 17:34:48 -07:00