Commit Graph

1409 Commits

Author SHA1 Message Date
Stephan Herhut abb626686d Extend kernel outlining to also consider dim operations worth inlining.
PiperOrigin-RevId: 281483447
2019-11-20 02:59:35 -08:00
Christian Sigg f868adafee Make type and rank explicit in mcuMemHostRegister function.
Fix registered size of indirect MemRefType kernel arguments.

PiperOrigin-RevId: 281362940
2019-11-19 13:13:02 -08:00
Nicolas Vasilache ee95f6f259 Add VectorOps.StridedSliceOp
The `vector.strided_slice` op takes an n-D vector, a k-D `offsets` integer array attribute, a
k-D `sizes` integer array attribute, and a k-D `strides` integer array attribute, and extracts
the n-D subvector at the proper offset.

Returns an n-D vector where the first k dimensions match the `sizes` attribute.
The returned subvector contains the elements starting at offset `offsets` and ending at
`offsets + sizes`.

Example:
```
  %1 = vector.strided_slice %0
      {offsets : [0, 2], sizes : [2, 4], strides : [1, 1]}:
    vector<4x8x16xf32> // returns a vector<2x4x16xf32>
```

This op will be useful for progressive lowering within the VectorOp dialect.

PiperOrigin-RevId: 281352749
2019-11-19 12:22:34 -08:00
Nicolas Vasilache 3732ba4def Fix pretty printer corner case in mlir_runner_utils.cpp.
In the particular case where the size of a memref dimension is 1, double printing would happen because printLast was called unconditionally.
This CL fixes the print and updates an incorrect test that should have caught this in the first place.

PiperOrigin-RevId: 281345142
2019-11-19 11:52:27 -08:00
Diego Caballero dd5a7cb488 Add getRemappedValue to ConversionPatternRewriter
This method is needed for N->1 conversion patterns to retrieve remapped
Values used in the original N operations.

Closes tensorflow/mlir#237

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/237 from dcaballe:dcaballe/getRemappedValue 1f64fadcf2b203f7b336ff0c5838b116ae3625db
PiperOrigin-RevId: 281321881
2019-11-19 11:09:39 -08:00
Alex Zinenko 8961d8e32f Change conversion CLI flag from -lower-to-llvm to -convert-std-to-llvm
The command-line flag name `lower-to-llvm` for the pass performing dialect
conversion from the Standard dialect to the LLVM dialect is misleading and
inconsistent with most of the conversion passes. It leads the user to believe
that there are no restrictions on what can be converted, while in fact only a
subset of the Standard dialect can be converted (with operations from other
dialects converted by separate passes). Use `convert-std-to-llvm` that better
reflects what the pass does and is consistent with most other conversions.

PiperOrigin-RevId: 281238797
2019-11-19 00:34:51 -08:00
Hanhan Wang c614c92fdc Support SPIR-V constant op to take DenseElementsAttr as input.
Iterates over each element to build the array. This includes a small refactoring to
combine the bool/int/float cases into one function, since they are similar. The only
difference is which function is called in the end.

PiperOrigin-RevId: 281210288
2019-11-18 20:02:05 -08:00
Alexander Belyaev 8c6a5233d5 Lower linalg.indexed_generic to loops.
PiperOrigin-RevId: 281169885
2019-11-18 16:55:15 -08:00
Andy Davis a6a287335d Fix SubViewOp stride calculation in constant folding.
Adds unit tests for subview offset and stride argument constant folding.

PiperOrigin-RevId: 281161041
2019-11-18 15:01:08 -08:00
River Riddle 9873a29817 Add a parseAttribute<AttrType> overload for the non-type case.
The variant that accepts a type will check that the parsed attribute is a valid instance of AttrType. The non-type variant would silently fail in this case, leading to garbage attribute values.

PiperOrigin-RevId: 281136528
2019-11-18 13:11:36 -08:00
Denis Khalikov 6c77e59bfd [spirv] Add a canonicalizer for BitcastOp.
Convert chained `spirv::BitcastOp` operations into
one `spirv::BitcastOp` operation.

Closes tensorflow/mlir#238

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/238 from denis0x0D:sandbox/canon_bitcast 4352ed4f81b959ec92f849c599e733b62a99c010
PiperOrigin-RevId: 281129234
2019-11-18 12:37:00 -08:00
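
For illustration, a minimal sketch of the folding this canonicalizer performs; the value names and types are assumptions, not taken from the patch:

```mlir
// Before: two chained bitcasts over the same bits.
%1 = spv.Bitcast %arg0 : f32 to vector<2xf16>
%2 = spv.Bitcast %1 : vector<2xf16> to i32

// After canonicalization: a single bitcast from the original source type
// to the final destination type.
%2 = spv.Bitcast %arg0 : f32 to i32
```
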
Andy Davis 68a8da4a93 Fix Affine Loop Fusion test case reported on github.
This CL utilizes the more robust fusion feasibility analysis being built out in LoopFusionUtils, which will eventually be used to replace the current affine loop fusion pass.

PiperOrigin-RevId: 281112340
2019-11-18 11:20:37 -08:00
Stephan Herhut f0f3b71d67 Implement folding of pattern dim(subview(_)[...][s1, ..., sn][...], i) -> si.
PiperOrigin-RevId: 281042016
2019-11-18 04:31:33 -08:00
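
As a rough sketch of the folded pattern (the value names, the layout alias #strided2D, and the exact subview assembly are assumptions, not from the commit):

```mlir
// #strided2D stands in for the strided layout of the subview result.
%sv = subview %src[%o0, %o1][%s0, %s1][%st0, %st1]
    : memref<?x?xf32> to memref<?x?xf32, #strided2D>
// dim of dimension 1 of the subview folds directly to the size operand %s1.
%d = dim %sv, 1 : memref<?x?xf32, #strided2D>
```
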
Alex Zinenko b8dc3fd812 Rename CLI flags -lower-gpu-ops-to-*-ops to -convert-gpu-to-*
This makes the flags consistent with the naming scheme used elsewhere in the
codebase for dialect conversions.

PiperOrigin-RevId: 281027517
2019-11-18 02:43:10 -08:00
Denis Khalikov 68e48ba111 [spirv] Add bit ops
This CL added op definitions for a few bit operations:

* OpBitFieldInsert
* OpBitFieldSExtract
* OpBitFieldUExtract

Closes tensorflow/mlir#233

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/233 from denis0x0D:sandbox/bit_field_ops e7fd85b00d72d483d7992dc42b9cc4d673903455
PiperOrigin-RevId: 280691816
2019-11-15 11:03:19 -08:00
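
A hedged sketch of what these ops could look like in IR; the assembly format and operand types shown here are assumptions:

```mlir
// Extract %count bits starting at %offset from %base, sign-extending.
%0 = spv.BitFieldSExtract %base, %offset, %count : i32, i8, i8
// Insert the low bits of %insert into %base at the given bit range.
%1 = spv.BitFieldInsert %base, %insert, %offset, %count : i32, i8, i8
```
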
Lei Zhang a0986bf43d NFC: Convert CmpIPredicate in StandardOps to use EnumAttr
This turns several hand-written functions to auto-generated ones.

PiperOrigin-RevId: 280684326
2019-11-15 10:17:31 -08:00
Lei Zhang 88843ae37c Use aggregate-parameter builder for ops having autogen type-deduction builder
Thus far DRR always invokes the separate-parameter builder (i.e., requiring
a separate parameter for each result-type/operand/attribute) for creating
ops, no matter whether we can auto-generate a builder with type-deduction
ability or not.

This CL changes the path for ops for which we can auto-generate type-deduction
builders, i.e., those with the SameOperandsAndResultType/FirstAttrDerivedResultType
traits. Now they go through an aggregate-parameter builder (i.e.,
requiring one parameter for all result types/operands/attributes).

It is expected this approach will be more friendly for future shape inference
function autogeneration and for calling those autogenerated shape inference functions without
excessive packing and repacking of operand/attribute lists.
Also, it would enable better support for creating ops with optional attributes
because we are not required to provide an Attribute() as placeholder for
an optional attribute anymore.

PiperOrigin-RevId: 280654800
2019-11-15 07:33:54 -08:00
Stephan Herhut 57bafc674e Mark std.view as no-sideeffect.
The same reasoning as for std.subview applies.

PiperOrigin-RevId: 280639308
2019-11-15 05:28:31 -08:00
Stephan Herhut 9c7bceb4fe Mark std.subview as no-sideeffect.
In essence, std.subview is just an abstract indexing transformation (somewhat
akin to a GEP in LLVM) and by itself has no effect. From a practical perspective
this helps, as it allows removing dead subview operations.

PiperOrigin-RevId: 280630046
2019-11-15 04:00:31 -08:00
Nicolas Vasilache 0b271b7dfe Refactor the LowerVectorTransfers pass to use the RewritePattern infra - NFC
This is step 1/n in refactoring infrastructure along the Vector dialect to make it ready for retargetability and composable progressive lowering.

PiperOrigin-RevId: 280529784
2019-11-14 15:40:07 -08:00
Andy Davis a4669cd3b4 Adds canonicalizer to SubViewOp which folds constants from base memref and operands into the subview result memref type.
Changes SubViewOp to support zero operands case, when offset, strides and sizes are all constant.

PiperOrigin-RevId: 280485075
2019-11-14 12:23:04 -08:00
Lei Zhang 796ca609eb [ODS] Fix operation argument population to avoid crash
The `Operator` class keeps an `arguments` field, which contains pointers
to `operands` and `attributes` elements. Thus it must be populated after
`operands` and `attributes` are finalized, so as to have stable pointers.
SmallVector may reallocate while new elements are still being added, which
would invalidate those pointers.

PiperOrigin-RevId: 280466896
2019-11-14 11:03:29 -08:00
Alex Zinenko bf5916e7a4 Use MemRefDescriptor in Vector-to-LLVM conversion
Following up on the consolidation of MemRef descriptor conversion, update
Vector-to-LLVM conversion to use the helper class that abstracts away the
implementation details of the MemRef descriptor. This also makes the types of
the attributes in emitted llvm.insert/extractelement operations consistently
i64 instead of a mix of index and i64.

PiperOrigin-RevId: 280441451
2019-11-14 09:05:42 -08:00
Nicolas Vasilache f2b6ae9991 Move VectorOps to Tablegen - (almost) NFC
This CL moves VectorOps to Tablegen and cleans up the implementation.

This is almost NFC but 2 changes occur:
  1. an interface change occurs in the padding value specification in vector_transfer_read:
     the value becomes non-optional. As a shortcut we currently use %f0 for all paddings.
     This should become an OpInterface for vectorization in the future.
  2. the return type of vector.type_cast is trivial and simplified to `memref<vector<...>>`

Relevant roundtrip and invalid tests that used to sit in core are moved to the vector dialect.

The op documentation is moved to the .td file.

PiperOrigin-RevId: 280430869
2019-11-14 08:15:23 -08:00
Jacques Pienaar d1c99e10d0 Do not emit aliases when printing local form
Expand local-scope printing to skip printing aliases, since aliases are printed at the top of a module and may not be part of the output generated by a local-scope print.

PiperOrigin-RevId: 280278617
2019-11-13 14:21:49 -08:00
Nicolas Vasilache 0bd6390b54 Deprecate linalg.subview in favor of std.subview
This CL uses the now standard std.subview in linalg.
Two shortcuts are currently taken to allow this port:
1. the type resulting from a view is currently degraded to fully dynamic to pass the SubViewOp verifier.
2. indexing into SubViewOp may access out of bounds since lowering to LLVM does not currently enforce it by construction.

These will be fixed in subsequent commits after discussions.

PiperOrigin-RevId: 280250129
2019-11-13 12:10:09 -08:00
Sean Silva 486f2122cd Add FuncOp::eraseArgument
This is a quite complex operation that users are likely to attempt to write
themselves and get wrong (citation: users=me).

Ideally, we could pull this into FunctionLike, but for now, the
FunctionType rewriting makes it FuncOp specific. We would need some hook
for rewriting the function type (which for LLVM's func op, would need to
rewrite the underlying LLVM type).

PiperOrigin-RevId: 280234164
2019-11-13 10:59:55 -08:00
River Riddle d985c74883 NFC: Refactor block signature conversion to not erase the original arguments.
This refactors the implementation of block signature(type) conversion to not insert fake cast operations to perform the type conversion, but to instead create a new block containing the proper signature. This has the benefit of enabling the use of pre-computed analyses that rely on mapping values. It also leads to a much cleaner implementation overall. The major user facing change is that applySignatureConversion will now replace the entry block of the region, meaning that blocks generally shouldn't be cached over calls to applySignatureConversion.

PiperOrigin-RevId: 280226936
2019-11-13 10:27:53 -08:00
River Riddle 6df8369941 Rename the current parseSymbolName to parseOptionalSymbolName
The current implementation silently fails if the '@' identifier isn't present, making it similar to the 'optional' parse methods. This change renames the current implementation to 'Optional' and adds a new 'parseSymbolName' that emits an error.

PiperOrigin-RevId: 280214610
2019-11-13 09:32:20 -08:00
Hanhan Wang 85d7fb3324 Make VariableOp instructions be in the first block in the function.
Since VariableOp is serialized during processBlock, we add two more fields,
`functionHeader` and `functionBody`, to collect instructions for a function.
After all the blocks have been processed, we append them to the `functions` list.

Also, fix a bug in processGlobalVariableOp. The global variables should be
encoded into `typesGlobalValues`.

PiperOrigin-RevId: 280105366
2019-11-12 18:59:15 -08:00
Mahesh Ravishankar 2be53603e9 Add operations needed to support lowering of AffineExpr to SPIR-V.
Lowering of CmpIOp, DivISOp, RemISOp, SubIOp and SelectOp to SPIR-V
dialect enables the lowering of operations generated by AffineExpr ->
StandardOps conversion into the SPIR-V dialect.

PiperOrigin-RevId: 280039204
2019-11-12 13:20:06 -08:00
Lei Zhang b259c26eb0 Add support for OpPhi in loop header block
During deserialization, the loop header block will be moved into the
spv.loop's region. If the loop header block has block arguments,
we need to make sure it is correctly carried over to the block where
the new spv.loop resides.

During serialization, we need to make sure block arguments from the
spv.loop's entry block are not silently dropped.

PiperOrigin-RevId: 280021777
2019-11-12 12:00:28 -08:00
River Riddle 626e1fd95e Add an option to print an operation if a diagnostic is emitted on it
It is often helpful to inspect the operation that the error/warning/remark/etc. originated from, especially in the context of debugging or in the case of a verifier failure. This change adds an option 'mlir-print-op-on-diagnostic' that attaches the operation as a note to any diagnostic that is emitted on it via Operation::emit(Error|Warning|Remark). In the case of an error, the operation is printed in the generic form.

PiperOrigin-RevId: 280021438
2019-11-12 11:59:19 -08:00
Mahesh Ravishankar 104af84f4c Add Conversion to lower loop::ForOp to spirv::LoopOp.
loop::ForOp can be lowered to the structured control flow represented
by spirv::LoopOp by making the continue block of the spirv::LoopOp the
loop latch and the merge block the exit block. The resulting
spirv::LoopOp has a single back edge from the continue to header
block, and a single exit from header to merge.
PiperOrigin-RevId: 280015614
2019-11-12 11:33:27 -08:00
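
A minimal sketch of the kind of input this conversion handles; the operand names are illustrative and the assembly is approximate for this revision:

```mlir
// A structured loop in the Loop dialect. The conversion builds a spv.loop
// whose header tests the induction variable, whose continue block acts as
// the latch (the single back edge to the header), and whose merge block is
// the single exit.
loop.for %iv = %lb to %ub step %step {
  // loop body
}
```
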
Nicolas Vasilache 51de3f688e Add LLVM lowering of std.subview
A followup CL will replace usage of linalg.subview by std.subview.

PiperOrigin-RevId: 279961981
2019-11-12 07:23:18 -08:00
Andy Davis 82d2c43eca Adds affine.min operation which returns the minimum value from a multi-result affine map. This operation is useful for things like computing the dynamic value of affine loop bounds, and is trivial to constant fold.
PiperOrigin-RevId: 279959714
2019-11-12 07:08:49 -08:00
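
For example, a minimal use of the new op might look like the following (exact assembly for this revision is assumed):

```mlir
// %0 is the minimum over all results of the multi-result affine map,
// e.g. useful for computing the size of a loop's partial last tile.
%0 = affine.min (d0)[s0] -> (1000, d0 + 512, s0) (%i)[%n]
```
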
Nicolas Vasilache f51a155337 Add support for alignment attribute in std.alloc.
This CL adds an extra pointer to the memref descriptor to allow specifying alignment.

In a previous implementation, we used 2 types: `linalg.buffer` and `view` where the buffer type was the unit of allocation/deallocation/alignment and `view` was the unit of indexing.

After multiple discussions it was decided to use a single type, which conflates both, so the memref descriptor now needs to carry both pointers.

This is consistent with the [RFC-Proposed Changes to MemRef and Tensor MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!searchin/mlir/std.view%7Csort:date/mlir/-wKHANzDNTg/4K6nUAp8AAAJ).

PiperOrigin-RevId: 279959463
2019-11-12 07:06:54 -08:00
River Riddle 9b9c647cef Add support for nested symbol references.
This change allows for adding additional nested references to a SymbolRefAttr to allow for further resolving a symbol if that symbol also defines a SymbolTable. If a referenced symbol also defines a symbol table, a nested reference can be used to refer to a symbol within that table. Nested references are printed after the main reference in the following form:

  symbol-ref-attribute ::= symbol-ref-id (`::` symbol-ref-id)*

Example:

  module @reference {
    func @nested_reference()
  }

  my_reference_op @reference::@nested_reference

Given that SymbolRefAttr is now more general, the existing functionality centered around a single reference is moved to a derived class FlatSymbolRefAttr. Followup commits will add support to lookups, rauw, etc. for scoped references.

PiperOrigin-RevId: 279860501
2019-11-11 18:18:31 -08:00
Andy Davis 5cf6e0ce7f Adds std.subview operation which takes dynamic offsets, sizes and strides and returns a memref type which represents a reduced-size view of its memref argument.
This operation is a companion operation to the std.view operation added as proposed in "Updates to the MLIR MemRefType" RFC.

PiperOrigin-RevId: 279766410
2019-11-11 10:33:27 -08:00
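
A hedged sketch of the op's general form; the operand names and the strided layout alias #strided2D are illustrative, not taken from the change:

```mlir
// Take a view at offsets (%o0, %o1) with sizes (%s0, %s1) and strides
// (%st0, %st1) into the base memref; the result type carries a strided layout.
%view = subview %base[%o0, %o1][%s0, %s1][%st0, %st1]
    : memref<?x?xf32> to memref<?x?xf32, #strided2D>
```
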
Stephan Herhut e04d4bf865 Also consider index constants when folding integer arithmetic with constants.
PiperOrigin-RevId: 279698088
2019-11-11 02:34:21 -08:00
MLIR Team 9fbf52e330 Look for SymbolRefAttr in KernelOutlining instead of hard-coding CallOp
This code should be exercised using the existing kernel outlining unit test, but
let me know if I should add a dedicated unit test using a fake call instruction
as well.

PiperOrigin-RevId: 279436321
2019-11-08 19:13:13 -08:00
Denis Khalikov 4697d657b7 [spirv] Add bit ops
This CL added op definitions for a few bit operations:

* OpShiftLeftLogical
* OpShiftRightArithmetic
* OpShiftRightLogical
* OpBitCount
* OpBitReverse
* OpNot

Also moved the definition of spv.BitwiseAnd to follow the
lexicographical order.

Closes tensorflow/mlir#215

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/215 from denis0x0D:sandbox/bit_ops d9b0852b689ac6c4879a9740b1740a2357f44d24
PiperOrigin-RevId: 279350470
2019-11-08 11:17:05 -08:00
Alex Zinenko 09e8e7107a mlir-translate: support -verify-diagnostics
MLIR translation tools can emit diagnostics and we want to be able to check if
it is indeed the case in tests. Reuse the source manager error handlers
provided for mlir-opt to support the verification in mlir-translate. This
requires us to change the signature of the functions that are registered to
translate sources to MLIR: it now takes a source manager instead of a memory
buffer.

PiperOrigin-RevId: 279132972
2019-11-07 11:42:46 -08:00
Uday Bondhugula eb47d5ee66 Fix asm printer for affine expr
- fixes tensorflow/mlir#201

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#204

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/204 from bondhugula:printfix 3f8a5b65391f45598258b2735fecaa409fbde848
PiperOrigin-RevId: 279115720
2019-11-07 10:27:27 -08:00
Andy Davis 8f00b4494d Swap operand order in std.view operation so that offset appears before dynamic sizes in the operand list.
PiperOrigin-RevId: 279114236
2019-11-07 10:20:23 -08:00
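
With this change the operand list reads roughly as follows (a sketch only; the sizes, offset, and result layout are assumptions, and the result layout is elided for brevity):

```mlir
// Reinterpret a flat i8 buffer as a dynamically shaped f32 memref,
// starting at the dynamic byte offset %offset (offset now precedes sizes).
%m = view %buffer[%offset][%size0, %size1]
    : memref<2048xi8> to memref<?x?xf32>
```
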
River Riddle 6b4e30b7c8 Add Ch-7 of the toy tutorial detailing how to define new types.
This chapter adds a new composite type to Toy, and shows the process of adding a new type to the IR, adding and updating operations to use it, and constant folding operations producing it.

PiperOrigin-RevId: 279107885
2019-11-07 09:54:04 -08:00
Andy Davis 5fbdb67b0a Add canonicalizer for ViewOp which folds constants into the ViewOp memref shape and layout map strides and offset.
PiperOrigin-RevId: 279088023
2019-11-07 08:05:03 -08:00
Jacques Pienaar 7af61f6bcd Add compatible query method to infer type interface
A return type that differs from the inferred return type need not indicate that an operation is invalid (e.g., tensor<*xf32> vs tensor<10xf32>) but they should be compatible for the operation to be considered valid. Add method to query if inferred type is compatible with return type.

Also add InferTypeOpIntefaceDefault trait that considers equality and compatibility as the same. Currently an op has to opt in to using it explicitly.

PiperOrigin-RevId: 279085639
2019-11-07 07:51:45 -08:00
Nicolas Vasilache 72040bf7c8 Update Linalg to use std.view
Now that a view op has graduated to the std dialect, we can update Linalg to use it and remove ops that have become obsolete. As a byproduct, the linalg buffer and associated ops can also disappear.

PiperOrigin-RevId: 279073591
2019-11-07 06:33:10 -08:00
Alexander Belyaev eee9cbdeb7 Add IndexedGenericOp to Linalg.
PiperOrigin-RevId: 279013404
2019-11-06 22:36:25 -08:00
Nicolas Vasilache ffebc8ce1d Drop spurious test file
PiperOrigin-RevId: 278959717
2019-11-06 16:00:57 -08:00
Nicolas Vasilache 7f6c6084b5 Add lowering of std.view to LLVM
This CL ports the lowering of linalg.view to the newly introduced std.view.
Differences in implementation relate to std.view having slightly different semantics:
1. a static or dynamic offset can be specified.
2. the size of the (contiguous) shape is passed instead of a range.
3. static size and stride information is extracted from the memref type rather than the range.

Besides these differences, lowering behaves the same.
A future CL will update Linalg to use this unified infrastructure.

PiperOrigin-RevId: 278948853
2019-11-06 15:06:16 -08:00
Andy Davis b5654d1311 Add ViewOp verification for dynamic strides, and address some comments from previous change.
PiperOrigin-RevId: 278903187
2019-11-06 11:25:54 -08:00
Andy Davis c38dca7f4b Add ViewOp to the StandardOps dialect, which casts a 1D/i8 element type memref type to an N-D memref type.
Proposed in RFC: https://groups.google.com/a/tensorflow.org/forum/#!searchin/mlir/std.view%7Csort:date/mlir/-wKHANzDNTg/4K6nUAp8AAAJ

Supports creating the N-D memref type with dynamic sizes and at a dynamic offset within the 1D base memref.
This change contains op definition/parsing/printing and tests. Follow up changes will handle constant shape/layout map folding and llvm lowering.

PiperOrigin-RevId: 278869990
2019-11-06 08:54:12 -08:00
Eric Schweitz 0d545921ea Add support for the LLVM FNeg instruction
Closes tensorflow/mlir#216

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/216 from schweitzpgi:llvmir-fneg-op f9b5f185845d671b745ab6fc213d5d9aff044b34
PiperOrigin-RevId: 278795325
2019-11-06 00:02:10 -08:00
James Molloy 250a11ae0f [llvm] Allow GlobalOp to take a region for complex initializers
This allows GlobalOp to either take a value attribute (for simple constants) or a region that can
contain IR instructions (that must be constant-foldable) to create a ConstantExpr initializer.

Example:
  // A complex initializer is constructed with an initializer region.
  llvm.mlir.global constant @int_gep() : !llvm<"i32*"> {
    %0 = llvm.mlir.addressof @g2 : !llvm<"i32*">
    %1 = llvm.mlir.constant(2 : i32) : !llvm.i32
    %2 = llvm.getelementptr %0[%1] : (!llvm<"i32*">, !llvm.i32) -> !llvm<"i32*">
    llvm.return %2 : !llvm<"i32*">
  }
PiperOrigin-RevId: 278717836
2019-11-05 15:11:01 -08:00
James Molloy 6b534ecbcb [llvm] Add initial import of LLVM modules to mlir-translate
This adds an importer from LLVM IR or bitcode to the LLVM dialect. The importer is registered with mlir-translate.

Known issues exposed by this patch but not yet fixed:
  * Globals' initializers are attributes, which makes it impossible to represent a ConstantExpr. This will be fixed in a followup.
  * icmp returns i32 rather than i1.
  * select and a couple of other instructions aren't implemented.
  * llvm.cond_br takes its successors in a weird order.

The testing here is known to be non-exhaustive.

I'd appreciate feedback on where this functionality should live. It looks like the translator *from MLIR to LLVM* lives in Target/, but the SPIR-V deserializer lives in Dialect/ which is why I've put this here too.

PiperOrigin-RevId: 278711683
2019-11-05 14:41:38 -08:00
River Riddle 2366561a39 Add a PatternRewriter hook to merge blocks, and use it to support folding branches.
A pattern rewriter hook, mergeBlock, is added that allows for merging the operations of one block into the end of another. This is used to support a canonicalization pattern for branch operations that folds the branch when the successor has a single predecessor (the branch block).

Example:
  ^bb0:
    %c0_i32 = constant 0 : i32
    br ^bb1(%c0_i32 : i32)
  ^bb1(%x : i32):
    return %x : i32

becomes:
  ^bb0:
    %c0_i32 = constant 0 : i32
    return %c0_i32 : i32
PiperOrigin-RevId: 278677825
2019-11-05 11:57:38 -08:00
MLIR Team 1f43d0d000 [NVVM] Add mma.sync operation.
PiperOrigin-RevId: 278440547
2019-11-04 12:36:37 -08:00
River Riddle e4a912eb5a Update the SPV dialect type parser to use the methods on DialectAsmParser directly.
This simplifies the implementation quite a bit, and removes the need for explicit string munging. One change is made to some of the enum elements of SPV_DimAttr to ensure that they are proper identifiers; the string form is now prefixed with 'Dim'.

PiperOrigin-RevId: 278027132
2019-11-01 16:55:25 -07:00
River Riddle 68cfc89a0d Refactor LinalgDialect::parseType to use the DialectAsmParser methods directly.
This simplifies the implementation, and removes the need to do explicit string manipulation. A utility method 'parseDimensionList' is added to the DialectAsmParser to simplify defining types and attributes that contain shapes.

PiperOrigin-RevId: 278020604
2019-11-01 16:14:10 -07:00
River Riddle e94a8bfca8 Refactor QuantOps TypeParser to use the DialectAsmParser methods directly.
This greatly simplifies the implementation and removes custom parser functionality. The necessary methods are added to the DialectAsmParser.

PiperOrigin-RevId: 278015983
2019-11-01 15:47:03 -07:00
Lei Zhang f143fbfa77 Add ReferToOp attribute constraint for SymbolRefAttr
This constraint can be used to limit a SymbolRefAttr to point
to a specific kind of op in the closest parent with a symbol table.

PiperOrigin-RevId: 278001364
2019-11-01 14:26:36 -07:00
Nicolas Vasilache e20a2aa9f2 Delete spurious file
PiperOrigin-RevId: 277967079
2019-11-01 11:28:15 -07:00
Mahesh Ravishankar 9cbbd8f4df Support lowering of imperfectly nested loops into GPU dialect.
The current lowering of loops to GPU only supports lowering of loop
nests where the loops mapped to workgroups and workitems are perfectly
nested. Here a new lowering is added to handle lowering of imperfectly
nested loop body with the following properties
1) The loops partitioned to workgroups are perfectly nested.
2) The loop body of the inner most loop partitioned to workgroups can
contain one or more loop nests that are to be partitioned across
workitems. Each individual loops nests partitioned to workitems should
also be perfectly nested.
3) The number of workgroups and workitems are not deduced from the
loop bounds but are passed in by the caller of the lowering as values.
4) For statements within the perfectly nested loop nest partitioned
across workgroups that are not loops, it is valid to have all threads
execute that statement. This is NOT verified.

PiperOrigin-RevId: 277958868
2019-11-01 10:52:06 -07:00
Nicolas Vasilache bd94a10c02 Add Linalg pattern for producer-consumer fusion
This CL adds a simple pattern for specifying producer-consumer fusion on Linalg operations.

Implementing such an extension reveals some interesting properties.
Since Linalg operates on a buffer abstraction, the output buffers are specified as in/out parameters to the ops. As a consequence, there are no SSA use-def chains and one cannot specify complex dag input patterns with the current infrastructure.

Instead this CL uses constraints based on the existing linalg dependence analysis to focus the pattern and refine patterns based on the type of op that last wrote in a buffer.

This is a very local property and is less powerful than the generic dag specification based on SSA use-def chains.

This will be generalized in the future.

PiperOrigin-RevId: 277931503
2019-11-01 08:30:38 -07:00
James Molloy 96531e2f87 [mlir][llvm] Add missing cast ops
Also adds a builder method for fcmp, identical to that for icmp.

PiperOrigin-RevId: 277923158
2019-11-01 07:32:09 -07:00
Lei Zhang 7432234f3c NFC: Use #ifndef in various .td files instead of #ifdef and #else
Upstream LLVM gained support for #ifndef with https://reviews.llvm.org/D61888

This is changed mechanically via the following command:

find . -name "*.td" -exec sed -i -e ':a' -e 'N' -e '$!ba' -e 's/#ifdef \([A-Z_]*\)\n#else/#ifndef \1/g' {} \;

PiperOrigin-RevId: 277789427
2019-10-31 13:29:50 -07:00
Mehdi Amini ce9477934a Add a test for lowering GPU ops that cover cases where the symbol table isn't held by a ModuleOp (NFC)
PiperOrigin-RevId: 277752004
2019-10-31 10:35:15 -07:00
Mehdi Amini 07b4ce7409 Add a test.symbol_scope operation that has the SymbolTable Traits to the Test dialect
PiperOrigin-RevId: 277741687
2019-10-31 09:49:42 -07:00
Denis Khalikov d423d4a338 [spirv] Add cast operations
This CL added op definitions for a few cast operations:

* OpConvertFToU
* OpConvertFToS
* OpConvertSToF
* OpConvertUToF
* OpUConvert
* OpSConvert
* OpFConvert

Also moved the definition of spv.Bitcast to the new file.

Closes tensorflow/mlir#208 and tensorflow/mlir#174

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/208 from denis0x0D:sandbox/cast_ops 79bc9b37398aafddee6cf6beb301807988fe67f9
PiperOrigin-RevId: 277587891
2019-10-30 14:53:04 -07:00
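
A minimal sketch of a few of the added casts; the operand names and types are chosen for illustration:

```mlir
%0 = spv.ConvertFToS %a : f32 to i32   // float -> signed integer
%1 = spv.ConvertSToF %b : i32 to f32   // signed integer -> float
%2 = spv.FConvert %c : f32 to f64      // float width conversion
```
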
Lei Zhang d024b68e6b Use `not` to invert return code in expected to fail tests
Windows does not like the RUN command of `(... || true) | ...`.

PiperOrigin-RevId: 277587031
2019-10-30 14:38:18 -07:00
River Riddle a32f0dcb5d Add support to GreedyPatternRewriter for erasing unreachable blocks.
Rewrite patterns may make modifications to the CFG, including dropping edges between blocks. This change adds a simple unreachable block elimination run at the end of each iteration to ensure that the CFG remains valid.

PiperOrigin-RevId: 277545805
2019-10-30 11:19:24 -07:00
Lei Zhang cb40e36d3b Fix segfault when no symbol is given to a constraint operand
This fixes the segfault when we see the following pattern:
  Pat<(...), (...), [(... 1, 2, 3), ...]>

PiperOrigin-RevId: 277544300
2019-10-30 11:12:57 -07:00
Nicolas Vasilache 05a5a41416 Add basic support for declarative Linalg transformations
Linalg ops provide a good anchor for pattern matching/rewriting transformations.
This CL adds a simple example of how multi-level tiling may be specified by attaching a simple StringAttr to ops as they are transformed, so we can easily specify partial lowering to control transformation application.

This is a first stab at taking advantage of higher-level information contained in Linalg ops and will evolve in the future.

PiperOrigin-RevId: 277497958
2019-10-30 07:12:33 -07:00
Lei Zhang 80213ba5f0 [spirv] Fix gen_spirv_dialect.py and add spv.Unreachable
This CL fixed gen_spirv_dialect.py to support nested delimiters when
chunking existing ODS entries in .td files and to allow ops without
correspondence in the spec. This is needed to pull in the definition
of OpUnreachable.

PiperOrigin-RevId: 277486465
2019-10-30 05:41:18 -07:00
Lei Zhang ca2538e9a7 [spirv] Support OpPhi using block arguments
This CL adds another control flow instruction in SPIR-V: OpPhi.
It is modelled as block arguments to be idiomatic with MLIR.
See the rationale.md doc for "Block Arguments vs PHI nodes".
Serialization and deserialization is updated to convert between
block arguments and SPIR-V OpPhi instructions.

PiperOrigin-RevId: 277161545
2019-10-28 15:58:42 -07:00
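
Roughly, an OpPhi in a loop header corresponds to a block argument as in the sketch below; the structure is simplified and the assembly is approximate:

```mlir
spv.loop {
  // Entry block: branch into the header, passing the initial value that
  // OpPhi would otherwise receive from the pre-header.
  spv.Branch ^header(%init : i32)
^header(%phi : i32):
  // %phi plays the role of the OpPhi result; the back edge from the
  // continue block passes the next iteration's value the same way.
  // ... header body, conditional branch to body or merge ...
}
```
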
Sean Silva 66ec24d833 Parse locations in parseGenericOperation
For ops that recursively re-enter the parser to parse an operation (such as
ops with a "wraps" pretty form), this ensures that the wrapped op will parse
its location, which can then be used for the locations of the wrapping op
and any other implicit ops.

PiperOrigin-RevId: 277152636
2019-10-28 15:11:26 -07:00
River Riddle 2f4d0c085a Add support for marking an operation as recursively legal.
In some cases, it may be desirable to mark entire regions of operations as legal. This provides an additional granularity of context to the concept of "legal". The `ConversionTarget` supports marking operations, that were previously added as `Legal` or `Dynamic`, as `recursively` legal. Recursive legality means that if an operation instance is legal, either statically or dynamically, all of the operations nested within are also considered legal. An operation can be marked via `markOpRecursivelyLegal<>`:

```c++
ConversionTarget &target = ...;

/// The operation must first be marked as `Legal` or `Dynamic`.
target.addLegalOp<MyOp>(...);
target.addDynamicallyLegalOp<MySecondOp>(...);

/// Mark the operation as always recursively legal.
target.markOpRecursivelyLegal<MyOp>();
/// Mark optionally with a callback to allow selective marking.
target.markOpRecursivelyLegal<MyOp, MySecondOp>([](Operation *op) { ... });
/// Mark optionally with a callback to allow selective marking.
target.markOpRecursivelyLegal<MyOp>([](MyOp op) { ... });
```

PiperOrigin-RevId: 277086382
2019-10-28 10:04:34 -07:00
Alexander Belyaev 780a108d31 Fix include guards and add tests for OpToFuncCallLowering.
PiperOrigin-RevId: 276859463
2019-10-26 08:21:36 -07:00
Smit Hinsu cde337cfde Define AnyRankedTensor Type in TableGen
PiperOrigin-RevId: 276714649
2019-10-25 10:31:56 -07:00
River Riddle b69e8ee049 Add support for parsing multiple result name groups.
This allows for parsing things like:

%name_1, %name_2:5, %name_3:2 = "my.op" ...

This is useful for operations that have groups of variadic result values. The
total number of results is expected to match the number of results defined by
the operation.

PiperOrigin-RevId: 276703280
2019-10-25 09:34:02 -07:00
Denis Khalikov dd2e444325 [spirv] AccessChainOp canonicalization.
Combine chained `spirv::AccessChainOp` operations into one
`spirv::AccessChainOp` operation.

Closes tensorflow/mlir#198

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/198 from denis0x0D:sandbox/canon_access_chain 0cb87955a85511071143d62637ff939d0dabc2bd
PiperOrigin-RevId: 276609345
2019-10-24 18:41:34 -07:00
River Riddle 2b61b7979e Convert the Canonicalize and CSE passes to generic Operation Passes.
This allows for them to be used on other non-function, or even other function-like, operations. The algorithms are already generic, so this is simply changing the derived pass type. The majority of this change is just ensuring that the nesting of these passes remains the same, as the pass manager won't auto-nest them anymore.

PiperOrigin-RevId: 276573038
2019-10-24 15:01:09 -07:00
River Riddle ef43b56538 Add support for replacing all uses of a symbol.
This requires reconstructing the attribute dictionary of each operation containing a use.

PiperOrigin-RevId: 276520544
2019-10-24 10:47:27 -07:00
River Riddle 21ee4e987f Add @below and @above directives to verify-diagnostics.
This simplifies defining expected-* directives when there are multiple that apply to the next or previous line. @below applies the directive to the next non-designator line, i.e., the next line that does not contain an expected-* designator. @above applies to the previous non-designator line.

Examples:

// Expect an error on the next line that does not contain a designator.
// expected-remark@below {{remark on function below}}
// expected-remark@below {{another remark on function below}}
func @bar(%a : f32)

// Expect an error on the previous line that does not contain a designator.
func @baz(%a : f32)
// expected-remark@above {{remark on function above}}
// expected-remark@above {{another remark on function above}}

PiperOrigin-RevId: 276369085
2019-10-23 15:56:29 -07:00
Uday Bondhugula ad6925f479 Update loop.for verifier message
fix: nonnegative -> positive

Closes tensorflow/mlir#206

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/206 from bondhugula:bondhugula-patch-1 9a47ca7dfd230180a9df33e9a64b33d02252d30a
PiperOrigin-RevId: 276060885
2019-10-22 07:34:56 -07:00
Lei Zhang 020f9eb68c [DRR] Allow interleaved operands and attributes
Previously DRR assumed attributes appear after operands. This was the
previous requirement in ODS, but that changed some time ago. Fix
DRR to also support interleaved operands and attributes.

PiperOrigin-RevId: 275983485
2019-10-21 20:48:17 -07:00
Lei Zhang d9fe892e42 [spirv] Allow block arguments on spv.Branch(Conditional)
We will use block arguments as the way to model SPIR-V OpPhi in
the SPIR-V dialect.

This CL also adds a few useful helper methods to both ops to
get the block arguments.

Also added tests for branch weight (de)serialization.

PiperOrigin-RevId: 275960797
2019-10-21 17:32:00 -07:00
Alex Zinenko 5f867d26b4 Use LLVM_Type instead of AnyType in the definition of LLVM_CallOp
The type constraint had to be relaxed due to the order of lowering passes in
the examples, which has since been fixed. The relaxed version was still used by
the CUDA lowering for launch sizes of `index` type. This is not necessary since
the GPU dialect does not restrict the type of the launch size operands. Use an
LLVM type instead and restore the check in the LLVM_CallOp definition.

PiperOrigin-RevId: 275920109
2019-10-21 14:12:19 -07:00
River Riddle 4514cdd5eb Cleanup and rewrite Ch-4.md.
This change rewrites Ch-4.md to introduce interfaces in a detailed, step-by-step manner, adds examples, and fixes some errors.

PiperOrigin-RevId: 275887017
2019-10-21 11:32:39 -07:00
River Riddle 941a1c4332 NFC: Fix remaining usages of MulOp as matrix multiplication.
MulOp now represents an element-wise multiplication instead of a matrix multiplication.

PiperOrigin-RevId: 275886774
2019-10-21 11:31:32 -07:00
River Riddle 03d7be2aca NFC: Elide the value of a UnitAttr within nested attribute dictionaries.
This matches the behavior of the top level attribute dictionary.

PiperOrigin-RevId: 275879828
2019-10-21 11:02:07 -07:00
River Riddle 9ac459e871 Add a Symbol trait to simplify defining operations that represent symbols.
This trait provides name accessors, symbol use-list methods, and verification, with more to be added.

PiperOrigin-RevId: 275864554
2019-10-21 09:58:59 -07:00
River Riddle 1bdfc9e74d NFC: Fix typo : Retur -> Return
PiperOrigin-RevId: 275745931
2019-10-20 15:13:20 -07:00
Kazuaki Ishizaki f28c5aca17 Fix minor spelling tweaks (NFC)
Closes tensorflow/mlir#175

PiperOrigin-RevId: 275726876
2019-10-20 09:44:36 -07:00
Kazuaki Ishizaki 8bfedb3ca5 Fix minor spelling tweaks (NFC)
Closes tensorflow/mlir#177

PiperOrigin-RevId: 275692653
2019-10-20 00:11:34 -07:00
Christian Sigg c3e56cd12c Get active source lane predicate from shuffle instruction.
nvvm.shfl.sync.bfly optionally returns a predicate indicating whether the source lane was active. Support for this was added to clang in https://reviews.llvm.org/D68892.

Add an optional 'pred' unit attribute to the instruction to return this predicate. Specify this attribute in the partial warp reduction so we don't need to manually compute the predicate.

PiperOrigin-RevId: 275616564
2019-10-19 01:53:25 -07:00
Sean Silva 9c9a7e9268 Add support for function result attributes.
This allows dialect-specific attributes to be attached to func results (or, more specifically, to results of FunctionLike ops).

For example:

```
func @f() -> (i32 {my_dialect.some_attr = 3})
```

This attaches my_dialect.some_attr with value 3 to the first result of func @f.

Another more complex example:

```
func @g() -> (i32, f32 {my_dialect.some_attr = "foo", other_dialect.some_other_attr = [1,2,3]}, i1)
```

Here, the second result has two attributes attached.

PiperOrigin-RevId: 275564165
2019-10-18 16:03:28 -07:00
Nicolas Vasilache 9e7e297da3 Lower vector transfer ops to loop.for operations.
This allows mixing linalg operations with vector transfer operations (with additional modifications to affine ops) and is a step towards solving tensorflow/mlir#189.

PiperOrigin-RevId: 275543361
2019-10-18 14:10:10 -07:00