Commit Graph

2512 Commits

Author SHA1 Message Date
Andy Ly b15e2aec75 Have ValueUseIterator template use OperandType instead of IROperand.
This was causing some issues when using helper methods like llvm::make_early_inc_range on Value::getUses(), which yielded IROperand instead of OpOperand.
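A hedged illustration of the kind of iteration this unblocks (the function and
surrounding setup are assumed, not taken from the commit):
```c++
#include "mlir/IR/Operation.h"
#include "mlir/IR/Value.h"
#include "llvm/ADT/STLExtras.h"

// With the iterator templated on OpOperand, early-increment iteration over a
// value's uses yields OpOperand directly, so the owning op can be queried.
static void remarkOnUses(mlir::Value *value) {
  for (mlir::OpOperand &use : llvm::make_early_inc_range(value->getUses()))
    use.getOwner()->emitRemark("use of value");
}
```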

PiperOrigin-RevId: 262056425
2019-08-06 20:48:57 -07:00
River Riddle 8920afb0a6 NFC: Simplify ModuleTerminatorOp by using the HasParent trait.
PiperOrigin-RevId: 261962104
2019-08-06 11:46:32 -07:00
Andy Ly 55f2e24ab3 Remove ops in regions/blocks from worklist when parent op is being removed via GreedyPatternRewriteDriver::replaceOp.
This fixes a bug where ops inside the parent op are visited even though the parent op has been removed.
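A rough sketch of the idea behind the fix (the set-based worklist is an
assumption for illustration; the driver keeps its own bookkeeping):
```c++
#include "mlir/IR/Operation.h"
#include "llvm/ADT/SmallPtrSet.h"

// When an op that owns regions is about to be erased, also drop every op
// nested inside those regions from the worklist so it is never revisited.
static void removeWithNestedOps(
    mlir::Operation *op, llvm::SmallPtrSetImpl<mlir::Operation *> &worklist) {
  op->walk([&](mlir::Operation *nested) { worklist.erase(nested); });
  worklist.erase(op);
}
```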

PiperOrigin-RevId: 261953580
2019-08-06 11:08:54 -07:00
River Riddle 641fc7007c NFC: Simplify ModuleOp by using the SingleBlockImplicitTerminator trait.
PiperOrigin-RevId: 261944712
2019-08-06 10:33:45 -07:00
Lei Zhang 60f78453d7 Emit matchAndRewrite() for declarative rewrite rules
Previously we were emitting separate match() and rewrite()
methods, which required conveying a match state struct
in a unique_ptr across these two methods. Changing to
emit matchAndRewrite() simplifies the picture.
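A hedged sketch of the shape of the emitted code (the op name, condition, and
rewrite below are made up; only the single-method structure reflects the
change):
```c++
#include "mlir/IR/PatternMatch.h"

struct ExamplePattern : public mlir::RewritePattern {
  ExamplePattern(mlir::MLIRContext *context)
      : RewritePattern("dialect.some_op", /*benefit=*/1, context) {}

  // Matching and rewriting happen in one method, so no match-state struct has
  // to be carried between a match() call and a rewrite() call.
  mlir::PatternMatchResult
  matchAndRewrite(mlir::Operation *op,
                  mlir::PatternRewriter &rewriter) const override {
    if (op->getNumOperands() != 1)
      return matchFailure();
    rewriter.replaceOp(op, op->getOperand(0));
    return matchSuccess();
  }
};
```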

PiperOrigin-RevId: 261906804
2019-08-06 07:11:08 -07:00
Lei Zhang cd1c488ecd [spirv] Provide decorations in batch for op construction
Instead of setting the decoration attributes one by one
after constructing the op, this CL attaches all of them to
the attribute vector used to construct the op. This should
be simpler and more efficient.
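A hedged sketch of the construction style being described (the attribute name
and builder are illustrative, not the actual SPIR-V code):
```c++
#include "mlir/IR/Attributes.h"
#include "mlir/IR/Builders.h"
#include "llvm/ADT/SmallVector.h"

// Collect every decoration as a NamedAttribute up front, then hand the whole
// vector to the op's build/create call once, instead of calling setAttr()
// repeatedly on an already-constructed op.
static llvm::SmallVector<mlir::NamedAttribute, 4>
collectDecorations(mlir::Builder &builder) {
  llvm::SmallVector<mlir::NamedAttribute, 4> attributes;
  attributes.push_back(
      builder.getNamedAttr("relaxed_precision", builder.getUnitAttr()));
  // ... further decorations are appended in the same way ...
  return attributes;
}
```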

PiperOrigin-RevId: 261905578
2019-08-06 07:02:59 -07:00
Nicolas Vasilache 4b422a51ed Add a region to linalg.generic
This CL extends the Linalg GenericOp with an alternative way of specifying the body of the computation based on a single block region. The "fun" attribute becomes optional.
Either a SymbolRef "fun" attribute or a single block region must be specified to describe the side-effect-free computation. Upon lowering to loops, the new region body is inlined in the innermost loop.

The parser, verifier and pretty printer are extended.
Appropriate roundtrip, negative and lowering to loop tests are added.

PiperOrigin-RevId: 261895568
2019-08-06 05:50:36 -07:00
Nicolas Vasilache 24647750d4 Refactor Linalg ops to loop lowering (NFC)
This CL modifies the LowerLinalgToLoopsPass to use RewritePattern.
This will make it easier to inline Linalg generic functions and regions when emitting to loops in a subsequent CL.

PiperOrigin-RevId: 261894120
2019-08-06 05:38:16 -07:00
Diego Caballero 68587dfc15 Add TTI pass initialization to pass managers.
Many LLVM transformations benefit from knowing the target. This enables optimizations,
especially in a JIT context when the target is (generally) well-known.
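A hedged sketch of the general mechanism (not the MLIR code itself): seed the
legacy pass manager with the target's TTI when a TargetMachine is available.
```c++
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Target/TargetMachine.h"

static void addTargetInfo(llvm::legacy::PassManager &pm,
                          llvm::TargetMachine *tm) {
  // Wrap the target's TTI so later transformations can query it; fall back to
  // a default TargetIRAnalysis when no target machine is known.
  pm.add(llvm::createTargetTransformInfoWrapperPass(
      tm ? tm->getTargetIRAnalysis() : llvm::TargetIRAnalysis()));
}
```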

Closes tensorflow/mlir#49

PiperOrigin-RevId: 261840617
2019-08-05 22:14:27 -07:00
River Riddle a0df3ebd15 NFC: Implement OwningRewritePatternList as a class instead of a using directive.
This allows for proper forward declaration, as opposed to leaking the internal implementation via a using directive. This also allows for all pattern building to go through 'insert' methods on the OwningRewritePatternList, replacing uses of 'push_back' and 'RewriteListBuilder'.
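A minimal usage sketch, assuming two pattern classes (MyPatternA, MyPatternB)
derived from RewritePattern are defined elsewhere:
```c++
#include "mlir/IR/PatternMatch.h"

// Patterns are now added through insert<...>() on OwningRewritePatternList,
// replacing uses of push_back() and RewriteListBuilder. MyPatternA and
// MyPatternB are assumed to be RewritePattern subclasses defined elsewhere.
static void collectMyPatterns(mlir::MLIRContext *context,
                              mlir::OwningRewritePatternList &patterns) {
  patterns.insert<MyPatternA, MyPatternB>(context);
}
```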

PiperOrigin-RevId: 261816316
2019-08-05 18:38:22 -07:00
Suharsh Sivakumar 3657966e83 Fix header guard.
PiperOrigin-RevId: 261774919
2019-08-05 14:50:03 -07:00
Nicolas Vasilache ceb8d2d20e Drop linalg.range_intersect op
This op is not useful.

PiperOrigin-RevId: 261665736
2019-08-05 05:26:20 -07:00
Lei Zhang 496a42f291 Use SingleBlockImplicitTerminator trait for spv.module
This trait provides the ensureTerminator() utility function and
the checks to make sure a spv.module is indeed terminated with
spv._module_end.
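A hedged sketch of how the utility might be used (header path, accessors, and
surrounding setup are assumptions):
```c++
#include "mlir/Dialect/SPIRV/SPIRVOps.h"
#include "mlir/IR/Builders.h"

// ensureTerminator() comes with the SingleBlockImplicitTerminator trait: it
// appends the implicit terminator (here spv._module_end) if the region's
// single block does not already end with one.
static void finishModule(mlir::spirv::ModuleOp module, mlir::Builder &builder) {
  mlir::spirv::ModuleOp::ensureTerminator(
      module.getOperation()->getRegion(0), builder, module.getLoc());
}
```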

PiperOrigin-RevId: 261664153
2019-08-05 05:10:05 -07:00
Alex Zinenko 6059122601 Introduce custom syntax for llvm.func
Similar to all other LLVM dialect operations, llvm.func needs to have custom
syntax. Use the generic FunctionLike printer and parser to implement it.

PiperOrigin-RevId: 261641755
2019-08-05 01:57:54 -07:00
Denis Khalikov b36e3be3fc [mlir-translate] Fix test suite.
The LLVM IR printer was changed in LLVM r367755: it now prints value
numbers for unnamed function arguments.

Closes tensorflow/mlir#67

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/67 from denis0x0D:sandbox/fix_mlir_translate ae46844e66f34a02e0cf86782ddadc5bce58b30d
PiperOrigin-RevId: 261640048
2019-08-05 01:39:55 -07:00
Mehdi Amini d682877eb3 Remove non-needed includes from ConvertControlFlowToCFG.cpp (NFC)
The includes related to the LLVM dialect are not used in this file and
introduce an implicit dependency between the two libraries which isn't
reflected in the CMakeLists.txt, causing non-deterministic build failures.

PiperOrigin-RevId: 261576935
2019-08-04 10:59:18 -07:00
Alex Zinenko d043f0025b Fix ExecutionEngine post-update in upstream LLVM
LLVM r367686 changed the locking scheme to avoid potential deadlocks and the
related llvm::orc::ThreadSafeModule APIs ExecutionEngine was relying upon,
breaking the MLIR build.  Update our use of ThreadSafeModule to unbreak the
build.

PiperOrigin-RevId: 261566571
2019-08-04 07:48:01 -07:00
Lei Zhang 9d7655677f [ODS] Add new definitions for non-negative integer attributes
This CL added a new NonNegativeIntAttrBase class and two instantiations,
one for I32 and the other for I64.

PiperOrigin-RevId: 261513292
2019-08-03 16:58:52 -07:00
Mehdi Amini 0c3923e1dc Fix clang 5.0 by using type aliases for LLVM DenseSet/Map
When inlining the declaration for llvm::DenseSet/DenseMap in the mlir
namespace from a forward declaration, clang does not take the defaults
for the template parameters if they are declared later.

namespace llvm {
  template<typename Foo>
  class DenseMap;
}
namespace mlir {
  using llvm::DenseMap;
}
namespace llvm {
  template<typename Foo = int>
  class DenseMap {};
}

namespace mlir {
  DenseMap<> map;
}
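A hedged sketch of the direction of the fix (the real header forward-declares
instead of including; this only shows alias templates restating the defaults):
```c++
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/DenseSet.h"

namespace mlir {
// Alias templates that restate the template parameters and their defaults,
// rather than a bare `using llvm::DenseMap;` declaration.
template <typename KeyT, typename ValueT,
          typename KeyInfoT = llvm::DenseMapInfo<KeyT>>
using DenseMap = llvm::DenseMap<KeyT, ValueT, KeyInfoT>;

template <typename ValueT, typename ValueInfoT = llvm::DenseMapInfo<ValueT>>
using DenseSet = llvm::DenseSet<ValueT, ValueInfoT>;
} // namespace mlir
```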

PiperOrigin-RevId: 261495612
2019-08-03 11:35:50 -07:00
Nicolas Vasilache 600c47e77b Add a generic Linalg op
This CL introduces a linalg.generic op to represent generic tensor contraction operations on views.

A linalg.generic operation requires a number of attributes that are sufficient to emit the computation in scalar form as well as compute the appropriate subviews to enable tiling and fusion.

These attributes are very similar to the attributes for existing operations such as linalg.matmul etc and existing operations can be implemented with the generic form.

In the future, most existing operations can be implemented using the generic form.

This CL starts by splitting out most of the functionality of the linalg::NInputsAndOutputs trait into a ViewTrait that queries the per-instance properties of the op. This allows using the attribute information.

This exposes a verifier-ordering issue where ViewTrait::verify uses attributes but the verifiers for those attributes have not been run. The desired behavior would be for the verifiers of the attributes specified in the builder to execute first, but that is not the case atm. As a consequence, to emit proper error messages and avoid crashing, some of the
linalg.generic methods are written defensively, as in:
```
    unsigned getNumInputs() {
      // This is redundant with the `n_views` attribute verifier but ordering of verifiers
      // may exhibit cases where we crash instead of emitting an error message.
      if (!getAttr("n_views") || n_views().getValue().size() != 2)
        return 0;
```

In pretty-printed form, the specific attributes required for linalg.generic are factored out in an independent dictionary named "_". When parsing, its content is flattened and the "_" name is dropped. This allows using aliasing for reducing boilerplate at each linalg.generic invocation while benefiting from the Tablegen'd verifier form for each named attribute in the dictionary.

For instance, implementing linalg.matmul in terms of linalg.generic resembles:

```
func @mac(%a: f32, %b: f32, %c: f32) -> f32 {
  %d = mulf %a, %b: f32
  %e = addf %c, %d: f32
  return %e: f32
}
#matmul_accesses = [
  (m, n, k) -> (m, k),
  (m, n, k) -> (k, n),
  (m, n, k) -> (m, n)
]
#matmul_trait = {
  doc = "C(m, n) += A(m, k) * B(k, n)",
  fun = @mac,
  indexing_maps = #matmul_accesses,
  library_call = "linalg_matmul",
  n_views = [2, 1],
  n_loop_types = [2, 1, 0]
}
```

And can be used in multiple places as:
```
  linalg.generic #matmul_trait %A, %B, %C [other-attributes] :
    !linalg.view<?x?xf32>, !linalg.view<?x?xf32>, !linalg.view<?x?xf32>
```

In the future it would be great to have a mechanism to alias / register a new
linalg.op as a pair of linalg.generic, #trait.

Also, note that one could theoretically specify only the `doc` string and parse all the attributes from it.

PiperOrigin-RevId: 261338740
2019-08-02 09:53:41 -07:00
Jacques Pienaar 192039e8be Fully qualify DenseMap.
PiperOrigin-RevId: 261325481
2019-08-02 08:28:06 -07:00
Diego Caballero c19b72d3f3 Add StdIndexedValue to EDSC helpers
Add StdIndexedValue to the EDSC helpers so that we can use it
to generate std.load and std.store in EDSC.

Closes tensorflow/mlir#59

PiperOrigin-RevId: 261324965
2019-08-02 08:24:17 -07:00
Alex Zinenko 58e66d71e7 AffineDataCopyGeneration: don't use CL flag values inside the pass
The AffineDataCopyGeneration pass relied on command line flags for internal
logic in several places, which made it unusable in a library context (i.e.
outside a standalone mlir-opt binary that does the command line parsing).
Define the configuration in the pass constructor instead, defaulting to the
command-line values to maintain the original behavior.

PiperOrigin-RevId: 261322364
2019-08-02 08:04:30 -07:00
Alex Zinenko f579079f18 WritingAPass doc: demonstrate registration of a non-default-constructible pass
This functionality was added recently and is intended to ensure that parametric
passes can be configured programmatically and not only from command-line flags,
which are mostly useless outside of a standalone mlir-opt binary.
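A hedged sketch of the registration mechanism the doc now demonstrates (the
pass, its option, and the flag name are made up):
```c++
#include <memory>

#include "mlir/Pass/Pass.h"
#include "mlir/Pass/PassRegistry.h"

// A pass whose only constructor takes a parameter, i.e. it is not
// default-constructible.
struct MyParametricPass : public mlir::FunctionPass<MyParametricPass> {
  explicit MyParametricPass(unsigned option) : option(option) {}
  void runOnFunction() override { /* use `option` here */ }
  unsigned option;
};

// Registration supplies a construction lambda instead of relying on a default
// constructor.
static mlir::PassRegistration<MyParametricPass>
    reg("my-parametric-pass", "Example parametric pass (illustrative only)",
        [] { return std::make_unique<MyParametricPass>(/*option=*/42); });
```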

PiperOrigin-RevId: 261320932
2019-08-02 07:54:53 -07:00
Mehdi Amini 1ddd20bc40 Add missing include to DenseMap in MLIRContext.cpp
This is fixing the build of MLIR on MacOS when built within TensorFlow

PiperOrigin-RevId: 261223250
2019-08-01 16:39:00 -07:00
Uday Bondhugula 18b8d4352b Introduce explicit copying optimization by generalizing the DMA generation pass
Explicit copying to contiguous buffers is a standard technique to avoid
conflict misses and TLB misses, and improve hardware prefetching
performance. When done in conjunction with cache tiling, it nearly
eliminates all cache conflict and TLB misses, and a single hardware
prefetch stream is needed per data tile.

- generalize/extend DMA generation pass (renamed data copying pass) to
  perform either point-wise explicit copies to fast memory buffers or
  DMAs (depending on a cmd line option). All logic is the same as
  erstwhile -dma-generate.

- -affine-dma-generate is now renamed -affine-data-copy; when -dma flag is
  provided, DMAs are generated, or else explicit copy loops are generated
  (point-wise) by default.

- point-wise copying could be used for CPUs (or GPUs); below are some
  indicative performance numbers for a "C" version of the MLIR code compiled
  with and without this optimization (about 2x improvement here).

  With a matmul on 4096^2 matrices on a single core of an Intel Core i7
  Skylake i7-8700K with clang 8.0.0:

  clang -O3:                                 518s
  clang -O3 with MLIR tiling (128x128):      24.5s
  clang -O3 with MLIR tiling + data copying: 12.4s
  (code equivalent to test/Transforms/data-copy.mlir func @matmul)

- fix some misleading comments.

- change default fast-mem space to 0 (more intuitive now with the
  default copy generation using point-wise copies instead of DMAs)

On a simple 3-d matmul loop nest, code generated with -affine-data-copy:

```
  affine.for %arg3 = 0 to 4096 step 128 {
    affine.for %arg4 = 0 to 4096 step 128 {
      %0 = affine.apply #map0(%arg3, %arg4)
      %1 = affine.apply #map1(%arg3, %arg4)
      %2 = alloc() : memref<128x128xf32, 2>
      // Copy-in Out matrix.
      affine.for %arg5 = 0 to 128 {
        %5 = affine.apply #map2(%arg3, %arg5)
        affine.for %arg6 = 0 to 128 {
          %6 = affine.apply #map2(%arg4, %arg6)
          %7 = load %arg2[%5, %6] : memref<4096x4096xf32>
          affine.store %7, %2[%arg5, %arg6] : memref<128x128xf32, 2>
        }
      }
      affine.for %arg5 = 0 to 4096 step 128 {
        %5 = affine.apply #map0(%arg3, %arg5)
        %6 = affine.apply #map1(%arg3, %arg5)
        %7 = alloc() : memref<128x128xf32, 2>
        // Copy-in LHS.
        affine.for %arg6 = 0 to 128 {
          %11 = affine.apply #map2(%arg3, %arg6)
          affine.for %arg7 = 0 to 128 {
            %12 = affine.apply #map2(%arg5, %arg7)
            %13 = load %arg0[%11, %12] : memref<4096x4096xf32>
            affine.store %13, %7[%arg6, %arg7] : memref<128x128xf32, 2>
          }
        }
        %8 = affine.apply #map0(%arg5, %arg4)
        %9 = affine.apply #map1(%arg5, %arg4)
        %10 = alloc() : memref<128x128xf32, 2>
        // Copy-in RHS.
        affine.for %arg6 = 0 to 128 {
          %11 = affine.apply #map2(%arg5, %arg6)
          affine.for %arg7 = 0 to 128 {
            %12 = affine.apply #map2(%arg4, %arg7)
            %13 = load %arg1[%11, %12] : memref<4096x4096xf32>
            affine.store %13, %10[%arg6, %arg7] : memref<128x128xf32, 2>
          }
        }
        // Compute.
        affine.for %arg6 = #map7(%arg3) to #map8(%arg3) {
          affine.for %arg7 = #map7(%arg4) to #map8(%arg4) {
            affine.for %arg8 = #map7(%arg5) to #map8(%arg5) {
              %11 = affine.load %7[-%arg3 + %arg6, -%arg5 + %arg8] : memref<128x128xf32, 2>
              %12 = affine.load %10[-%arg5 + %arg8, -%arg4 + %arg7] : memref<128x128xf32, 2>
              %13 = affine.load %2[-%arg3 + %arg6, -%arg4 + %arg7] : memref<128x128xf32, 2>
              %14 = mulf %11, %12 : f32
              %15 = addf %13, %14 : f32
              affine.store %15, %2[-%arg3 + %arg6, -%arg4 + %arg7] : memref<128x128xf32, 2>
            }
          }
        }
        dealloc %10 : memref<128x128xf32, 2>
        dealloc %7 : memref<128x128xf32, 2>
      }
      %3 = affine.apply #map0(%arg3, %arg4)
      %4 = affine.apply #map1(%arg3, %arg4)
      // Copy out result matrix.
      affine.for %arg5 = 0 to 128 {
        %5 = affine.apply #map2(%arg3, %arg5)
        affine.for %arg6 = 0 to 128 {
          %6 = affine.apply #map2(%arg4, %arg6)
          %7 = affine.load %2[%arg5, %arg6] : memref<128x128xf32, 2>
          store %7, %arg2[%5, %6] : memref<4096x4096xf32>
        }
      }
      dealloc %2 : memref<128x128xf32, 2>
    }
  }
```

With -affine-data-copy -dma:

```
  affine.for %arg3 = 0 to 4096 step 128 {
    %0 = affine.apply #map3(%arg3)
    %1 = alloc() : memref<128xf32, 2>
    %2 = alloc() : memref<1xi32>
    affine.dma_start %arg2[%arg3], %1[%c0], %2[%c0], %c128_0 : memref<4096xf32>, memref<128xf32, 2>, memref<1xi32>
    affine.dma_wait %2[%c0], %c128_0 : memref<1xi32>
    %3 = alloc() : memref<1xi32>
    affine.for %arg4 = 0 to 4096 step 128 {
      %5 = affine.apply #map0(%arg3, %arg4)
      %6 = affine.apply #map1(%arg3, %arg4)
      %7 = alloc() : memref<128x128xf32, 2>
      %8 = alloc() : memref<1xi32>
      affine.dma_start %arg0[%arg3, %arg4], %7[%c0, %c0], %8[%c0], %c16384, %c4096, %c128_2 : memref<4096x4096xf32>, memref<128x128xf32, 2>, memref<1xi32>
      affine.dma_wait %8[%c0], %c16384 : memref<1xi32>
      %9 = affine.apply #map3(%arg4)
      %10 = alloc() : memref<128xf32, 2>
      %11 = alloc() : memref<1xi32>
      affine.dma_start %arg1[%arg4], %10[%c0], %11[%c0], %c128_1 : memref<4096xf32>, memref<128xf32, 2>, memref<1xi32>
      affine.dma_wait %11[%c0], %c128_1 : memref<1xi32>
      affine.for %arg5 = #map3(%arg3) to #map5(%arg3) {
        affine.for %arg6 = #map3(%arg4) to #map5(%arg4) {
          %12 = affine.load %7[-%arg3 + %arg5, -%arg4 + %arg6] : memref<128x128xf32, 2>
          %13 = affine.load %10[-%arg4 + %arg6] : memref<128xf32, 2>
          %14 = affine.load %1[-%arg3 + %arg5] : memref<128xf32, 2>
          %15 = mulf %12, %13 : f32
          %16 = addf %14, %15 : f32
          affine.store %16, %1[-%arg3 + %arg5] : memref<128xf32, 2>
        }
      }
      dealloc %11 : memref<1xi32>
      dealloc %10 : memref<128xf32, 2>
      dealloc %8 : memref<1xi32>
      dealloc %7 : memref<128x128xf32, 2>
    }
    %4 = affine.apply #map3(%arg3)
    affine.dma_start %1[%c0], %arg2[%arg3], %3[%c0], %c128 : memref<128xf32, 2>, memref<4096xf32>, memref<1xi32>
    affine.dma_wait %3[%c0], %c128 : memref<1xi32>
    dealloc %3 : memref<1xi32>
    dealloc %2 : memref<1xi32>
    dealloc %1 : memref<128xf32, 2>
  }
```

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#50

PiperOrigin-RevId: 261221903
2019-08-01 16:31:58 -07:00
Lei Zhang 7768ea9fb3 Qualify StringRef to fix Windows build failure
PiperOrigin-RevId: 261195069
2019-08-01 14:14:31 -07:00
Lei Zhang 00a7b6706d [spirv] Add support for specialization constant
This CL extends the existing spv.constant op to also support
specialization constants by adding an extra unit attribute
to it.

PiperOrigin-RevId: 261194869
2019-08-01 14:13:37 -07:00
Eric Schweitz b5fd117b23 Add FIR, the Flang project's IR, to the dialect registry.
Closes tensorflow/mlir#62

PiperOrigin-RevId: 261187850
2019-08-01 13:39:57 -07:00
Denis Khalikov 08ae08cbee [spirv] Add binary logical operations.
Add binary logical operations according to spec section 3.32.15:
OpIEqual, OpINotEqual, OpUGreaterThan, OpSGreaterThan,
OpUGreaterThanEqual, OpSGreaterThanEqual, OpULessThan, OpSLessThan,
OpULessThanEqual, OpSLessThanEqual.

Closes tensorflow/mlir#61

PiperOrigin-RevId: 261181281
2019-08-01 13:06:02 -07:00
Lei Zhang c72d849eb9 Replace the verifyUnusedValue directive with HasNoUseOf constraint
verifyUnusedValue is a bit strange given that it is specified in a
result pattern but used to generate match statements. Now that we are
able to support multi-result ops better, we can retire it and replace
it with a HasNoUseOf constraint. This reduces the number of mechanisms.

PiperOrigin-RevId: 261166863
2019-08-01 11:51:15 -07:00
Lei Zhang 88b175eea5 Migrate pattern symbol binding tests to use TestDialect
PiperOrigin-RevId: 261045611
2019-07-31 19:29:07 -07:00
Lei Zhang e032d0dc63 Fix support for auxiliary ops in declarative rewrite rules
We allow generating more ops than are needed to replace
the matched root op. Only the last N static values generated are
used as the replacement; the others serve as auxiliary ops/values for
building the replacement.

With the introduction of multi-result op support, an op, if used
as a whole, may be used to replace multiple static values of
the matched root op. We need to consider this when calculating
the result range a generated op is to replace.

For example, we can have the following pattern:

```tblgen
def : Pattern<(ThreeResultOp ...),
              [(OneResultOp ...), (OneResultOp ...), (OneResultOp ...)]>;

// Two ops to replace all three results
def : Pattern<(ThreeResultOp ...),
              [(TwoResultOp ...), (OneResultOp ...)]>;

// One op to replace all three results
def : Pat<(ThreeResultOp ...), (ThreeResultOp ...)>;

def : Pattern<(ThreeResultOp ...),
              [(AuxiliaryOp ...), (ThreeResultOp ...)]>;
```
PiperOrigin-RevId: 261017235
2019-07-31 16:03:42 -07:00
Lei Zhang e44ba1f8bf NFC: refactor ODS builder generation
Previously we used one single method with lots of branches to
generate multiple builders. This made the method difficult
to follow and modify. This CL splits the method into multiple
dedicated ones, by extracting common logic into helper methods
while leaving logic specific to each builder in their own
methods.

PiperOrigin-RevId: 261011082
2019-07-31 15:31:13 -07:00
Mahesh Ravishankar cf66d7bb74 Use operand number during serialization to get the <id>s of the operands
During serialization, the operand number must be used to get the
values associated with an operand. Using the argument number in the Op
specification was wrong since some of the elements in the arguments
list might be attributes on the operation. This resulted in a segfault
during serialization.
Add a test that exercises that path.
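An illustrative sketch of the distinction (not the serializer's actual code):
```c++
#include "mlir/IR/Operation.h"

// Operands are the SSA values the op consumes; the ODS arguments list may
// interleave attributes with them, so an argument index is not an operand
// index. Walk the operands directly to reach the values whose <id>s the
// serializer needs.
static void visitOperandValues(mlir::Operation *op) {
  for (unsigned i = 0, e = op->getNumOperands(); i < e; ++i) {
    mlir::Value *operand = op->getOperand(i);
    (void)operand; // the <id> assigned to this value would be looked up here
  }
}
```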

PiperOrigin-RevId: 260977758
2019-07-31 12:34:51 -07:00
Denis Khalikov ce358f9b37 [spirv] Add binary arithmetic operations tensorflow/mlir#2.
Add binary operations such as: OpUDiv, OpSDiv, OpUMod, OpSRem, OpSMod.

Closes tensorflow/mlir#56

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/56 from denis0x0D:sandbox/bin_ops_int 4959325a693b4658b978a8b97f79b8237eb39764
PiperOrigin-RevId: 260961681
2019-07-31 11:10:50 -07:00
Mahesh Ravishankar 3867ed86eb Add missing include file to StringExtrasTest.cpp
Use of std::isupper and std::islower needs the <cctype> header file. Fix
that and also fix the header of a file to match the file name.

PiperOrigin-RevId: 260816852
2019-07-30 16:11:40 -07:00
Alex Zinenko 206be96e63 Support hexadecimal floats in tensor literals
Extend the recently introduced support for hexadecimal float literals to tensor
literals, which may also contain special floating point values such as
infinities and NaNs.

Modify TensorLiteralParser to store the list of tokens representing values
until the type is parsed instead of trying to guess the tensor element type
from the token kinds (hexadecimal values can be either integers or floats, and
can be mixed with both).  Maintain the error reports as close as possible to
the existing implementation to avoid disturbing the tests.  They can be
improved in a separate clean-up if deemed necessary.

PiperOrigin-RevId: 260794716
2019-07-30 14:24:59 -07:00
Mahesh Ravishankar 1de519a753 Add support for (de)serialization of SPIR-V Op Decorations
All non-argument attributes specified for an operation are treated as
decorations on the result value and (de)serialized using the OpDecorate
instruction. An error is generated if an attribute is not an argument
and its name doesn't correspond to a Decoration enum. Names of the
attributes that represent decorations are to be the snake-case-ified
version of the Decoration name.
Add utility methods to convert to snake-case and camel-case.
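A hedged sketch of what such a snake-case helper could look like (the actual
utility added by this change may differ):
```c++
#include <cctype>
#include <string>

#include "llvm/ADT/StringRef.h"

// Converts e.g. "RelaxedPrecision" to "relaxed_precision".
static std::string convertToSnakeCase(llvm::StringRef input) {
  std::string result;
  for (char c : input) {
    if (std::isupper(static_cast<unsigned char>(c))) {
      if (!result.empty() && result.back() != '_')
        result += '_';
      result += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    } else {
      result += c;
    }
  }
  return result;
}
```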

PiperOrigin-RevId: 260792638
2019-07-30 14:15:03 -07:00
Alex Zinenko 3b207d3691 Add support for hexadecimal float literals
MLIR does not have support for parsing special floating point values such as
infinities and NaNs.  If programmatically constructed, these values are printed
as NaN and (+-)Inf and cannot be parsed back.  Add parser support for
hexadecimal literals in float attributes, following LLVM IR.  The literal
corresponds to the in-memory representation of the floating point value.
IEEE 754 defines a range of possible values for NaNs, storing the bitwise
representation allows MLIR to properly roundtrip NaNs with different bit values
of significands.
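A small illustration of why storing the bit pattern matters (plain LLVM
APFloat usage, not the MLIR parser code): two NaNs can differ only in their
significand bits, and only the bitwise form preserves that distinction.
```c++
#include <cstdint>

#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/APInt.h"

static void nanBitsExample() {
  // Two single-precision NaNs that differ only in their significand bits.
  llvm::APFloat nanA(llvm::APFloat::IEEEsingle(), llvm::APInt(32, 0x7FC00000));
  llvm::APFloat nanB(llvm::APFloat::IEEEsingle(), llvm::APInt(32, 0x7FC00001));
  // Both print as NaN, but their bit patterns differ; a hexadecimal literal
  // stores exactly these bits and therefore round-trips the distinction.
  uint64_t bitsA = nanA.bitcastToAPInt().getZExtValue(); // 0x7FC00000
  uint64_t bitsB = nanB.bitcastToAPInt().getZExtValue(); // 0x7FC00001
  (void)bitsA;
  (void)bitsB;
}
```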

The initial version of this commit was missing support for float literals that
used to be printed in decimal notation as a fallback, but ended up being
printed in hexadecimal format which became the fallback for special values.
The decimal fallback behavior was not exercised by tests.  It is currently
reinstated and tested by the newly added test @f32_potential_precision_loss in
parser.mlir.

PiperOrigin-RevId: 260790900
2019-07-30 14:06:26 -07:00
Mahesh Ravishankar 32f78fe3f2 Link in MLIRGPUtoSPIRVTransforms with mlir-opt
Add a missing library that needs to be linked with mlir-opt. Without
it, a test in MLIR fails because the pass `-convert-gpu-to-spirv` is
not found.

PiperOrigin-RevId: 260773067
2019-07-30 12:39:43 -07:00
Jacques Pienaar 81a7c322e4 Add std::move in UniformSupport.
Fixes build warnings on clang-8; there are no redundant-move warnings on
gcc (6.5, 7.4, 8.3).

Closes tensorflow/mlir#41

PiperOrigin-RevId: 260764269
2019-07-30 11:56:16 -07:00
Mahesh Ravishankar ea56025f1e Initial implementation to translate kernel fn in GPU Dialect to SPIR-V Dialect
This CL adds an initial implementation for translation of kernel
function in GPU Dialect (used with a gpu.launch_kernel) op to a
spv.Module. The original function is translated into an entry
function.
Most of the heavy lifting is done by adding TypeConversion and other
utility functions/classes that provide most of the functionality to
translate from Standard Dialect to SPIR-V Dialect. These are intended
to be reusable in implementation of different dialect conversion
pipelines.
Note: Some of the files have been renamed to be consistent with
the norm used by the other Conversion frameworks.
PiperOrigin-RevId: 260759165
2019-07-30 11:55:55 -07:00
Lei Zhang 4a55bd5f28 [spirv] Add basic infrastructure for negative deserializer tests
We are relying on the serializer to construct positive cases to drive
the tests for the deserializer. This leaves negative cases untested.

This CL adds a basic test fixture for covering the negative
corner cases to enforce a more robust deserializer.

Refactored common SPIR-V building methods out of the serializer to
share them with the deserialization test.

PiperOrigin-RevId: 260742733
2019-07-30 11:55:33 -07:00
Denis Khalikov 4598c04dfe [spirv] Add binary arithmetic operations.
Add binary operations such as: OpIAdd, OpFAdd, OpISub, OpFSub, OpIMul,
OpFDiv, OpFRem, OpFMod.

Closes tensorflow/mlir#54

PiperOrigin-RevId: 260734166
2019-07-30 11:55:12 -07:00
Jacques Pienaar 4be7e8627f Remove dead code.
PiperOrigin-RevId: 260585594
2019-07-30 06:17:57 -07:00
Alex Zinenko c7dab559ba RewriterGen: properly handle zero-result ops
RewriterGen was emitting invalid C++ code if the pattern required creating a
zero-result operation, due to the absence of a special case that would avoid
generating a spurious comma.  Handle this case.  Also add rewriter tests for
zero-argument operations.

PiperOrigin-RevId: 260576998
2019-07-30 06:17:50 -07:00
Mehdi Amini 395c70c600 Fix SingleBlockImplicitTerminator traits to catch empty blocks
The code was written with the assumption that on failure an error would be
issued by another verifier. However, verification stops on the first
failure, which led to an empty output. Instead we make sure an error is
displayed.
Also add tests in the test dialect for this trait.

PiperOrigin-RevId: 260541290
2019-07-30 06:17:35 -07:00
Mehdi Amini b910d89264 Simplify ODS for loop.if and loop.for traits (NFC)
There is a wrapper for SingleBlockImplicitTerminator in ODS; it is nicer to
read than using `NativeOpTrait`.

PiperOrigin-RevId: 260539473
2019-07-30 06:17:27 -07:00
Mahesh Ravishankar 673bb7cbbe Enable (de)serialization support for spirv::AccessChainOp
Automatic generation of spirv::AccessChainOp (de)serialization needs
the (de)serialization emitters to handle arguments specified as
Variadic<...>. To handle this correctly, such an argument can only be
the last entry in the arguments list.
Add a test to (de)serialize spirv::AccessChainOp.

PiperOrigin-RevId: 260532598
2019-07-30 06:17:19 -07:00