Commit Graph

3131 Commits

Author SHA1 Message Date
Alexander Belyaev 9630fcbc52 Lower linalg.indexed_generic with libcall to LLVM.
PiperOrigin-RevId: 283328994
2019-12-02 06:30:52 -08:00
Alex Zinenko d5e627f84b Introduce Linkage attribute to the LLVM dialect
LLVM IR supports linkage on global objects such as global variables and
functions. Introduce the Linkage attribute into the LLVM dialect, backed by an
integer storage. Use this attribute on LLVM::GlobalOp and make it mandatory.
Implement parsing/printing of the attribute and conversion to LLVM IR.

See tensorflow/mlir#277.
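
As a rough sketch of the resulting syntax (the symbol name and initializer are illustrative, and the exact printed form may differ), a global carrying an explicit linkage keyword could look like:

  llvm.mlir.global internal @foo(42 : i32) : !llvm.i32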

PiperOrigin-RevId: 283309328
2019-12-02 03:28:10 -08:00
Jacques Pienaar 2235333d58 mlir-tblgen: Dump input records when no generator is set
Follow LLVM's tblgen convention when no generator is set instead of asserting.

PiperOrigin-RevId: 283073690
2019-11-29 10:43:58 -08:00
Jacques Pienaar 52a7415178 Fix redundant convert and use NamedAttributeList as value
* Had leftover call that would result in converting to dictionary attr before
  being implicitly converted back to NamedAttributeList;
* NamedAttributeList is value typed, so don't use const reference;

PiperOrigin-RevId: 283072576
2019-11-29 10:26:56 -08:00
JKIsaacLee c9721e9a2b Fixed typo in Ch-1 of Toy tutorial
Closes tensorflow/mlir#282

PiperOrigin-RevId: 283064785
2019-11-29 08:48:54 -08:00
Denis Khalikov cd556f25de [spirv] Check that operand of `spirv::CompositeExtractOp` is constant while folding.
Closes tensorflow/mlir#281

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/281 from denis0x0D:sandbox/composite_ex_fold d02d73658bd1b9eaa515eb4e0aee34bc41d4252b
PiperOrigin-RevId: 282971563
2019-11-28 13:27:56 -08:00
Alex Zinenko 2f16bf7ac9 Split out FunctionLike printing/parsing into FunctionImplementation.{h,cpp}
Helper utilities for parsing and printing FunctionLike Ops are only relevant to
the implementation of the Op, not its definition. They depend on
OpImplementation.h and increase the inclusion footprint of FunctionSupport.h,
and do so only to provide some utilities in the "impl" namespace. Move them to
separate files, similar to the OpDefinition/OpImplementation distinction, and
make only Op implementations use them while keeping headers cleaner. NFC.

PiperOrigin-RevId: 282964556
2019-11-28 11:51:23 -08:00
Jose Ignacio Gomez 0494ef60f7 [Linalg] Change attribute n_loop_types to iterator
This addresses issue tensorflow/mlir#270. Linalg is updated to take the same form
of iterator_types as vector contraction.
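
As a rough sketch (the trait name, maps, and argument counts below are illustrative, mirroring the style of the Linalg tests), a generic op's trait now carries iterator_types rather than a loop-type count:

  #matvec_accesses = [
    (m, n) -> (m, n),
    (m, n) -> (n),
    (m, n) -> (m)
  ]
  #matvec_trait = {
    args_in = 2,
    args_out = 1,
    indexing_maps = #matvec_accesses,
    iterator_types = ["parallel", "reduction"]
  }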

Closes tensorflow/mlir#280

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/280 from tetuante:PRissue270 d26d88d090d3765d3b9884bfabdd023143f27287
PiperOrigin-RevId: 282905396
2019-11-28 01:59:55 -08:00
Lei Zhang 5810efe1f1 NFC: A few cleanups for SPIRVLowering
Updated comments and used static instead of an anonymous namespace
to hide functions to be consistent with the existing codebase.

PiperOrigin-RevId: 282847784
2019-11-27 15:55:42 -08:00
Lei Zhang a4d7650230 [spirv] NFC: Add getZero() and getOne() static method to ConstantOp
Getting constant zero or one is very common so it merits a special handy
method on spirv::ConstantOp itself.

PiperOrigin-RevId: 282832572
2019-11-27 14:13:01 -08:00
Lei Zhang d4e4387fbf [spirv] Add folders for spv.IAdd and spv.IMul
Adding zero and multiplying by one are common when generating code
for index calculation.
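
For example, IR along the lines of the following sketch (with %x some existing i32 value) should now fold away:

  %zero = spv.constant 0 : i32
  %add = spv.IAdd %x, %zero : i32    // folds to %x
  %one = spv.constant 1 : i32
  %mul = spv.IMul %x, %one : i32     // folds to %x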

This CL also sorted canonicalize.mlir to alphabetical order.

PiperOrigin-RevId: 282828055
2019-11-27 13:46:52 -08:00
Aart Bik 9f89c34f4b Fixed typo in Toy tutorial (second var e -> var f)
PiperOrigin-RevId: 282810649
2019-11-27 11:58:45 -08:00
Nicolas Vasilache 1fa8c8070b Implement Linalg to loops lowering as a pattern
This CL rewrites the Linalg ops-to-loops transformations as patterns that can be targeted directly from Tablegen. Reliance on OpFolder is removed and, to cope with it, we introduce local folding patterns that are applied greedily.

PiperOrigin-RevId: 282765550
2019-11-27 07:32:13 -08:00
Aart Bik e2232fbcee [VectorOps] Refine BroadcastOp in VectorOps dialect
Since the second argument is always fully overwritten and its
shape is defined in the "to" clause, it is not needed.
Also renamed "into" to "to" now that the argument is dropped.
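
A minimal sketch of the resulting form:

  %v = vector.broadcast %s : f32 to vector<16xf32>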

PiperOrigin-RevId: 282686475
2019-11-26 19:52:38 -08:00
Jacques Pienaar f27ceb7261 Add create method that takes equivalent of OperationState with NamedAttributeList
This method is close to creating an OperationState first and then unpacking it
but avoids creating the OperationState and takes a NamedAttributeList for
attributes rather than an array of NamedAttribute (to enable reusing an already
created NamedAttributeList).

Reuse this new method via create that takes OperationState. I'll update inferReturnTypes in a follow-up to also take NamedAttributeList, so that a build method that uses both inferReturnTypes and create can reuse the same list.

PiperOrigin-RevId: 282651642
2019-11-26 15:30:35 -08:00
Aart Bik cf97263cb8 [VectorOps] Add a BroadcastOp to the VectorOps dialect
PiperOrigin-RevId: 282643305
2019-11-26 14:43:31 -08:00
David Truby 18aec3e2e5 Add OpenMP dialect to the dialect registry
Closes tensorflow/mlir#244

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/244 from DavidTruby:openmp 30e2638ee678188575dd5aeb3f7fa51d93369f5f
PiperOrigin-RevId: 282607397
2019-11-26 11:38:20 -08:00
Mahesh Ravishankar 03620fa70a Misc changes to lowering to SPIR-V.
These changes to SPIR-V lowering were made while adding support for
lowering SubViewOp, but are not directly related to it.
- Change the lowering of MemRefType to
  !spv.ptr<!spv.struct<!spv.array<...>[offset]>, ..>
  This is consistent with the Vulkan spec.
- To enable testing a simple pattern of lowering functions is added to
  ConvertStandardToSPIRVPass. This is just used to convert the type of
  the arguments of the function. The added function lowering itself is
  not meant to be the way functions are eventually lowered into SPIR-V
  dialect.
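
Schematically (the storage class and sizes are chosen for illustration), a type such as memref<8xf32> would now lower to something of the form:

  !spv.ptr<!spv.struct<!spv.array<8 x f32> [0]>, StorageBuffer>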

PiperOrigin-RevId: 282589644
2019-11-26 10:11:34 -08:00
Nicolas Vasilache 9059cf392d Automated rollback of commit d60133f89b
PiperOrigin-RevId: 282574110
2019-11-26 08:47:48 -08:00
Nicolas Vasilache 109338085d Relax restriction on affine_apply dim and symbol operands
The affine_apply operation is currently "doubly" affine and conflates two things:
1. it applies an affine map to a list of values of type `index` that are defined as either dim or symbol
2. it restricts (and propagates constraints on) the provenance of dims and symbols to a small subset of ops for which more restrictive polyhedral constraints apply.

Point 2. is related to the ability to form so-called static control parts and is related to dependence analysis and legality of transformations.

Point 1. however is completely independent: the only local implication of dims and symbols for affine_apply is that dims compose while symbols concatenate, along with the structural constraint that dims may not be multiplied.

The properties of composition and canonicalization in affine_apply are more generally useful. This CL relaxes the verifier on affine_apply so it can be used more generally.

The relevant affine.for/if/load/store op verifiers already implement the dim and symbol checking.

See this thread for the related discussion: https://groups.google.com/a/tensorflow.org/g/mlir/c/HkwCbV8D9N0/m/8srUNrX6CAAJ
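
A sketch of what is now accepted (names are illustrative): the operands of affine_apply no longer have to originate from the restricted set of dim/symbol-producing ops.

  #map = (d0)[s0] -> (d0 + s0)

  %i = addi %a, %b : index          // an arbitrary index computation
  %v = affine.apply #map(%i)[%n]    // accepted by the relaxed verifier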

PiperOrigin-RevId: 282562517
2019-11-26 07:39:05 -08:00
Andrew Anderson a50f871e8d Some minor corrections and improvements to LangRef
Some productions in the LangRef were using undefined terminals and non-terminals, which have been added to the EBNF.
The dialect type and dialect attribute productions matched precisely the same structure and have been deduplicated.
The production for ssa-id was ambiguous but the fix is trivial (merging the leading '%') and has been applied.

Closes tensorflow/mlir#265

PiperOrigin-RevId: 282470892
2019-11-25 17:53:52 -08:00
Lei Zhang 13c6e419ca Add support for AttrSizedOperandSegments/AttrSizedResultSegments
Certain operations can have multiple variadic operands and their size
relationship is not always known statically. For such cases, we need
a per-op-instance specification to divide the operands into logical
groups or segments. This can be modeled by attributes.

This CL introduces C++ trait AttrSizedOperandSegments for operands and
AttrSizedResultSegments for results. The C++ trait just guarantees that
such a size attribute has the correct type (1D vector) and values
(non-negative), etc. It serves as the basis for ODS sugaring: with ODS
argument declarations we can further verify that the number of elements
matches the number of ODS-declared operands, and we can generate handy
getter methods.
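
In the generic form this could look like the sketch below (the op and attribute names are assumed for illustration), where the size attribute partitions four operands into segments of 1, 2, and 1:

  "my.op"(%a, %b, %c, %d)
      {operand_segment_sizes = dense<[1, 2, 1]> : vector<3xi32>}
      : (i32, i32, i32, i32) -> ()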

PiperOrigin-RevId: 282467075
2019-11-25 17:26:50 -08:00
Nicolas Vasilache 174076a157 Use vector.InsertStridedSlice in Vector -> Vector unrolling
This CL uses the recently added op to finish the implementation of Vector -> Vector unrolling by replacing the "fake join op" by a series of InsertStridedSliceOp.

Test is updated accordingly

PiperOrigin-RevId: 282451126
2019-11-25 15:56:37 -08:00
Nicolas Vasilache 36469f7d2a Add a vector.InsertStridedSliceOp
This new op is the counterpart of vector.StridedSliceOp and will be used in the pattern rewrites for vector unrolling.
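
A sketch of the intended usage (attribute spelling assumed):

  %r = vector.insert_strided_slice %small, %big
      {offsets = [0, 2], strides = [1, 1]}
      : vector<2x2xf32> into vector<4x4xf32>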

PiperOrigin-RevId: 282447414
2019-11-25 15:37:13 -08:00
MLIR Team 1012c492f0 Allow LLVM::ExtractElementOp to have non-i32 indices.
Also change the text format a bit, so that indices are enclosed in square brackets.
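
After this change an extraction with a non-i32 index reads roughly as follows (a sketch, using the LLVM dialect type spelling of the time):

  %e = llvm.extractelement %v[%idx : i64] : !llvm<"<4 x float>">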

PiperOrigin-RevId: 282437095
2019-11-25 14:44:52 -08:00
Ben Vanik 38d7870ee5 Make std.divis and std.diviu support ElementsAttr folding.
PiperOrigin-RevId: 282434465
2019-11-25 14:31:43 -08:00
Mahesh Ravishankar f87b2fd41b NFC: Actually expose the implementation of createGPUToSPIRVLoweringPass.
A mismatch between the function declaration and the function definition
prevented the implementation of createGPUToSPIRVLoweringPass from
being exposed.

PiperOrigin-RevId: 282419815
2019-11-25 13:19:53 -08:00
Mahesh Ravishankar 7fd46bf258 Add missing rule to generate SPIR-V ABI Attribute using tblgen to CMake.
PiperOrigin-RevId: 282415592
2019-11-25 12:57:16 -08:00
Andy Davis 8fc44a4d13 Update VectorContractionOp to take iterator types and index mapping attributes compatible with linalg ops.
PiperOrigin-RevId: 282412311
2019-11-25 12:40:00 -08:00
Christian Sigg d60133f89b Changing directory shortcut for CPU/GPU runner utils.
Moving cuda-runtime-wrappers.so into a subdirectory to match libmlir_runner_utils.so.
Provide the parent directory when running the test and load the .so from the subdirectory.

PiperOrigin-RevId: 282410749
2019-11-25 12:30:54 -08:00
Lei Zhang 9b6e6cef68 De-duplicate EnumAttr overrides by defining defaults
EnumAttr should provide meaningful defaults so concrete instances
do not need to duplicate the fields.

PiperOrigin-RevId: 282398431
2019-11-25 11:29:55 -08:00
Mahesh Ravishankar bd485afda0 Introduce attributes that specify the final ABI for a spirv::ModuleOp.
To simplify the lowering into SPIR-V, while still respecting the ABI
requirements of SPIR-V/Vulkan, split the process into two steps:
1) While lowering a function to SPIR-V (when the function is an entry
   point function), allow specifying attributes on arguments and
   function itself that describe the ABI of the function.
2) Add a pass that materializes the ABI described in the function.

Two attributes are needed.
1) Attribute on arguments of the entry point function that describe
   the descriptor_set, binding, storage class, etc, of the
   spv.globalVariable this argument will be replaced by
2) Attribute on function that specifies workgroup size, etc. (for now
   only workgroup size).

Add the pass -spirv-lower-abi-attrs to materialize the ABI described
by the attributes.

This change makes the SPIRVBasicTypeConverter class unnecessary, so it
is removed, further simplifying the SPIR-V lowering path.
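
A hedged sketch of what such annotations might look like on an entry point function (the attribute names and the numeric storage-class encoding are assumptions for illustration):

  func @compute(%arg0: memref<?xf32>
      {spv.interface_var_abi = {descriptor_set = 0 : i32, binding = 0 : i32,
                                storage_class = 12 : i32}})
      attributes {spv.entry_point_abi = {local_size = dense<[32, 1, 1]> : vector<3xi32>}} {
    return
  }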

PiperOrigin-RevId: 282387587
2019-11-25 11:19:56 -08:00
Mahesh Ravishankar 1ea231bd39 Allow memref_cast from static strides to dynamic strides.
Memref_cast supports casting from static-shape to dynamic-shape
memrefs. The same should be true for strides as well, i.e., a memref
with static strides can be cast to a memref with dynamic strides.
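
For example (a sketch using the strided-memref shorthand):

  %dyn = memref_cast %static
      : memref<16x4xf32, offset: 0, strides: [4, 1]>
     to memref<16x4xf32, offset: 0, strides: [?, ?]>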

PiperOrigin-RevId: 282381862
2019-11-25 11:08:56 -08:00
Nicolas Vasilache 01145544aa Add vector.insertelement op
This is the counterpart of the vector.extractelement op and has the same
limitations at the moment (a static I64IntegerArrayAttr to express the position).
This restriction will be lifted in the future.
LLVM lowering will be added in a subsequent commit.

PiperOrigin-RevId: 282365760
2019-11-25 08:47:15 -08:00
Alex Zinenko bf4692dc49 Introduce gpu.func
Introduce a new function-like operation to the GPU dialect to provide a
placeholder for the execution semantic description and to add support for GPU
memory hierarchy.  This aligns with the overall goal of the dialect to expose
the common abstraction layer for GPU devices, in particular by providing an
MLIR unit of semantics (i.e. an operation) for memory modeling.

This proposal has been discussed in the mailing list:
https://groups.google.com/a/tensorflow.org/d/msg/mlir/RfXNP7Hklsc/MBNN7KhjAgAJ
As decided, the "convergence" aspect of the execution model will be factored
out into a new discussion and therefore is not included in this commit. This
commit only introduces the operation but does not hook it up with the remaining
flow. The intention is to develop the new flow while keeping the old flow
operational and do the switch in a simple, separately reversible commit.
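
A minimal sketch of the new op, with memory attributions for the GPU memory hierarchy (memory-space numbers are illustrative):

  gpu.func @kernel(%arg0: f32, %arg1: memref<?xf32>)
      workgroup(%wg: memref<32xf32, 3>)
      private(%priv: memref<1xf32, 5>) {
    // %wg and %priv act as workgroup- and private-memory buffers
    gpu.return
  }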

PiperOrigin-RevId: 282357599
2019-11-25 08:10:37 -08:00
Ben Vanik d2284f1f0b Support folding of StandardOps with DenseElementsAttr.
PiperOrigin-RevId: 282270243
2019-11-24 19:23:38 -08:00
Lei Zhang ae821fe626 NFC: Wire up DRR settings for SPIR-V canonicalization patterns
This CL added necessary files and settings for using DRR to
write SPIR-V canonicalization patterns and also converted the
patterns for spv.Bitcast and spv.LogicalNot.

PiperOrigin-RevId: 282132786
2019-11-23 06:59:23 -08:00
Lei Zhang aaafeac89b [spirv] NFC: rename test files and sort tests inside
PiperOrigin-RevId: 282132339
2019-11-23 06:58:38 -08:00
Uday Bondhugula 6a101671b0 Make isValidSymbol more powerful
The check in isValidSymbol, as far as a DimOp result went, checked if
the dim op was on a top-level memref. However, any alloc'ed, view, or
subview memref would be fine as long as the corresponding dimension of
that memref is either static or, in the case of a dynamic dimension, was
in turn created using a valid symbol.
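
For instance (a sketch), the result of dim taken on a static dimension of an alloc'ed memref now qualifies as a symbol:

  %m = alloc(%n) : memref<?x128xf32>
  %d = dim %m, 1 : memref<?x128xf32>   // static dimension of a non-top-level memref
  affine.for %i = 0 to %d {
    // %d is accepted as a symbol here
  }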

Reported-by: Jose Gomez

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#252

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/252 from bondhugula:symbol 7b57dc394df9375e651f497231c6e4525a32a662
PiperOrigin-RevId: 282097114
2019-11-22 22:09:31 -08:00
River Riddle b8ee563449 NFC: Remove unnecessarily guarded tablegen includes.
Support for including a file multiple times was added in tablegen, removing the need for these extra guards. This is because we already insert C/C++-style header guards within each of the specific .td files.

PiperOrigin-RevId: 282076728
2019-11-22 18:01:57 -08:00
Nicolas Vasilache 9a62ec8c96 Fix Windows Build
PiperOrigin-RevId: 282048102
2019-11-22 15:07:31 -08:00
Denis Khalikov a5cda4763f [spirv] Add a canonicalizer for `spirv::LogicalNotOp`.
Add a canonicalizer for `spirv::LogicalNotOp`.
Converts:
* spv.LogicalNot(spv.IEqual(...)) -> spv.INotEqual(...)
* spv.LogicalNot(spv.INotEqual(...)) -> spv.IEqual(...)
* spv.LogicalNot(spv.LogicalEqual(...)) -> spv.LogicalNotEqual(...)
* spv.LogicalNot(spv.LogicalNotEqual(...)) -> spv.LogicalEqual(...)

Also moved the test for spv.IMul to the arithmetic tests.

Closes tensorflow/mlir#256

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/256 from denis0x0D:sandbox/canon_logical_not 76ab5787b2c777f948c8978db061d99e76453d44
PiperOrigin-RevId: 282012356
2019-11-22 12:25:52 -08:00
Mahesh Ravishankar 6db8530c26 Add more canonicalizations for SubViewOp.
Depending on which of the offsets, sizes, or strides are constant, the
subview op can be canonicalized in different ways. Add such
canonicalizations, which generalize the existing approach of
canonicalizing the subview op only when all of the offsets, sizes and
strides are constants.

PiperOrigin-RevId: 282010703
2019-11-22 12:14:18 -08:00
Lucy Fox 36e8fa84ab Small formatting fix in Tutorial Ch2.
PiperOrigin-RevId: 281998069
2019-11-22 11:04:36 -08:00
Jean-Michel Gorius 104777d8e6 Unify vector op names with other dialects.
Change vector op names from VectorFooOp to Vector_FooOp and from
vector::VectorFooOp to vector::FooOp.

Closes tensorflow/mlir#257

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/257 from Kayjukh:master dfc3a0e04114885aaec8740d5951d6984d6e1577
PiperOrigin-RevId: 281967461
2019-11-22 08:24:49 -08:00
Lucy Fox f7906c9211 Add more detail about locations in Chapter 2 of tutorial.
Resolves issue 241 (tensorflow/mlir#241).

PiperOrigin-RevId: 281867192
2019-11-21 17:26:02 -08:00
Nicolas Vasilache 6755543af5 Move Linalg Transforms that are actually Conversions - NFC
PiperOrigin-RevId: 281844602
2019-11-21 15:41:32 -08:00
River Riddle c35378003c Add support for using the ODS result names as the Asm result names for multi-result operations.
This changes the OpDefinitionsGen to automatically add the OpAsmOpInterface for operations with multiple result groups using the provided ODS names. We currently limit the generation to multi-result ops, as most single-result operations don't have an interesting name (result/output/etc.). An example is shown below:
// The following operation:
def MyOp : ... {
  let results = (outs AnyType:$first, Variadic<AnyType>:$middle, AnyType);
}

// May now be printed as:
%first, %middle:2, %0 = "my.op" ...

PiperOrigin-RevId: 281834156
2019-11-21 14:55:46 -08:00
Christian Sigg d7c17195a4 Change CUDA tests to use print_memref.
Swap dimensions in all-reduce-op test.

PiperOrigin-RevId: 281791744
2019-11-21 11:26:36 -08:00
River Riddle c621e64150 NFC: Add wrappers around DenseIntElementsAttr/DenseFPElementsAttr::get to avoid the need to cast.
This avoids the need to cast back to the derived type when calling get, i.e. removes the need to do DenseIntElementsAttr::get(...).cast<DenseIntElementsAttr>().

PiperOrigin-RevId: 281772163
2019-11-21 10:08:21 -08:00