llvm-project/mlir
Lei Zhang 765b77cc70 Better support for attribute wrapper classes when getting def name
Unless we explicitly name a template instantiation in a .td file, its
def name will be "anonymous_<number>". We typically give a base-level
Attr template instantiation a name by writing `def AnAttr : Attr<...>`.
But when `AnAttr` is further wrapped in classes like OptionalAttr, the
name is lost unless explicitly def'ed again. Such implicitly named
template instantiations are fairly common when writing op definitions,
since the wrapper classes essentially just attach more information to
the attribute. Without a proper way to trace back to the original
attribute's def name, consumers that want to handle attributes
according to their types can run into problems.

Previously we handled OptionalAttr and DefaultValuedAttr specifically,
but Confined was not supported, and these wrappers can compose, as in
Confined<OptionalAttr<...>, [...]>. So this CL moves the baseAttr field
to the main Attr class (like isOptional) and sets it only on the
innermost wrapper class.

PiperOrigin-RevId: 258341646
2019-07-16 13:45:03 -07:00
bindings/python Rename FunctionAttr to SymbolRefAttr. 2019-07-12 08:43:42 -07:00
examples Remove lowerAffineConstructs and lowerControlFlow in favor of providing patterns. 2019-07-16 13:44:45 -07:00
g3doc Fix typos 2019-07-13 05:56:05 -07:00
include Better support for attribute wrapper classes when getting def name 2019-07-16 13:45:03 -07:00
lib Better support for attribute wrapper classes when getting def name 2019-07-16 13:45:03 -07:00
test Replace linalg.for by loop.for 2019-07-16 13:44:57 -07:00
tools Extract std.for std.if and std.terminator in their own dialect 2019-07-16 13:43:18 -07:00
unittests NFC: Rename Module to ModuleOp. 2019-07-10 10:11:21 -07:00
utils Add serialization and deserialization of FuncOps. To support this the 2019-07-12 17:43:03 -07:00
.clang-format [mlir] add .clang-format 2019-03-29 12:41:43 -07:00
CMakeLists.txt Add an mlir-cuda-runner tool. 2019-07-04 07:53:54 -07:00
CONTRIBUTING.md Merge pull request tensorflow/mlir#36 from pkanwar23:patch-2 2019-06-28 17:59:59 -07:00
LICENSE.TXT NFC: Rename Function to FuncOp. 2019-07-10 10:10:53 -07:00
README.md Update readme to reflect accepting contributions. 2019-07-02 10:28:48 -07:00

README.md

Multi-Level Intermediate Representation Overview

The MLIR project aims to define a common intermediate representation (IR) that will unify the infrastructure required to execute high performance machine learning models in TensorFlow and similar ML frameworks. This project will include the application of HPC techniques, along with integration of search algorithms like reinforcement learning. This project aims to reduce the cost to bring up new hardware, and improve usability for existing TensorFlow users.

Note that this repository contains the core of the MLIR framework. The TensorFlow compilers we are building on top of MLIR will be part of the main TensorFlow repository soon.

How to Contribute

Thank you for your interest in contributing to MLIR! If you'd like to contribute, be sure to review the contribution guidelines.

More resources

For more information on MLIR, please see:

  • Join the MLIR mailing list to hear about announcements and discussions. Please be mindful of the TensorFlow Code of Conduct, which pledges to foster an open and welcoming environment.

What is MLIR for?

MLIR is intended to be a hybrid IR which can support multiple different requirements in a unified infrastructure. For example, this includes:

  • The ability to represent all TensorFlow graphs, including dynamic shapes, the user-extensible op ecosystem, TensorFlow variables, etc.
  • Optimizations and transformations typically done on a TensorFlow graph, e.g. in Grappler.
  • Quantization and other graph transformations done on a TensorFlow graph or the TF Lite representation.
  • Representation of kernels for ML operations in a form suitable for optimization.
  • Ability to host high-performance-computing-style loop optimizations across kernels (fusion, loop interchange, tiling, etc.) and to transform memory layouts of data.
  • Code generation "lowering" transformations such as DMA insertion, explicit cache management, memory tiling, and vectorization for 1D and 2D register architectures.
  • Ability to represent target-specific operations, e.g. the MXU on TPUs.

MLIR is a common IR that also supports hardware specific operations. Thus, any investment into the infrastructure surrounding MLIR (e.g. the compiler passes that work on it) should yield good returns; many targets can use that infrastructure and will benefit from it.

MLIR is a powerful representation, but it also has non-goals. We do not try to support low-level machine code generation algorithms (like register allocation and instruction scheduling); they are a better fit for lower-level optimizers such as LLVM. Nor do we intend MLIR to be a source language that end-users would themselves write kernels in (analogous to CUDA C++). While we would love to see a kernel language happen someday, that will be an independent project that compiles down to MLIR.

Compiler infrastructure

We benefited from experience gained from building other IRs (HLO, LLVM and SIL) when building MLIR. We will directly adopt existing best practices, e.g. writing and maintaining an IR spec, building an IR verifier, providing the ability to dump and parse MLIR files to text, writing extensive unit tests with the FileCheck tool, and building the infrastructure as a set of modular libraries that can be combined in new ways. We plan to use the infrastructure developed by the XLA team for performance analysis and benchmarking.

Other lessons have been incorporated and integrated into the design in subtle ways. For example, LLVM has non-obvious design mistakes that prevent a multithreaded compiler from working on multiple functions in an LLVM module at the same time. MLIR solves these problems by having per-function constant pools and by making references explicit with function_ref.

Getting started with MLIR

The following instructions for compiling and testing MLIR assume that you have git, ninja, and a working C++ toolchain. In the future, we aim to align on the same level of platform support as LLVM. For now, MLIR has been tested on Linux and macOS, with recent versions of clang and with gcc 7.

# Clone LLVM and check out MLIR into LLVM's projects directory.
git clone https://github.com/llvm/llvm-project.git
git clone https://github.com/tensorflow/mlir llvm-project/llvm/projects/mlir
mkdir llvm-project/build
cd llvm-project/build
# Configure for the host target only, then build and run the MLIR tests.
cmake -G Ninja ../llvm -DLLVM_BUILD_EXAMPLES=ON -DLLVM_ENABLE_CXX1Y=Y -DLLVM_TARGETS_TO_BUILD="host"
cmake --build . --target check-mlir

To compile and test on Windows using Visual Studio 2017:

REM In a shell with the Visual Studio environment set up, e.g., by
REM invoking a command such as
REM   "<visual-studio-install>\Auxiliary\Build\vcvarsall.bat" x64
git clone https://github.com/llvm/llvm-project.git
git clone https://github.com/tensorflow/mlir llvm-project\llvm\projects\mlir
mkdir llvm-project\build
cd llvm-project\build
cmake ..\llvm -G "Visual Studio 15 2017 Win64" -DLLVM_BUILD_EXAMPLES=ON -DLLVM_ENABLE_CXX1Y=Y -DLLVM_TARGETS_TO_BUILD="host" -DCMAKE_BUILD_TYPE=Release -Thost=x64
cmake --build . --target check-mlir

As a starting point, you may try the tutorial on building a compiler for a Toy language.

MLIR talks