This exposes the basic functionality (create, nest, addPass, run) of
the PassManager through the C API in a new header: `include/mlir-c/Pass.h`.
In order to exercise it in the unit test, a basic TableGen backend is
also provided to generate a simple C wrapper around each pass
constructor. It is used to expose the libTransforms passes to the C API.
Reviewed By: stellaraccident, ftynse
Differential Revision: https://reviews.llvm.org/D90667
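A minimal sketch of driving the new PassManager C API, assuming the TableGen-generated constructor is named mlirCreateTransformsCSE, the generated declarations live in mlir-c/Transforms.h, and mlirPassManagerRun returns an MlirLogicalResult (later revisions of the header may rename these):

  #include "mlir-c/IR.h"
  #include "mlir-c/Pass.h"
  #include "mlir-c/Support.h"
  #include "mlir-c/Transforms.h" /* assumed home of the generated pass constructors */

  /* Runs CSE on a module and reports whether the pipeline succeeded. */
  static int runCSE(MlirContext ctx, MlirModule module) {
    MlirPassManager pm = mlirPassManagerCreate(ctx);
    /* mlirCreateTransformsCSE() is one of the TableGen-generated C wrappers
       around the libTransforms pass constructors (name assumed). */
    mlirPassManagerAddOwnedPass(pm, mlirCreateTransformsCSE());
    /* Nested pipelines are anchored on an operation name via
       mlirPassManagerGetNestedUnder(pm, ...); omitted here for brevity. */
    MlirLogicalResult result = mlirPassManagerRun(pm, module);
    mlirPassManagerDestroy(pm);
    return mlirLogicalResultIsSuccess(result);
  }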
This header was an early attempt at a crude C API for bindings,
but it isn't used and is redundant with the new API. At this point it only
adds confusion.
Differential Revision: https://reviews.llvm.org/D90643
Leaking macros isn't good practice when defining headers. This
requires duplicating the macro definition in every header, but
that seems like a better tradeoff right now.
Differential Revision: https://reviews.llvm.org/D90633
* Removes index-based insertion. All insertion now happens through the insertion point.
* Introduces thread-local context managers for implicit creation relative to an insertion point.
* Introduces (but does not yet use) binding the Context to the thread-local context stack. The intent is to refactor all methods to take the context optionally and have them use the default if available.
* Adds C APIs for mlirOperationGetParentOperation(), mlirOperationGetBlock() and mlirBlockGetTerminator().
* Removes an overly constraining assert in PyOperation creation. There is already a TODO to rework the keepAlive field that it was guarding, and without the assert, things are no worse than the current state.
Differential Revision: https://reviews.llvm.org/D90368
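A small C sketch of the navigation entry points listed above, paired with the usual static inline null checks:

  #include "mlir-c/IR.h"

  /* Walks from an operation to its owning block, its parent operation, and
     the block's terminator. */
  static void inspectParents(MlirOperation op) {
    MlirBlock block = mlirOperationGetBlock(op);                /* null if detached */
    MlirOperation parent = mlirOperationGetParentOperation(op); /* null if top-level */
    if (!mlirBlockIsNull(block)) {
      MlirOperation term = mlirBlockGetTerminator(block);       /* null if no terminator */
      (void)term;
    }
    if (mlirOperationIsNull(parent)) {
      /* op is a top-level or detached operation */
    }
  }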
Getting the body of a Module is a common need, which justifies a
dedicated accessor instead of forcing users to perform the
region->blocks->front unwrapping manually.
Differential Revision: https://reviews.llvm.org/D90287
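For illustration, a sketch of the accessor collapsing the manual unwrapping into a single call (mlirBlockGetFirstOperation is used only to show follow-up navigation):

  #include "mlir-c/IR.h"

  static MlirOperation firstTopLevelOp(MlirModule module) {
    /* Previously: module op -> region 0 -> first block. Now one accessor: */
    MlirBlock body = mlirModuleGetBody(module);
    return mlirBlockGetFirstOperation(body); /* null if the module is empty */
  }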
This patch provides a C API for MLIR affine expressions.
- Implement C API for the methods of the AffineExpr class.
- Implement C API for the methods of the derived classes (AffineBinaryOpExpr, AffineDimExpr, AffineSymbolExpr, and AffineConstantExpr).
Differential Revision: https://reviews.llvm.org/D89856
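A short C sketch of the expression constructors, assuming the declarations live in mlir-c/AffineExpr.h:

  #include "mlir-c/AffineExpr.h"
  #include "mlir-c/IR.h"

  /* Builds d0 + s0 * 2 and dumps it. */
  static void buildExpr(MlirContext ctx) {
    MlirAffineExpr d0 = mlirAffineDimExprGet(ctx, 0);
    MlirAffineExpr s0 = mlirAffineSymbolExprGet(ctx, 0);
    MlirAffineExpr c2 = mlirAffineConstantExprGet(ctx, 2);
    MlirAffineExpr expr = mlirAffineAddExprGet(d0, mlirAffineMulExprGet(s0, c2));
    mlirAffineExprDump(expr); /* prints: d0 + s0 * 2 */
  }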
* Adds a new MlirOpPrintingFlags type and supporting accessors.
* Adds a new mlirOperationPrintWithFlags function.
* Adds a full-featured Python Operation.print method with all options and the ability to print directly to files/stdout in text or binary.
* Adds an Operation.get_asm which delegates to print and returns a str or bytes.
* Reworks Operation.__str__ to be based on get_asm.
Differential Revision: https://reviews.llvm.org/D89848
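A C-level sketch of the flags machinery that the Python method builds on; the MlirStringRef-based callback form is assumed here:

  #include <stdio.h>
  #include "mlir-c/IR.h"

  static void printToStdout(MlirStringRef chunk, void *userData) {
    (void)userData;
    fwrite(chunk.data, 1, chunk.length, stdout);
  }

  static void printGeneric(MlirOperation op) {
    MlirOpPrintingFlags flags = mlirOpPrintingFlagsCreate();
    mlirOpPrintingFlagsPrintGenericOpForm(flags);          /* generic op form */
    mlirOpPrintingFlagsElideLargeElementsAttrs(flags, 16); /* elide attrs with >16 elements */
    mlirOperationPrintWithFlags(op, flags, printToStdout, /*userData=*/NULL);
    mlirOpPrintingFlagsDestroy(flags);
  }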
Values are ubiquitous in the IR; in particular, block arguments and operation
results are Values. Define Python classes for BlockArgument, OpResult and their
common ancestor Value. Define pseudo-container classes for lists of block
arguments and operation results, and use these containers to access the
corresponding values in blocks and operations.
Differential Revision: https://reviews.llvm.org/D89778
The Value hierarchy consists of BlockArgument and OpResult, both of which
derive from Value. Introduce IsA functions and functions specific to each class,
similarly to other class hierarchies. Also, introduce functions for
pointer-comparison of Block and Operation that are necessary for testing and
are generally useful.
Reviewed By: stellaraccident, mehdi_amini
Differential Revision: https://reviews.llvm.org/D89714
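A sketch of the IsA-based dispatch and the pointer-comparison helpers in C (index getters assumed to return intptr_t):

  #include <stdbool.h>
  #include <stdint.h>
  #include "mlir-c/IR.h"

  /* Dispatches on the concrete kind of a Value. */
  static void describeValue(MlirValue value) {
    if (mlirValueIsAOpResult(value)) {
      MlirOperation owner = mlirOpResultGetOwner(value);
      intptr_t index = mlirOpResultGetResultNumber(value);
      (void)owner; (void)index;
    } else if (mlirValueIsABlockArgument(value)) {
      MlirBlock owner = mlirBlockArgumentGetOwner(value);
      intptr_t index = mlirBlockArgumentGetArgNumber(value);
      (void)owner; (void)index;
    }
  }

  /* The pointer-comparison helpers introduced alongside. */
  static bool sameBlockAndOp(MlirBlock b1, MlirBlock b2,
                             MlirOperation o1, MlirOperation o2) {
    return mlirBlockEqual(b1, b2) && mlirOperationEqual(o1, o2);
  }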
* Also fixes the const-ness of the various DenseElementsAttr construction functions.
* Both issues were identified when trying to use the DenseElementsAttr functions.
Differential Revision: https://reviews.llvm.org/D89517
* Extends Context/Operation interning to cover Module as well.
* Implements Module.context, Attribute.context, Type.context, and Location.context back-references (facilitates testing and was also on the TODO list).
* Adds a method to create an empty Module.
* Discovered to be missing in npcomp.
Differential Revision: https://reviews.llvm.org/D89294
* New functions: mlirOperationSetAttributeByName, mlirOperationRemoveAttributeByName
* Also adds some *IsNull checks and standardizes the rest to use "static inline" form, which makes them all non-opaque and not part of the ABI (which is desirable).
* Changes needed to resolve TODOs in npcomp PyTorch capture.
Differential Revision: https://reviews.llvm.org/D88946
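A sketch of the new mutators, assuming the MlirStringRef-based signatures of the current header (earlier revisions took a plain C string); the attribute header name is also an assumption:

  #include "mlir-c/IR.h"
  #include "mlir-c/BuiltinAttributes.h" /* StandardAttributes.h in older trees */

  static void toggleAttr(MlirContext ctx, MlirOperation op) {
    MlirStringRef name = mlirStringRefCreateFromCString("my.marker");
    /* Attach a boolean attribute, query it back, then remove it. */
    mlirOperationSetAttributeByName(op, name, mlirBoolAttrGet(ctx, 1));
    MlirAttribute attr = mlirOperationGetAttributeByName(op, name);
    if (!mlirAttributeIsNull(attr)) {
      /* ... use the attribute ... */
    }
    mlirOperationRemoveAttributeByName(op, name); /* true only if it was present */
  }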
Add basic support for registering diagnostic handlers with the context
(actually, the diagnostic engine contained in the context) and processing
diagnostic messages from the C API.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D88736
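A C sketch of attaching and detaching a handler; the attach signature with a user-data deleter follows the current mlir-c/Diagnostics.h and may postdate this exact revision:

  #include <stdio.h>
  #include "mlir-c/Diagnostics.h"
  #include "mlir-c/IR.h"

  static void writeChunk(MlirStringRef chunk, void *userData) {
    fwrite(chunk.data, 1, chunk.length, (FILE *)userData);
  }

  static MlirLogicalResult handleDiagnostic(MlirDiagnostic diag, void *userData) {
    (void)userData;
    fprintf(stderr, "severity %d: ", (int)mlirDiagnosticGetSeverity(diag));
    mlirDiagnosticPrint(diag, writeChunk, stderr);
    fputc('\n', stderr);
    return mlirLogicalResultSuccess(); /* mark the diagnostic as handled */
  }

  static void withHandler(MlirContext ctx) {
    MlirDiagnosticHandlerID id = mlirContextAttachDiagnosticHandler(
        ctx, handleDiagnostic, /*userData=*/NULL, /*deleteUserData=*/NULL);
    /* ... build or verify IR that may emit diagnostics ... */
    mlirContextDetachDiagnosticHandler(ctx, id);
  }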
Based on the PyAttribute and PyConcreteAttribute classes, this patch implements the bindings of the Float Attribute, Integer Attribute and Bool Attribute subclasses.
This patch also defines the `mlirFloatAttrDoubleGetChecked` C API, which is bound to the `FloatAttr.get_typed` Python method.
Differential Revision: https://reviews.llvm.org/D88531
* By providing stable, C-accessible definitions for bridging MLIR Python<->C APIs, we eliminate inter-extension dependencies (i.e. they can all share a diamond dependency on the MLIR C API).
* Just provides accessors for the context and module right now.
* Needed in npcomp in ~a week or so for high-level Torch APIs.
Differential Revision: https://reviews.llvm.org/D88426
- Add a minimalist C API for mlir::Dialect.
- Allow one to query the context about registered and loaded dialects.
- Add API for loading dialects.
- Provide functions to register the Standard dialect.
When used naively, this will require registering each dialect separately. Once
we have more than one dialect exposed, we can add variadic macros that expand to
individual calls.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D88162
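A sketch of the query and load entry points; the Standard dialect is assumed to have been registered with the context beforehand (via the dedicated registration function this patch provides):

  #include <stdio.h>
  #include "mlir-c/IR.h"

  static void inspectDialects(MlirContext ctx) {
    printf("registered: %ld, loaded: %ld\n",
           (long)mlirContextGetNumRegisteredDialects(ctx),
           (long)mlirContextGetNumLoadedDialects(ctx));
    /* Loads the dialect (if registered) and returns a handle to it. */
    MlirDialect std = mlirContextGetOrLoadDialect(
        ctx, mlirStringRefCreateFromCString("std"));
    if (!mlirDialectIsNull(std)) {
      MlirStringRef ns = mlirDialectGetNamespace(std);
      printf("loaded dialect: %.*s\n", (int)ns.length, ns.data);
    }
  }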
Blocks in a region and operations in a block are organized in a linked list.
The C API only provides functions to append or to insert elements at a
specified numeric position in the list. The latter is expensive since it
requires traversing the list. Add low-cost insert before/after functionality
that relies on the iplist elements being convertible to iterators.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D88148
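A sketch of the new insertion calls; newOp is assumed to be a detached operation owned by the caller (for example freshly built with mlirOperationCreate):

  #include "mlir-c/IR.h"

  /* O(1) insertion relative to an existing operation, no list traversal. */
  static void insertAfterAnchor(MlirBlock block, MlirOperation anchor,
                                MlirOperation newOp) {
    mlirBlockInsertOwnedOperationAfter(block, anchor, newOp);
    /* mlirBlockInsertOwnedOperationBefore mirrors this; the same pattern
       exists for blocks in a region via mlirRegionInsertOwnedBlockAfter
       and mlirRegionInsertOwnedBlockBefore. */
  }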
* Removes the half-completed prior attempt at region/block mutation in favor of new approach to ownership.
* Will re-add mutation more correctly in a follow-on.
* Eliminates the detached state on blocks and regions, simplifying the ownership hierarchy.
* Adds both iterator and index based access at each level.
Differential Revision: https://reviews.llvm.org/D87982
* Per thread https://llvm.discourse.group/t/revisiting-ownership-and-lifetime-in-the-python-bindings/1769
* Reworks contexts so it is always possible to get back to a py::object that holds the reference count for an arbitrary MlirContext.
* Retrofits some of the base classes to automatically take a reference to the context, eliminating keep_alives.
* More needs to be done, as discussed, when moving on to the operations/blocks/regions.
Differential Revision: https://reviews.llvm.org/D87886
This patch provides a C API for MLIR affine maps.
- Implement C API for the AffineMap class.
- Add Utils.h to include/mlir/CAPI/, and move the definition of the CallbackOstream to Utils.h to make sure mlirAffineMapPrint works correctly.
- Add TODO for exposing the C API related to AffineExpr and mutable affine map.
Differential Revision: https://reviews.llvm.org/D87617
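A short C sketch of the map constructors, assuming the declarations live in mlir-c/AffineMap.h:

  #include "mlir-c/AffineMap.h"
  #include "mlir-c/IR.h"

  static void buildMaps(MlirContext ctx) {
    MlirAffineMap empty = mlirAffineMapEmptyGet(ctx);             /* () -> () */
    MlirAffineMap id2 = mlirAffineMapMultiDimIdentityGet(ctx, 2); /* (d0, d1) -> (d0, d1) */
    if (mlirAffineMapIsIdentity(id2))
      mlirAffineMapDump(id2); /* prints (d0, d1) -> (d0, d1) to stderr */
    (void)empty;
  }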
Numerous MLIR functions return instances of `StringRef` to refer to a
non-owning fragment of a string (usually owned by the context). This is a
relatively simple class that is defined in LLVM. Provide a simple wrapper in
the MLIR C API that contains the pointer and length of the string fragment and
use it for Standard attribute functions that return StringRef instead of the
previous, callback-based mechanism.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D87677
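The wrapper is a plain (data, length) pair; a sketch of producing and consuming it, with mlirStringAttrGetValue standing in for a Standard attribute getter that now returns the fragment directly (the attribute header name is an assumption):

  #include <stdio.h>
  #include "mlir-c/IR.h"
  #include "mlir-c/BuiltinAttributes.h" /* StandardAttributes.h in older trees */

  static void showString(MlirAttribute stringAttr) {
    /* The data is not owned and not guaranteed to be null-terminated. */
    MlirStringRef value = mlirStringAttrGetValue(stringAttr);
    printf("%.*s\n", (int)value.length, value.data);

    /* Wrapping a C string for APIs that take MlirStringRef: */
    MlirStringRef name = mlirStringRefCreateFromCString("example");
    (void)name;
  }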
Based on the PyType and PyConcreteType classes, this patch implements the bindings of Shaped Type, Tensor Type and MemRef Type subclasses.
The Tensor Type and MemRef Type are bound separately as ranked and unranked variants.
This patch adds the ***GetChecked C APIs to make sure the Python side can get either a valid type or a nullptr.
Shaped type is not a standard type itself; it is the base class for vectors, memrefs and tensors. This patch binds the PyShapedType class as the base class of the Vector Type, Tensor Type and MemRef Type subclasses.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D87091
* This is just enough to create regions/blocks and iterate over them.
* Does not yet implement the preferred iteration strategy (python pseudo containers).
* Refinements need to come after doing basic mappings of operations and values so that the whole hierarchy can be used.
Differential Revision: https://reviews.llvm.org/D86683
* Generic mlir.ir.Attribute class.
* First standard attribute (mlir.ir.StringAttr), following the same pattern as generic vs standard types.
* NamedAttribute class.
Differential Revision: https://reviews.llvm.org/D86250
Provide C API for MLIR standard attributes. Since standard attributes live
under lib/IR in core MLIR, place the C APIs in the IR library as well (standard
ops will go in a separate library).
Affine map and integer set attributes are only exposed as placeholder types
with IsA support due to the lack of C APIs for the corresponding types.
Integer and floating point attribute APIs expecting APInt and APFloat are not
exposed pending decision on how to support APInt and APFloat.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D86143
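A small C sketch of the integer and float attribute entry points (the APInt/APFloat-based variants mentioned above are deliberately not shown, since they are not exposed); the header names are assumptions:

  #include "mlir-c/IR.h"
  #include "mlir-c/BuiltinAttributes.h" /* StandardAttributes.h in older trees */
  #include "mlir-c/BuiltinTypes.h"      /* StandardTypes.h in older trees */

  static void buildAttrs(MlirContext ctx) {
    MlirAttribute i = mlirIntegerAttrGet(mlirIntegerTypeGet(ctx, 64), 42);
    MlirAttribute f = mlirFloatAttrDoubleGet(ctx, mlirF64TypeGet(ctx), 2.5);
    if (mlirAttributeIsAInteger(i))
      (void)mlirIntegerAttrGetValueInt(i);  /* 42 */
    if (mlirAttributeIsAFloat(f))
      (void)mlirFloatAttrGetValueDouble(f); /* 2.5 */
  }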
* The binding for Type is trivial and should be non-controversial.
* The way that I define the IntegerType should serve as a pattern for what I want to do next.
* I propose defining the rest of the standard types in this fashion and then generalizing for dialect types as necessary.
* Essentially, creating/accessing a concrete Type (vs interacting with the string form) is done by "casting" to the concrete type (i.e. IntegerType can be constructed with a Type and will throw if the cast is illegal).
* This deviates from some of our previous discussions about global objects but I think produces a usable API and we should go this way.
Differential Revision: https://reviews.llvm.org/D86179
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead, Dialects are only loaded explicitly
on demand:
- the Parser lazily loads Dialects into the context as it encounters them
during parsing. This is the only purpose of registering dialects without loading
them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager loads the Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example, in the Toy tutorial
the compiler only needs to load the Toy dialect in the Context; all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can appear in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
registry.insert<mlir::standalone::StandaloneDialect>();
registry.insert<mlir::StandardOpsDialect>();
Only operations from these two dialects can appear in the input file. To include
all of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
Differential Revision: https://reviews.llvm.org/D85622
Provide C API for MLIR standard types. Since standard types live under lib/IR
in core MLIR, place the C APIs in the IR library as well (standard ops will go
into a separate library). This also defines a placeholder for affine maps that
are necessary to construct a memref, but are not yet exposed to the C API.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D86094
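A C sketch of a few standard type constructors and queries (memref creation is omitted, since affine maps are only a placeholder at this point); the header name is an assumption:

  #include <stdint.h>
  #include "mlir-c/IR.h"
  #include "mlir-c/BuiltinTypes.h" /* StandardTypes.h in older trees */

  static void buildTypes(MlirContext ctx) {
    MlirType i32 = mlirIntegerTypeGet(ctx, 32);
    MlirType f32 = mlirF32TypeGet(ctx);
    MlirType idx = mlirIndexTypeGet(ctx);

    if (mlirTypeIsAInteger(i32))
      (void)mlirIntegerTypeGetWidth(i32); /* 32 */

    int64_t shape[] = {2, 3};
    MlirType vec = mlirVectorTypeGet(2, shape, f32); /* vector<2x3xf32> */
    (void)idx; (void)vec;
  }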
Provide printing functions for most IR objects in the C API (except Region,
which does not have a `print` function, and Module, which is expected to be
printed as an Operation instead). The printing is based on a callback that is
called with chunks of the string representation and forwarded user-defined data.
Reviewed By: stellaraccident, Jing, mehdi_amini
Differential Revision: https://reviews.llvm.org/D85748
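A sketch of the callback protocol: each printed chunk is handed to the callback along with the user data pointer supplied at the call site (the MlirStringRef-based callback form is assumed here):

  #include <stdio.h>
  #include "mlir-c/IR.h"

  /* Forwards each printed chunk to a FILE* passed through userData. */
  static void writeChunk(MlirStringRef chunk, void *userData) {
    fwrite(chunk.data, 1, chunk.length, (FILE *)userData);
  }

  static void printAll(MlirOperation op, MlirType type, MlirAttribute attr) {
    mlirOperationPrint(op, writeChunk, stdout);
    mlirTypePrint(type, writeChunk, stdout);
    mlirAttributePrint(attr, writeChunk, stdout);
  }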
Using intptr_t is the consensus for the MLIR C API, but the change was missing
from 75f239e975 (which was using unsigned initially) due to a
misrebase.
Reviewed By: stellaraccident, mehdi_amini
Differential Revision: https://reviews.llvm.org/D85751
Introduce an initial version of C API for MLIR core IR components: Value, Type,
Attribute, Operation, Region, Block, Location. These APIs allow for both
inspection and creation of the IR in the generic form and are intended for
wrapping in high-level library- and language-specific constructs. At this point, there
is no stability guarantee provided for the API.
Reviewed By: stellaraccident, lattner
Differential Revision: https://reviews.llvm.org/D83310
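A minimal end-to-end sketch using these core handles (mlirModuleCreateEmpty appears in a slightly later revision, so treat the exact set of constructors as an assumption):

  #include "mlir-c/IR.h"

  int main(void) {
    MlirContext ctx = mlirContextCreate();
    MlirLocation loc = mlirLocationUnknownGet(ctx);
    MlirModule module = mlirModuleCreateEmpty(loc);

    /* A Module is printable through its Operation view. */
    mlirOperationDump(mlirModuleGetOperation(module));

    mlirModuleDestroy(module);
    mlirContextDestroy(ctx);
    return 0;
  }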