//===- OperationSupport.cpp -----------------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file contains out-of-line implementations of the support types that
// Operation and related classes build on top of.
//
//===----------------------------------------------------------------------===//

#include "mlir/IR/OperationSupport.h"
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/OpDefinition.h"
#include "llvm/ADT/BitVector.h"
#include <numeric>

using namespace mlir;

//===----------------------------------------------------------------------===//
// NamedAttrList
//===----------------------------------------------------------------------===//

NamedAttrList::NamedAttrList(ArrayRef<NamedAttribute> attributes) {
  assign(attributes.begin(), attributes.end());
}

NamedAttrList::NamedAttrList(DictionaryAttr attributes)
    : NamedAttrList(attributes ? attributes.getValue()
                               : ArrayRef<NamedAttribute>()) {
  dictionarySorted.setPointerAndInt(attributes, true);
}

NamedAttrList::NamedAttrList(const_iterator inStart, const_iterator inEnd) {
  assign(inStart, inEnd);
}

ArrayRef<NamedAttribute> NamedAttrList::getAttrs() const { return attrs; }

Optional<NamedAttribute> NamedAttrList::findDuplicate() const {
  Optional<NamedAttribute> duplicate =
      DictionaryAttr::findDuplicate(attrs, isSorted());
  // DictionaryAttr::findDuplicate will sort the list, so reset the sorted
  // state.
  if (!isSorted())
    dictionarySorted.setPointerAndInt(nullptr, true);
  return duplicate;
}

DictionaryAttr NamedAttrList::getDictionary(MLIRContext *context) const {
  if (!isSorted()) {
    DictionaryAttr::sortInPlace(attrs);
    dictionarySorted.setPointerAndInt(nullptr, true);
  }
  if (!dictionarySorted.getPointer())
    dictionarySorted.setPointer(DictionaryAttr::getWithSorted(context, attrs));
  return dictionarySorted.getPointer().cast<DictionaryAttr>();
}
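
// Example (illustrative sketch, not part of the upstream file): a client can
// accumulate attributes mutably in a NamedAttrList and only pay for
// dictionary uniquing once at the end; `builder` is an assumed OpBuilder.
//
//   NamedAttrList attrs;
//   attrs.append("alignment", builder.getI64IntegerAttr(16));
//   attrs.set("predicate", builder.getStringAttr("eq"));
//   DictionaryAttr dict = attrs.getDictionary(builder.getContext());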

/// Add an attribute with the specified name.
void NamedAttrList::append(StringRef name, Attribute attr) {
  append(StringAttr::get(attr.getContext(), name), attr);
}

/// Replaces the attributes with a new list of attributes.
void NamedAttrList::assign(const_iterator inStart, const_iterator inEnd) {
  DictionaryAttr::sort(ArrayRef<NamedAttribute>{inStart, inEnd}, attrs);
  dictionarySorted.setPointerAndInt(nullptr, true);
}

void NamedAttrList::push_back(NamedAttribute newAttribute) {
  if (isSorted())
    dictionarySorted.setInt(attrs.empty() || attrs.back() < newAttribute);
  dictionarySorted.setPointer(nullptr);
  attrs.push_back(newAttribute);
}

/// Return the specified attribute if present, null otherwise.
Attribute NamedAttrList::get(StringRef name) const {
  auto it = findAttr(*this, name);
  return it.second ? it.first->getValue() : Attribute();
}
Attribute NamedAttrList::get(StringAttr name) const {
  auto it = findAttr(*this, name);
  return it.second ? it.first->getValue() : Attribute();
}

/// Return the specified named attribute if present, None otherwise.
Optional<NamedAttribute> NamedAttrList::getNamed(StringRef name) const {
  auto it = findAttr(*this, name);
  return it.second ? *it.first : Optional<NamedAttribute>();
}
Optional<NamedAttribute> NamedAttrList::getNamed(StringAttr name) const {
  auto it = findAttr(*this, name);
  return it.second ? *it.first : Optional<NamedAttribute>();
}

/// If an attribute exists with the specified name, change it to the new
/// value. Otherwise, add a new attribute with the specified name/value.
Attribute NamedAttrList::set(StringAttr name, Attribute value) {
  assert(value && "attributes may never be null");

  // Look for an existing attribute with the given name, and set its value
  // in-place. Return the previous value of the attribute, if there was one.
  auto it = findAttr(*this, name);
  if (it.second) {
    // Update the existing attribute by swapping out the old value for the new
    // value. Return the old value.
    Attribute oldValue = it.first->getValue();
    if (it.first->getValue() != value) {
      it.first->setValue(value);

      // If the attributes have changed, the dictionary is invalidated.
      dictionarySorted.setPointer(nullptr);
    }
    return oldValue;
  }
  // Perform a string lookup to insert the new attribute into its sorted
  // position.
  if (isSorted())
    it = findAttr(*this, name.strref());
  attrs.insert(it.first, {name, value});
  // Invalidate the dictionary. Return null as there was no previous value.
  dictionarySorted.setPointer(nullptr);
  return Attribute();
}

Attribute NamedAttrList::set(StringRef name, Attribute value) {
  assert(value && "attributes may never be null");
  return set(mlir::StringAttr::get(value.getContext(), name), value);
}

Attribute
NamedAttrList::eraseImpl(SmallVectorImpl<NamedAttribute>::iterator it) {
  // Erasing does not affect the sorted property.
  Attribute attr = it->getValue();
  attrs.erase(it);
  dictionarySorted.setPointer(nullptr);
  return attr;
}

Attribute NamedAttrList::erase(StringAttr name) {
  auto it = findAttr(*this, name);
  return it.second ? eraseImpl(it.first) : Attribute();
}

Attribute NamedAttrList::erase(StringRef name) {
  auto it = findAttr(*this, name);
  return it.second ? eraseImpl(it.first) : Attribute();
}

NamedAttrList &
NamedAttrList::operator=(const SmallVectorImpl<NamedAttribute> &rhs) {
  assign(rhs.begin(), rhs.end());
  return *this;
}

NamedAttrList::operator ArrayRef<NamedAttribute>() const { return attrs; }

//===----------------------------------------------------------------------===//
// OperationState
//===----------------------------------------------------------------------===//

OperationState::OperationState(Location location, StringRef name)
    : location(location), name(name, location->getContext()) {}

OperationState::OperationState(Location location, OperationName name)
    : location(location), name(name) {}

OperationState::OperationState(Location location, OperationName name,
                               ValueRange operands, TypeRange types,
                               ArrayRef<NamedAttribute> attributes,
                               BlockRange successors,
                               MutableArrayRef<std::unique_ptr<Region>> regions)
    : location(location), name(name),
      operands(operands.begin(), operands.end()),
      types(types.begin(), types.end()),
      attributes(attributes.begin(), attributes.end()),
      successors(successors.begin(), successors.end()) {
  for (std::unique_ptr<Region> &r : regions)
    this->regions.push_back(std::move(r));
}
OperationState::OperationState(Location location, StringRef name,
                               ValueRange operands, TypeRange types,
                               ArrayRef<NamedAttribute> attributes,
                               BlockRange successors,
                               MutableArrayRef<std::unique_ptr<Region>> regions)
    : OperationState(location, OperationName(name, location.getContext()),
                     operands, types, attributes, successors, regions) {}

void OperationState::addOperands(ValueRange newOperands) {
  operands.append(newOperands.begin(), newOperands.end());
}

void OperationState::addSuccessors(BlockRange newSuccessors) {
  successors.append(newSuccessors.begin(), newSuccessors.end());
}

Region *OperationState::addRegion() {
  regions.emplace_back(new Region);
  return regions.back().get();
}

void OperationState::addRegion(std::unique_ptr<Region> &&region) {
  regions.push_back(std::move(region));
}

void OperationState::addRegions(
    MutableArrayRef<std::unique_ptr<Region>> regions) {
  for (std::unique_ptr<Region> &region : regions)
    addRegion(std::move(region));
}
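
// Example (illustrative sketch): because OperationState owns its regions, a
// region can be populated with blocks before the operation exists and is
// moved into the operation when it is created. The op name, `loc`, `builder`,
// and `initValues` below are assumed placeholders.
//
//   OperationState state(loc, "mydialect.loop");
//   state.addOperands(initValues);
//   state.addTypes(builder.getIndexType());
//   Region *body = state.addRegion();
//   body->push_back(new Block());
//   Operation *op = Operation::create(state);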

//===----------------------------------------------------------------------===//
// OperandStorage
//===----------------------------------------------------------------------===//

detail::OperandStorage::OperandStorage(Operation *owner,
                                       OpOperand *trailingOperands,
                                       ValueRange values)
    : isStorageDynamic(false), operandStorage(trailingOperands) {
  numOperands = capacity = values.size();
  for (unsigned i = 0; i < numOperands; ++i)
    new (&operandStorage[i]) OpOperand(owner, values[i]);
}

detail::OperandStorage::~OperandStorage() {
  for (auto &operand : getOperands())
    operand.~OpOperand();

  // If the storage is dynamic, deallocate it.
  if (isStorageDynamic)
    free(operandStorage);
}

/// Replace the operands contained in the storage with the ones provided in
/// 'values'.
void detail::OperandStorage::setOperands(Operation *owner, ValueRange values) {
  MutableArrayRef<OpOperand> storageOperands = resize(owner, values.size());
  for (unsigned i = 0, e = values.size(); i != e; ++i)
    storageOperands[i].set(values[i]);
}

/// Replace the operands beginning at 'start' and ending at 'start' + 'length'
/// with the ones provided in 'operands'. 'operands' may be smaller or larger
/// than the range pointed to by 'start'+'length'.
void detail::OperandStorage::setOperands(Operation *owner, unsigned start,
                                         unsigned length, ValueRange operands) {
  // If the new size is the same, we can update inplace.
  unsigned newSize = operands.size();
  if (newSize == length) {
    MutableArrayRef<OpOperand> storageOperands = getOperands();
    for (unsigned i = 0, e = length; i != e; ++i)
      storageOperands[start + i].set(operands[i]);
    return;
  }
  // If the new size is smaller, remove the extra operands and set the rest
  // inplace.
  if (newSize < length) {
    eraseOperands(start + operands.size(), length - newSize);
    setOperands(owner, start, newSize, operands);
    return;
  }
  // Otherwise, the new size is greater so we need to grow the storage.
  auto storageOperands = resize(owner, size() + (newSize - length));

  // Shift operands to the right to make space for the new operands.
  unsigned rotateSize = storageOperands.size() - (start + length);
  auto rbegin = storageOperands.rbegin();
  std::rotate(rbegin, std::next(rbegin, newSize - length), rbegin + rotateSize);

  // Update the operands inplace.
  for (unsigned i = 0, e = operands.size(); i != e; ++i)
    storageOperands[start + i].set(operands[i]);
}
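
// Example (illustrative sketch): clients normally reach this storage through
// Operation; replacing a sub-range may grow or shrink the operand list in
// place. `op` and `replacementValues` are assumed placeholders.
//
//   op->setOperands(/*start=*/1, /*length=*/2, replacementValues);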

/// Erase an operand held by the storage.
void detail::OperandStorage::eraseOperands(unsigned start, unsigned length) {
  MutableArrayRef<OpOperand> operands = getOperands();
  assert((start + length) <= operands.size());
  numOperands -= length;

  // Shift all operands down if the operand to remove is not at the end.
  if (start != numOperands) {
    auto *indexIt = std::next(operands.begin(), start);
    std::rotate(indexIt, std::next(indexIt, length), operands.end());
  }
  for (unsigned i = 0; i != length; ++i)
    operands[numOperands + i].~OpOperand();
}

void detail::OperandStorage::eraseOperands(
    const llvm::BitVector &eraseIndices) {
  MutableArrayRef<OpOperand> operands = getOperands();
  assert(eraseIndices.size() == operands.size());

  // Check that at least one operand is erased.
  int firstErasedIndice = eraseIndices.find_first();
  if (firstErasedIndice == -1)
    return;

  // Shift all of the removed operands to the end, and destroy them.
  numOperands = firstErasedIndice;
  for (unsigned i = firstErasedIndice + 1, e = operands.size(); i < e; ++i)
    if (!eraseIndices.test(i))
      operands[numOperands++] = std::move(operands[i]);
  for (OpOperand &operand : operands.drop_front(numOperands))
    operand.~OpOperand();
}
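
// Example (illustrative sketch, assuming the bulk-erase entry point on
// Operation that forwards to this storage): several operands can be dropped
// in a single pass. `op` is an assumed placeholder.
//
//   llvm::BitVector eraseSet(op->getNumOperands());
//   eraseSet.set(0);
//   eraseSet.set(3);
//   op->eraseOperands(eraseSet);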

/// Resize the storage to the given size. Returns the array containing the new
/// operands.
MutableArrayRef<OpOperand> detail::OperandStorage::resize(Operation *owner,
                                                          unsigned newSize) {
  // If the number of operands is less than or equal to the current amount, we
  // can just update in place.
  MutableArrayRef<OpOperand> origOperands = getOperands();
  if (newSize <= numOperands) {
    // If the new size is less than the current number of operands, remove any
    // extra operands.
    for (unsigned i = newSize; i != numOperands; ++i)
      origOperands[i].~OpOperand();
    numOperands = newSize;
    return origOperands.take_front(newSize);
  }

  // If the new size is within the original inline capacity, grow inplace.
  if (newSize <= capacity) {
    OpOperand *opBegin = origOperands.data();
    for (unsigned e = newSize; numOperands != e; ++numOperands)
      new (&opBegin[numOperands]) OpOperand(owner);
    return MutableArrayRef<OpOperand>(opBegin, newSize);
  }

  // Otherwise, we need to allocate a new storage.
  unsigned newCapacity =
      std::max(unsigned(llvm::NextPowerOf2(capacity + 2)), newSize);
  OpOperand *newOperandStorage =
      reinterpret_cast<OpOperand *>(malloc(sizeof(OpOperand) * newCapacity));

  // Move the current operands to the new storage.
  MutableArrayRef<OpOperand> newOperands(newOperandStorage, newSize);
  std::uninitialized_copy(std::make_move_iterator(origOperands.begin()),
                          std::make_move_iterator(origOperands.end()),
                          newOperands.begin());

  // Destroy the original operands.
  for (auto &operand : origOperands)
    operand.~OpOperand();

  // Initialize any new operands.
  for (unsigned e = newSize; numOperands != e; ++numOperands)
    new (&newOperands[numOperands]) OpOperand(owner);

  // If the current storage is dynamic, free it.
  if (isStorageDynamic)
    free(operandStorage);

  // Update the storage representation to use the new dynamic storage.
  operandStorage = newOperandStorage;
  capacity = newCapacity;
  isStorageDynamic = true;
  return newOperands;
}

//===----------------------------------------------------------------------===//
// Operation Value-Iterators
//===----------------------------------------------------------------------===//

//===----------------------------------------------------------------------===//
// OperandRange

unsigned OperandRange::getBeginOperandIndex() const {
  assert(!empty() && "range must not be empty");
  return base->getOperandNumber();
}

OperandRangeRange OperandRange::split(ElementsAttr segmentSizes) const {
  return OperandRangeRange(*this, segmentSizes);
}
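
// Example (illustrative sketch): for an operation that carries a segment-sizes
// attribute (e.g. one using the AttrSizedOperandSegments trait), the flat
// operand list can be viewed as per-group ranges. `op`, `segmentSizes`, and
// `process` are assumed placeholders.
//
//   OperandRangeRange groups = op->getOperands().split(segmentSizes);
//   for (OperandRange group : groups)
//     process(group);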

//===----------------------------------------------------------------------===//
// OperandRangeRange

OperandRangeRange::OperandRangeRange(OperandRange operands,
                                     Attribute operandSegments)
    : OperandRangeRange(OwnerT(operands.getBase(), operandSegments), 0,
                        operandSegments.cast<DenseElementsAttr>().size()) {}

OperandRange OperandRangeRange::join() const {
  const OwnerT &owner = getBase();
  auto sizeData = owner.second.cast<DenseElementsAttr>().getValues<uint32_t>();
  return OperandRange(owner.first,
                      std::accumulate(sizeData.begin(), sizeData.end(), 0));
}

OperandRange OperandRangeRange::dereference(const OwnerT &object,
                                            ptrdiff_t index) {
  auto sizeData = object.second.cast<DenseElementsAttr>().getValues<uint32_t>();
  uint32_t startIndex =
      std::accumulate(sizeData.begin(), sizeData.begin() + index, 0);
  return OperandRange(object.first + startIndex, *(sizeData.begin() + index));
}

//===----------------------------------------------------------------------===//
// MutableOperandRange

/// Construct a new mutable range from the given operand, operand start index,
/// and range length.
MutableOperandRange::MutableOperandRange(
    Operation *owner, unsigned start, unsigned length,
    ArrayRef<OperandSegment> operandSegments)
    : owner(owner), start(start), length(length),
      operandSegments(operandSegments.begin(), operandSegments.end()) {
  assert((start + length) <= owner->getNumOperands() && "invalid range");
}
MutableOperandRange::MutableOperandRange(Operation *owner)
    : MutableOperandRange(owner, /*start=*/0, owner->getNumOperands()) {}

/// Slice this range into a sub range, with the additional operand segment.
MutableOperandRange
MutableOperandRange::slice(unsigned subStart, unsigned subLen,
                           Optional<OperandSegment> segment) const {
  assert((subStart + subLen) <= length && "invalid sub-range");
  MutableOperandRange subSlice(owner, start + subStart, subLen,
                               operandSegments);
  if (segment)
    subSlice.operandSegments.push_back(*segment);
  return subSlice;
}

/// Append the given values to the range.
void MutableOperandRange::append(ValueRange values) {
  if (values.empty())
    return;
  owner->insertOperands(start + length, values);
  updateLength(length + values.size());
}

/// Assign this range to the given values.
void MutableOperandRange::assign(ValueRange values) {
  owner->setOperands(start, length, values);
  if (length != values.size())
    updateLength(/*newLength=*/values.size());
}

/// Assign the range to the given value.
void MutableOperandRange::assign(Value value) {
  if (length == 1) {
    owner->setOperand(start, value);
  } else {
    owner->setOperands(start, length, value);
    updateLength(/*newLength=*/1);
  }
}

/// Erase the operands within the given sub-range.
void MutableOperandRange::erase(unsigned subStart, unsigned subLen) {
  assert((subStart + subLen) <= length && "invalid sub-range");
  if (length == 0)
    return;
  owner->eraseOperands(start + subStart, subLen);
  updateLength(length - subLen);
}

/// Clear this range and erase all of the operands.
void MutableOperandRange::clear() {
  if (length != 0) {
    owner->eraseOperands(start, length);
    updateLength(/*newLength=*/0);
  }
}

/// Allow implicit conversion to an OperandRange.
MutableOperandRange::operator OperandRange() const {
  return owner->getOperands().slice(start, length);
}
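
// Example (illustrative sketch): a MutableOperandRange edits the underlying
// operation directly, so appending or erasing immediately updates `op` and
// keeps any registered segment-size attribute in sync. `op` and `moreValues`
// (a ValueRange) are assumed placeholders.
//
//   MutableOperandRange range(op);
//   range.append(moreValues);
//   range.erase(/*subStart=*/0, /*subLen=*/1);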

MutableOperandRangeRange
MutableOperandRange::split(NamedAttribute segmentSizes) const {
  return MutableOperandRangeRange(*this, segmentSizes);
}

/// Update the length of this range to the one provided.
void MutableOperandRange::updateLength(unsigned newLength) {
  int32_t diff = int32_t(newLength) - int32_t(length);
  length = newLength;

  // Update any of the provided segment attributes.
  for (OperandSegment &segment : operandSegments) {
    auto attr = segment.second.getValue().cast<DenseIntElementsAttr>();
    SmallVector<int32_t, 8> segments(attr.getValues<int32_t>());
    segments[segment.first] += diff;
    segment.second.setValue(
        DenseIntElementsAttr::get(attr.getType(), segments));
    owner->setAttr(segment.second.getName(), segment.second.getValue());
  }
}

//===----------------------------------------------------------------------===//
// MutableOperandRangeRange

MutableOperandRangeRange::MutableOperandRangeRange(
    const MutableOperandRange &operands, NamedAttribute operandSegmentAttr)
    : MutableOperandRangeRange(
          OwnerT(operands, operandSegmentAttr), 0,
          operandSegmentAttr.getValue().cast<DenseElementsAttr>().size()) {}

MutableOperandRange MutableOperandRangeRange::join() const {
  return getBase().first;
}

MutableOperandRangeRange::operator OperandRangeRange() const {
  return OperandRangeRange(
      getBase().first, getBase().second.getValue().cast<DenseElementsAttr>());
}

MutableOperandRange MutableOperandRangeRange::dereference(const OwnerT &object,
                                                          ptrdiff_t index) {
  auto sizeData =
      object.second.getValue().cast<DenseElementsAttr>().getValues<uint32_t>();
  uint32_t startIndex =
      std::accumulate(sizeData.begin(), sizeData.begin() + index, 0);
  return object.first.slice(
      startIndex, *(sizeData.begin() + index),
      MutableOperandRange::OperandSegment(index, object.second));
}

//===----------------------------------------------------------------------===//
// ResultRange
//===----------------------------------------------------------------------===//

ResultRange::ResultRange(OpResult result)
    : ResultRange(static_cast<detail::OpResultImpl *>(Value(result).getImpl()),
                  1) {}

ResultRange::use_range ResultRange::getUses() const {
  return {use_begin(), use_end()};
}
ResultRange::use_iterator ResultRange::use_begin() const {
  return use_iterator(*this);
}
ResultRange::use_iterator ResultRange::use_end() const {
  return use_iterator(*this, /*end=*/true);
}
ResultRange::user_range ResultRange::getUsers() {
  return {user_begin(), user_end()};
}
ResultRange::user_iterator ResultRange::user_begin() {
  return user_iterator(use_begin());
}
ResultRange::user_iterator ResultRange::user_end() {
  return user_iterator(use_end());
}
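
// UseIterator walks every use of every result in a ResultRange: it advances
// result-by-result and, within each result, use-by-use, skipping results
// that have no uses.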
ResultRange::UseIterator::UseIterator(ResultRange results, bool end)
    : it(end ? results.end() : results.begin()), endIt(results.end()) {
  // Only initialize the current use if there are results that can have uses.
  if (it != endIt)
    skipOverResultsWithNoUsers();
}

ResultRange::UseIterator &ResultRange::UseIterator::operator++() {
  // Increment over the uses of the current result; once the last use is
  // reached, move on to the next result that has uses.
  if (use != (*it).use_end())
    ++use;
  if (use == (*it).use_end()) {
    ++it;
    skipOverResultsWithNoUsers();
  }
  return *this;
}

void ResultRange::UseIterator::skipOverResultsWithNoUsers() {
  while (it != endIt && (*it).use_empty())
    ++it;

  // If we reached the end of the results, reset `use` to the default
  // (sentinel) value used for the end iterator; otherwise point at the first
  // use of the current result.
  if (it == endIt)
    use = {};
  else
    use = (*it).use_begin();
}

void ResultRange::replaceAllUsesWith(Operation *op) {
  replaceAllUsesWith(op->getResults());
}
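
// A short usage sketch, assuming two hypothetical operations `oldOp` and
// `newOp` with matching result counts (only replaceAllUsesWith and getUsers
// are taken from the code above):
//
//   oldOp->getResults().replaceAllUsesWith(newOp);
//   for (Operation *user : newOp->getResults().getUsers())
//     (void)user; // every operation that now consumes newOp's results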

[mlir][IR] Refactor the internal implementation of Value
The current implementation of Value involves a pointer-int pair with several different kinds of owners, i.e. BlockArgumentImpl*, Operation*, TrailingOpResult*. This design arose from the desire to save memory overhead for operations that have a very small number of results (generally 0-2). There are, unfortunately, many problematic aspects of the current implementation that make Values difficult to work with or just inefficient.
Operation result types are stored as a separate array on the Operation. This is very inefficient for many reasons: we use TupleType for multiple results, which can lead to huge amounts of memory usage if multi-result operations change types frequently (they do). It also means that simple methods like Value::getType/Value::setType now require complex logic to get to the desired type.
Value only has one pointer bit free, severely limiting the ability to use it in things like PointerUnion/PointerIntPair. Given that we store the kind of a Value along with the "owner" pointer, we only leave one bit free for users of Value. This creates situations where we end up nesting PointerUnions to be able to use Value in one.
As noted above, most of the methods in Value need to branch on at least 3 different cases, which is inefficient, possibly error-prone, and verbose. The current storage of results also creates problems for utilities like ValueRange/TypeRange, which want to efficiently store base pointers to ranges (of which Operation* isn't really useful as one).
This revision greatly simplifies the implementation of Value by introducing a new ValueImpl class. This class contains all of the state shared between the various derived value classes, i.e. the use list, the type, and the kind. This shared implementation class provides several large benefits:
* Most of the methods on Value are now branchless, and often one-liners.
* The "kind" of the value is now stored in ValueImpl instead of Value
This frees up all of Value's pointer bits, allowing users to take full advantage of PointerUnion/PointerIntPair/etc. It also allows for storing more operation results "inline", 6 now instead of 2, freeing up 1 word per new inline result.
* Operation result types are now stored in the result, instead of a side array
This drops the size of zero-result operations by 1 word. It also removes the memory-crushing use of TupleType for operation results (which could lead to hundreds of megabytes of "dead" TupleTypes in the context). This also allowed restructuring ValueRange, making it simpler and one word smaller.
This revision does come with two conceptual downsides:
* Operation::getResultTypes no longer returns an ArrayRef<Type>
This conceptually makes some usages slower, as the iterator increment is slightly more complex.
* OpResult::getOwner is slightly more expensive, as it now requires a little bit of arithmetic
From profiling, neither of the conceptual downsides has resulted in any perceivable hit to performance. Given the advantages of the new design, most compiles are slightly faster.
Differential Revision: https://reviews.llvm.org/D97804
2021-03-04 06:23:14 +08:00

//===----------------------------------------------------------------------===//
// ValueRange
//===----------------------------------------------------------------------===//

ValueRange::ValueRange(ArrayRef<Value> values)
    : ValueRange(values.data(), values.size()) {}
ValueRange::ValueRange(OperandRange values)
    : ValueRange(values.begin().getBase(), values.size()) {}
ValueRange::ValueRange(ResultRange values)
    : ValueRange(values.getBase(), values.size()) {}
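
// A minimal sketch of the implicit conversions these constructors enable,
// assuming a hypothetical callee `takesValues` and an existing
// `Operation *op` / `SmallVector<Value> storage` (only the ValueRange
// constructors themselves come from the code above):
//
//   void takesValues(ValueRange values);
//   takesValues(storage);            // ArrayRef<Value> -> ValueRange
//   takesValues(op->getOperands());  // OperandRange    -> ValueRange
//   takesValues(op->getResults());   // ResultRange     -> ValueRange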

/// See `llvm::detail::indexed_accessor_range_base` for details.
ValueRange::OwnerT ValueRange::offset_base(const OwnerT &owner,
                                           ptrdiff_t index) {
  if (const auto *value = owner.dyn_cast<const Value *>())
    return {value + index};
  if (auto *operand = owner.dyn_cast<OpOperand *>())
    return {operand + index};
  return owner.get<detail::OpResultImpl *>()->getNextResultAtOffset(index);
}
/// See `llvm::detail::indexed_accessor_range_base` for details.
Value ValueRange::dereference_iterator(const OwnerT &owner, ptrdiff_t index) {
  if (const auto *value = owner.dyn_cast<const Value *>())
    return value[index];
  if (auto *operand = owner.dyn_cast<OpOperand *>())
    return operand[index].get();
  return owner.get<detail::OpResultImpl *>()->getNextResultAtOffset(index);
}
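
// In both helpers above, the OwnerT PointerUnion distinguishes the three ways
// a ValueRange can be backed: a raw array of Values (const Value*), an
// operand list (OpOperand*), or an operation's inline results
// (detail::OpResultImpl*), matching the Value refactoring notes earlier in
// this section.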

//===----------------------------------------------------------------------===//
// Operation Equivalency
//===----------------------------------------------------------------------===//

llvm::hash_code OperationEquivalence::computeHash(
    Operation *op, function_ref<llvm::hash_code(Value)> hashOperands,
    function_ref<llvm::hash_code(Value)> hashResults, Flags flags) {
  // Hash operations based upon their:
  //   - Operation Name
  //   - Attributes
  //   - Result Types
  llvm::hash_code hash = llvm::hash_combine(
      op->getName(), op->getAttrDictionary(), op->getResultTypes());

  //   - Operands
  for (Value operand : op->getOperands())
    hash = llvm::hash_combine(hash, hashOperands(operand));
  //   - Results
  for (Value result : op->getResults())
    hash = llvm::hash_combine(hash, hashResults(result));
  return hash;
}
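
// A minimal sketch of calling computeHash with caller-provided value hashing;
// the lambdas are illustrative stand-ins (only computeHash and the
// IgnoreLocations flag come from the code in this file):
//
//   llvm::hash_code h = OperationEquivalence::computeHash(
//       op,
//       /*hashOperands=*/[](Value v) { return hash_value(v); },
//       /*hashResults=*/[](Value) { return llvm::hash_code(); },
//       OperationEquivalence::IgnoreLocations);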

/// Returns true if the two regions are structurally equivalent, using the
/// given callbacks to map operand and result values across the regions.
static bool
isRegionEquivalentTo(Region *lhs, Region *rhs,
                     function_ref<LogicalResult(Value, Value)> mapOperands,
                     function_ref<LogicalResult(Value, Value)> mapResults,
                     OperationEquivalence::Flags flags) {
  DenseMap<Block *, Block *> blocksMap;
  auto blocksEquivalent = [&](Block &lBlock, Block &rBlock) {
    // Check block arguments.
    if (lBlock.getNumArguments() != rBlock.getNumArguments())
      return false;

    // Map the two blocks.
    auto insertion = blocksMap.insert({&lBlock, &rBlock});
    if (insertion.first->getSecond() != &rBlock)
      return false;

    for (auto argPair :
         llvm::zip(lBlock.getArguments(), rBlock.getArguments())) {
      Value curArg = std::get<0>(argPair);
      Value otherArg = std::get<1>(argPair);
      if (curArg.getType() != otherArg.getType())
        return false;
      if (!(flags & OperationEquivalence::IgnoreLocations) &&
          curArg.getLoc() != otherArg.getLoc())
        return false;
      // Check if this value was already mapped to another value.
      if (failed(mapOperands(curArg, otherArg)))
        return false;
    }

    auto opsEquivalent = [&](Operation &lOp, Operation &rOp) {
      // Check for op equality (recursively).
      if (!OperationEquivalence::isEquivalentTo(&lOp, &rOp, mapOperands,
                                                mapResults, flags))
        return false;
      // Check successor mapping.
      for (auto successorsPair :
           llvm::zip(lOp.getSuccessors(), rOp.getSuccessors())) {
        Block *curSuccessor = std::get<0>(successorsPair);
        Block *otherSuccessor = std::get<1>(successorsPair);
        auto insertion = blocksMap.insert({curSuccessor, otherSuccessor});
        if (insertion.first->getSecond() != otherSuccessor)
          return false;
      }
      return true;
    };
    return llvm::all_of_zip(lBlock, rBlock, opsEquivalent);
  };
  return llvm::all_of_zip(*lhs, *rhs, blocksEquivalent);
}

bool OperationEquivalence::isEquivalentTo(
    Operation *lhs, Operation *rhs,
    function_ref<LogicalResult(Value, Value)> mapOperands,
    function_ref<LogicalResult(Value, Value)> mapResults, Flags flags) {
  if (lhs == rhs)
    return true;

  // Compare the operation properties.
  if (lhs->getName() != rhs->getName() ||
      lhs->getAttrDictionary() != rhs->getAttrDictionary() ||
      lhs->getNumRegions() != rhs->getNumRegions() ||
      lhs->getNumSuccessors() != rhs->getNumSuccessors() ||
      lhs->getNumOperands() != rhs->getNumOperands() ||
      lhs->getNumResults() != rhs->getNumResults())
    return false;
  if (!(flags & IgnoreLocations) && lhs->getLoc() != rhs->getLoc())
    return false;

  auto checkValueRangeMapping =
      [](ValueRange lhs, ValueRange rhs,
         function_ref<LogicalResult(Value, Value)> mapValues) {
        for (auto operandPair : llvm::zip(lhs, rhs)) {
          Value curArg = std::get<0>(operandPair);
          Value otherArg = std::get<1>(operandPair);
          if (curArg.getType() != otherArg.getType())
            return false;
          if (failed(mapValues(curArg, otherArg)))
            return false;
        }
        return true;
      };
  // Check mapping of operands and results.
  if (!checkValueRangeMapping(lhs->getOperands(), rhs->getOperands(),
                              mapOperands))
    return false;
  if (!checkValueRangeMapping(lhs->getResults(), rhs->getResults(), mapResults))
    return false;
  for (auto regionPair : llvm::zip(lhs->getRegions(), rhs->getRegions()))
    if (!isRegionEquivalentTo(&std::get<0>(regionPair),
                              &std::get<1>(regionPair), mapOperands, mapResults,
                              flags))
      return false;
  return true;
}
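
// A minimal sketch of an exact-equivalence query, requiring operands and
// results to be identical Values; the lambdas are illustrative (only
// isEquivalentTo and IgnoreLocations come from the code above):
//
//   bool same = OperationEquivalence::isEquivalentTo(
//       opA, opB,
//       /*mapOperands=*/[](Value a, Value b) { return success(a == b); },
//       /*mapResults=*/[](Value a, Value b) { return success(a == b); },
//       OperationEquivalence::IgnoreLocations);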