Allow creating standalone Regions
Currently, regions can only be constructed by passing in a `Function` or an
`Instruction` pointer referencing the parent object, unlike `Function`s or
`Instruction`s themselves, which can be created without a parent. This leads to
a rather complex flow in operation construction, where one has to create the
operation before being able to work with its regions. However, it may be
necessary to work with regions before the operation is created, in particular
in `build` and `parse` functions that are executed _before_ the operation is
created, in cases where boilerplate region manipulation is required (for
example, inserting the hypothetical default terminator in affine regions).
Allow creating standalone regions. Such regions are meant to own a list of
blocks and transfer them to other regions on demand.
Each instruction stores a fixed number of regions as trailing objects and has
ownership of them. This decreases the size of the Instruction object for the
common case of instructions without regions. Keep this behavior intact. To
allow some flexibility in construction, make OperationState store an owning
vector of regions. When the Builder creates an Instruction from
OperationState, the bodies of the regions are transferred into the
instruction-owned regions to minimize copying. Thus, it becomes possible to
fill standalone regions with blocks and move them to an operation when it is
constructed, or move blocks from a region to an operation region, e.g., for
inlining.
PiperOrigin-RevId: 240368183
2019-03-27 00:55:06 +08:00
//===- OperationSupport.cpp -----------------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file contains out-of-line implementations of the support types that
// Operation and related classes build on top of.
//
//===----------------------------------------------------------------------===//

#include "mlir/IR/OperationSupport.h"
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/OpDefinition.h"
#include "llvm/ADT/BitVector.h"
#include <numeric>

using namespace mlir;

//===----------------------------------------------------------------------===//
// NamedAttrList
//===----------------------------------------------------------------------===//

NamedAttrList::NamedAttrList(ArrayRef<NamedAttribute> attributes) {
  assign(attributes.begin(), attributes.end());
}

NamedAttrList::NamedAttrList(DictionaryAttr attributes)
    : NamedAttrList(attributes ? attributes.getValue()
                               : ArrayRef<NamedAttribute>()) {
  dictionarySorted.setPointerAndInt(attributes, true);
}

NamedAttrList::NamedAttrList(const_iterator in_start, const_iterator in_end) {
  assign(in_start, in_end);
}

ArrayRef<NamedAttribute> NamedAttrList::getAttrs() const { return attrs; }

Optional<NamedAttribute> NamedAttrList::findDuplicate() const {
  Optional<NamedAttribute> duplicate =
      DictionaryAttr::findDuplicate(attrs, isSorted());
  // DictionaryAttr::findDuplicate will sort the list, so reset the sorted
  // state.
  if (!isSorted())
    dictionarySorted.setPointerAndInt(nullptr, true);
  return duplicate;
}

DictionaryAttr NamedAttrList::getDictionary(MLIRContext *context) const {
  if (!isSorted()) {
    DictionaryAttr::sortInPlace(attrs);
    dictionarySorted.setPointerAndInt(nullptr, true);
  }
  if (!dictionarySorted.getPointer())
    dictionarySorted.setPointer(DictionaryAttr::getWithSorted(context, attrs));
  return dictionarySorted.getPointer().cast<DictionaryAttr>();
}

/// Add an attribute with the specified name.
void NamedAttrList::append(StringRef name, Attribute attr) {
  append(Identifier::get(name, attr.getContext()), attr);
}

/// Replaces the attributes with a new list of attributes.
void NamedAttrList::assign(const_iterator in_start, const_iterator in_end) {
  DictionaryAttr::sort(ArrayRef<NamedAttribute>{in_start, in_end}, attrs);
  dictionarySorted.setPointerAndInt(nullptr, true);
}

void NamedAttrList::push_back(NamedAttribute newAttribute) {
  assert(newAttribute.second && "unexpected null attribute");
  if (isSorted())
    dictionarySorted.setInt(
        attrs.empty() ||
        strcmp(attrs.back().first.data(), newAttribute.first.data()) < 0);
  dictionarySorted.setPointer(nullptr);
  attrs.push_back(newAttribute);
}

/// Helper function to find an attribute in a possibly sorted vector of
/// NamedAttributes.
template <typename T>
static auto *findAttr(SmallVectorImpl<NamedAttribute> &attrs, T name,
                      bool sorted) {
  if (!sorted) {
    return llvm::find_if(
        attrs, [name](NamedAttribute attr) { return attr.first == name; });
  }

  auto *it = llvm::lower_bound(attrs, name);
  if (it == attrs.end() || it->first != name)
    return attrs.end();
  return it;
}

/// Return the specified attribute if present, null otherwise.
Attribute NamedAttrList::get(StringRef name) const {
  auto *it = findAttr(attrs, name, isSorted());
  return it != attrs.end() ? it->second : nullptr;
}

/// Return the specified attribute if present, null otherwise.
Attribute NamedAttrList::get(Identifier name) const {
  auto *it = findAttr(attrs, name, isSorted());
  return it != attrs.end() ? it->second : nullptr;
}

/// Return the specified named attribute if present, None otherwise.
Optional<NamedAttribute> NamedAttrList::getNamed(StringRef name) const {
  auto *it = findAttr(attrs, name, isSorted());
  return it != attrs.end() ? *it : Optional<NamedAttribute>();
}

Optional<NamedAttribute> NamedAttrList::getNamed(Identifier name) const {
  auto *it = findAttr(attrs, name, isSorted());
  return it != attrs.end() ? *it : Optional<NamedAttribute>();
}

/// If an attribute exists with the specified name, change it to the new
/// value. Otherwise, add a new attribute with the specified name/value.
Attribute NamedAttrList::set(Identifier name, Attribute value) {
  assert(value && "attributes may never be null");

  // Look for an existing value for the given name, and set it in-place.
  auto *it = findAttr(attrs, name, isSorted());
  if (it != attrs.end()) {
    // Only update if the value is different from the existing.
    Attribute oldValue = it->second;
    if (oldValue != value) {
      dictionarySorted.setPointer(nullptr);
      it->second = value;
    }
    return oldValue;
  }

  // Otherwise, insert the new attribute into its sorted position.
  it = llvm::lower_bound(attrs, name);
  dictionarySorted.setPointer(nullptr);
  attrs.insert(it, {name, value});
  return Attribute();
}

Attribute NamedAttrList::set(StringRef name, Attribute value) {
  assert(value && "setting null attribute not supported");
  return set(mlir::Identifier::get(name, value.getContext()), value);
}

Attribute
NamedAttrList::eraseImpl(SmallVectorImpl<NamedAttribute>::iterator it) {
  if (it == attrs.end())
    return nullptr;

  // Erasing does not affect the sorted property.
  Attribute attr = it->second;
  attrs.erase(it);
  dictionarySorted.setPointer(nullptr);
  return attr;
}

Attribute NamedAttrList::erase(Identifier name) {
  return eraseImpl(findAttr(attrs, name, isSorted()));
}

Attribute NamedAttrList::erase(StringRef name) {
  return eraseImpl(findAttr(attrs, name, isSorted()));
}

NamedAttrList &
NamedAttrList::operator=(const SmallVectorImpl<NamedAttribute> &rhs) {
  assign(rhs.begin(), rhs.end());
  return *this;
}

NamedAttrList::operator ArrayRef<NamedAttribute>() const { return attrs; }

//===----------------------------------------------------------------------===//
// OperationState
//===----------------------------------------------------------------------===//

OperationState::OperationState(Location location, StringRef name)
    : location(location), name(name, location->getContext()) {}

OperationState::OperationState(Location location, OperationName name)
    : location(location), name(name) {}

OperationState::OperationState(Location location, StringRef name,
                               ValueRange operands, TypeRange types,
                               ArrayRef<NamedAttribute> attributes,
                               BlockRange successors,
                               MutableArrayRef<std::unique_ptr<Region>> regions)
    : location(location), name(name, location->getContext()),
      operands(operands.begin(), operands.end()),
      types(types.begin(), types.end()),
      attributes(attributes.begin(), attributes.end()),
      successors(successors.begin(), successors.end()) {
  for (std::unique_ptr<Region> &r : regions)
    this->regions.push_back(std::move(r));
}

void OperationState::addOperands(ValueRange newOperands) {
  operands.append(newOperands.begin(), newOperands.end());
}

void OperationState::addSuccessors(BlockRange newSuccessors) {
  successors.append(newSuccessors.begin(), newSuccessors.end());
}

Region *OperationState::addRegion() {
  regions.emplace_back(new Region);
  return regions.back().get();
}

void OperationState::addRegion(std::unique_ptr<Region> &&region) {
  regions.push_back(std::move(region));
}

void OperationState::addRegions(
    MutableArrayRef<std::unique_ptr<Region>> regions) {
  for (std::unique_ptr<Region> &region : regions)
    addRegion(std::move(region));
}

2019-03-27 05:45:38 +08:00
|
|
|
//===----------------------------------------------------------------------===//
|
|
|
|
// OperandStorage
|
|
|
|
//===----------------------------------------------------------------------===//
|
|
|
|
|
2020-04-27 12:28:22 +08:00
|
|
|
detail::OperandStorage::OperandStorage(Operation *owner, ValueRange values)
|
2021-05-07 03:09:16 +08:00
|
|
|
: inlineStorage() {
|
2020-04-27 12:28:22 +08:00
|
|
|
auto &inlineStorage = getInlineStorage();
|
|
|
|
inlineStorage.numOperands = inlineStorage.capacity = values.size();
|
|
|
|
auto *operandPtrBegin = getTrailingObjects<OpOperand>();
|
|
|
|
for (unsigned i = 0, e = inlineStorage.numOperands; i < e; ++i)
|
|
|
|
new (&operandPtrBegin[i]) OpOperand(owner, values[i]);
|
|
|
|
}

detail::OperandStorage::~OperandStorage() {
  // Destruct the current storage container.
  if (isDynamicStorage()) {
    TrailingOperandStorage &storage = getDynamicStorage();
    storage.~TrailingOperandStorage();
    // Work around -Wfree-nonheap-object false positive fixed by D102728.
    auto *mem = &storage;
    free(mem);
  } else {
    getInlineStorage().~TrailingOperandStorage();
  }
}

/// Replace the operands contained in the storage with the ones provided in
/// 'values'.
void detail::OperandStorage::setOperands(Operation *owner, ValueRange values) {
  MutableArrayRef<OpOperand> storageOperands = resize(owner, values.size());
  for (unsigned i = 0, e = values.size(); i != e; ++i)
    storageOperands[i].set(values[i]);
}

/// Replace the operands beginning at 'start' and ending at 'start' + 'length'
/// with the ones provided in 'operands'. 'operands' may be smaller or larger
/// than the range pointed to by 'start'+'length'.
void detail::OperandStorage::setOperands(Operation *owner, unsigned start,
                                         unsigned length, ValueRange operands) {
  // If the new size is the same, we can update inplace.
  unsigned newSize = operands.size();
  if (newSize == length) {
    MutableArrayRef<OpOperand> storageOperands = getOperands();
    for (unsigned i = 0, e = length; i != e; ++i)
      storageOperands[start + i].set(operands[i]);
    return;
  }
  // If the new size is smaller, remove the extra operands and set the rest
  // inplace.
  if (newSize < length) {
    eraseOperands(start + operands.size(), length - newSize);
    setOperands(owner, start, newSize, operands);
    return;
  }
  // Otherwise, the new size is greater so we need to grow the storage.
  auto storageOperands = resize(owner, size() + (newSize - length));

  // Shift operands to the right to make space for the new operands.
  unsigned rotateSize = storageOperands.size() - (start + length);
  auto rbegin = storageOperands.rbegin();
  std::rotate(rbegin, std::next(rbegin, newSize - length), rbegin + rotateSize);

  // Update the operands inplace.
  for (unsigned i = 0, e = operands.size(); i != e; ++i)
    storageOperands[start + i].set(operands[i]);
}
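The reverse-iterator `std::rotate` trick used above — grow the storage at the end, then rotate the tail right to open a gap at the insertion point — can be demonstrated on plain ints. `openGap` is a hypothetical helper written only to illustrate the same index arithmetic:

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Grow `v` by `extra` default-constructed slots at the end, then shift the
// tail [start + length, oldSize) right by `extra` with std::rotate over
// reverse iterators, opening a gap at [start + length, start + length + extra).
// Assumes start + length <= v.size().
std::vector<int> openGap(std::vector<int> v, unsigned start, unsigned length,
                         unsigned extra) {
  unsigned oldSize = v.size();
  v.resize(oldSize + extra); // new (zeroed) slots appear at the end
  unsigned rotateSize = v.size() - (start + length);
  auto rbegin = v.rbegin();
  // In the reversed view, move the `extra` fresh slots from the front of the
  // reversed range to just past the shifted tail.
  std::rotate(rbegin, std::next(rbegin, extra), rbegin + rotateSize);
  return v;
}
```

For `{1, 2, 3, 4, 5}` with `start = 1`, `length = 2`, `extra = 2`, the result is `{1, 2, 3, 0, 0, 4, 5}`: the gap opens right after the original sub-range, exactly where `setOperands` writes the additional operands.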

/// Erase the operands held by the storage within the given range.
void detail::OperandStorage::eraseOperands(unsigned start, unsigned length) {
  TrailingOperandStorage &storage = getStorage();
  MutableArrayRef<OpOperand> operands = storage.getOperands();
  assert((start + length) <= operands.size());
  storage.numOperands -= length;

  // Shift all operands down if the operand to remove is not at the end.
  if (start != storage.numOperands) {
    auto *indexIt = std::next(operands.begin(), start);
    std::rotate(indexIt, std::next(indexIt, length), operands.end());
  }
  for (unsigned i = 0; i != length; ++i)
    operands[storage.numOperands + i].~OpOperand();
}

void detail::OperandStorage::eraseOperands(
    const llvm::BitVector &eraseIndices) {
  TrailingOperandStorage &storage = getStorage();
  MutableArrayRef<OpOperand> operands = storage.getOperands();
  assert(eraseIndices.size() == operands.size());

  // Check that at least one operand is erased.
  int firstErasedIndex = eraseIndices.find_first();
  if (firstErasedIndex == -1)
    return;

  // Shift all of the removed operands to the end, and destroy them.
  storage.numOperands = firstErasedIndex;
  for (unsigned i = firstErasedIndex + 1, e = operands.size(); i < e; ++i)
    if (!eraseIndices.test(i))
      operands[storage.numOperands++] = std::move(operands[i]);
  for (OpOperand &operand : operands.drop_front(storage.numOperands))
    operand.~OpOperand();
}

/// Resize the storage to the given size. Returns the array containing the new
/// operands.
MutableArrayRef<OpOperand> detail::OperandStorage::resize(Operation *owner,
                                                          unsigned newSize) {
  TrailingOperandStorage &storage = getStorage();

  // If the number of operands is less than or equal to the current amount, we
  // can just update in place.
  unsigned &numOperands = storage.numOperands;
  MutableArrayRef<OpOperand> operands = storage.getOperands();
  if (newSize <= numOperands) {
    // If the new size is less than the current number, remove any extra
    // operands.
    for (unsigned i = newSize; i != numOperands; ++i)
      operands[i].~OpOperand();
    numOperands = newSize;
    return operands.take_front(newSize);
  }

  // If the new size is within the original inline capacity, grow inplace.
  if (newSize <= storage.capacity) {
    OpOperand *opBegin = operands.data();
    for (unsigned e = newSize; numOperands != e; ++numOperands)
      new (&opBegin[numOperands]) OpOperand(owner);
    return MutableArrayRef<OpOperand>(opBegin, newSize);
  }

  // Otherwise, we need to allocate a new storage.
  unsigned newCapacity =
      std::max(unsigned(llvm::NextPowerOf2(storage.capacity + 2)), newSize);
  auto *newStorageMem =
      malloc(TrailingOperandStorage::totalSizeToAlloc<OpOperand>(newCapacity));
  auto *newStorage = ::new (newStorageMem) TrailingOperandStorage();
  newStorage->numOperands = newSize;
  newStorage->capacity = newCapacity;

  // Move the current operands to the new storage.
  MutableArrayRef<OpOperand> newOperands = newStorage->getOperands();
  std::uninitialized_copy(std::make_move_iterator(operands.begin()),
                          std::make_move_iterator(operands.end()),
                          newOperands.begin());

  // Destroy the original operands.
  for (auto &operand : operands)
    operand.~OpOperand();

  // Initialize any new operands.
  for (unsigned e = newSize; numOperands != e; ++numOperands)
    new (&newOperands[numOperands]) OpOperand(owner);

  // If the current storage is also dynamic, free it.
  if (isDynamicStorage()) {
    // Work around -Wfree-nonheap-object false positive fixed by D102728.
    auto *mem = &storage;
    free(mem);
  }

  // Update the storage representation to use the new dynamic storage.
  dynamicStorage.setPointerAndInt(newStorage, true);
  return newOperands;
}
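The capacity computation in the allocation path above amortizes growth by rounding up to the next power of two while never allocating less than requested. A standalone sketch — `nextPowerOf2` is a hypothetical reimplementation of `llvm::NextPowerOf2` (smallest power of two strictly greater than the argument), written so the policy can be exercised without LLVM:

```cpp
#include <algorithm>
#include <cassert>

// Rough equivalent of llvm::NextPowerOf2: the smallest power of two
// strictly greater than `v`.
unsigned nextPowerOf2(unsigned v) {
  unsigned p = 1;
  while (p <= v)
    p <<= 1;
  return p;
}

// Growth policy used by resize() above: roughly double the current
// capacity (the +2 keeps tiny capacities from stalling), but never
// return less than the requested size.
unsigned newCapacity(unsigned capacity, unsigned newSize) {
  return std::max(nextPowerOf2(capacity + 2), newSize);
}
```

Taking the max with `newSize` means a single huge request is satisfied exactly, while a sequence of small appends still gets geometric growth.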

//===----------------------------------------------------------------------===//
// Operation Value-Iterators
//===----------------------------------------------------------------------===//

//===----------------------------------------------------------------------===//
// OperandRange

OperandRange::OperandRange(Operation *op)
    : OperandRange(op->getOpOperands().data(), op->getNumOperands()) {}

unsigned OperandRange::getBeginOperandIndex() const {
  assert(!empty() && "range must not be empty");
  return base->getOperandNumber();
}

[mlir] Add support for VariadicOfVariadic operands
This revision adds native ODS support for VariadicOfVariadic operand
groups. An example of this is the SwitchOp, which has a variadic number
of nested operand ranges for each of the case statements, where the
number of case statements is variadic. Builtin ODS support allows for
generating proper accessors for the nested operand ranges, builder
support, and declarative format support. VariadicOfVariadic operands
are supported by providing a segment attribute to use to store the
operand groups, mapping similarly to the AttrSizedOperand trait
(but with a user defined attribute name).
`build` methods for VariadicOfVariadic operand expect inputs of the
form `ArrayRef<ValueRange>`. Accessors for the variadic ranges
return a new `OperandRangeRange` type, which represents a
contiguous range of `OperandRange`. In the declarative assembly
format, VariadicOfVariadic operands and types are by default
formatted as a comma delimited list of value lists:
`(<value>, <value>), (), (<value>)`.
Differential Revision: https://reviews.llvm.org/D107774
2021-08-24 04:23:09 +08:00
OperandRangeRange OperandRange::split(ElementsAttr segmentSizes) const {
  return OperandRangeRange(*this, segmentSizes);
}

//===----------------------------------------------------------------------===//
// OperandRangeRange

OperandRangeRange::OperandRangeRange(OperandRange operands,
                                     Attribute operandSegments)
    : OperandRangeRange(OwnerT(operands.getBase(), operandSegments), 0,
                        operandSegments.cast<DenseElementsAttr>().size()) {}

OperandRange OperandRangeRange::join() const {
  const OwnerT &owner = getBase();
  auto sizeData = owner.second.cast<DenseElementsAttr>().getValues<uint32_t>();
  return OperandRange(owner.first,
                      std::accumulate(sizeData.begin(), sizeData.end(), 0));
}

OperandRange OperandRangeRange::dereference(const OwnerT &object,
                                            ptrdiff_t index) {
  auto sizeData = object.second.cast<DenseElementsAttr>().getValues<uint32_t>();
  uint32_t startIndex =
      std::accumulate(sizeData.begin(), sizeData.begin() + index, 0);
  return OperandRange(object.first + startIndex, *(sizeData.begin() + index));
}
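`dereference` above recovers the i-th sub-range from one flat operand list plus a per-segment size array: the start offset is the prefix sum of the preceding sizes. A minimal standalone sketch of that indexing scheme (`segmentRange` is a hypothetical helper, with a plain `std::vector<uint32_t>` in place of the segment attribute):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <utility>
#include <vector>

// Given per-segment sizes over one flat sequence, return the (start, length)
// of segment `index` by accumulating the sizes of the preceding segments,
// as OperandRangeRange::dereference does. Assumes index < sizes.size().
std::pair<uint32_t, uint32_t> segmentRange(const std::vector<uint32_t> &sizes,
                                           std::size_t index) {
  uint32_t start = std::accumulate(sizes.begin(), sizes.begin() + index, 0u);
  return {start, sizes[index]};
}
```

Empty segments cost nothing in the flat list; they only contribute a zero to the prefix sum, which is how `(<value>, <value>), (), (<value>)` groups share one operand array.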

//===----------------------------------------------------------------------===//
// MutableOperandRange

/// Construct a new mutable range from the given operand, operand start index,
/// and range length.
MutableOperandRange::MutableOperandRange(
    Operation *owner, unsigned start, unsigned length,
    ArrayRef<OperandSegment> operandSegments)
    : owner(owner), start(start), length(length),
      operandSegments(operandSegments.begin(), operandSegments.end()) {
  assert((start + length) <= owner->getNumOperands() && "invalid range");
}
MutableOperandRange::MutableOperandRange(Operation *owner)
    : MutableOperandRange(owner, /*start=*/0, owner->getNumOperands()) {}

/// Slice this range into a sub range, with the additional operand segment.
MutableOperandRange
MutableOperandRange::slice(unsigned subStart, unsigned subLen,
                           Optional<OperandSegment> segment) const {
  assert((subStart + subLen) <= length && "invalid sub-range");
  MutableOperandRange subSlice(owner, start + subStart, subLen,
                               operandSegments);
  if (segment)
    subSlice.operandSegments.push_back(*segment);
  return subSlice;
}

/// Append the given values to the range.
void MutableOperandRange::append(ValueRange values) {
  if (values.empty())
    return;
  owner->insertOperands(start + length, values);
  updateLength(length + values.size());
}

/// Assign this range to the given values.
void MutableOperandRange::assign(ValueRange values) {
  owner->setOperands(start, length, values);
  if (length != values.size())
    updateLength(/*newLength=*/values.size());
}

/// Assign the range to the given value.
void MutableOperandRange::assign(Value value) {
  if (length == 1) {
    owner->setOperand(start, value);
  } else {
    owner->setOperands(start, length, value);
    updateLength(/*newLength=*/1);
  }
}

/// Erase the operands within the given sub-range.
void MutableOperandRange::erase(unsigned subStart, unsigned subLen) {
  assert((subStart + subLen) <= length && "invalid sub-range");
  if (length == 0)
    return;
  owner->eraseOperands(start + subStart, subLen);
  updateLength(length - subLen);
}

/// Clear this range and erase all of the operands.
void MutableOperandRange::clear() {
  if (length != 0) {
    owner->eraseOperands(start, length);
    updateLength(/*newLength=*/0);
  }
}

/// Allow implicit conversion to an OperandRange.
MutableOperandRange::operator OperandRange() const {
  return owner->getOperands().slice(start, length);
}

MutableOperandRangeRange
MutableOperandRange::split(NamedAttribute segmentSizes) const {
  return MutableOperandRangeRange(*this, segmentSizes);
}

/// Update the length of this range to the one provided.
void MutableOperandRange::updateLength(unsigned newLength) {
  int32_t diff = int32_t(newLength) - int32_t(length);
  length = newLength;

  // Update any of the provided segment attributes.
  for (OperandSegment &segment : operandSegments) {
    auto attr = segment.second.second.cast<DenseIntElementsAttr>();
    SmallVector<int32_t, 8> segments(attr.getValues<int32_t>());
    segments[segment.first] += diff;
    segment.second.second = DenseIntElementsAttr::get(attr.getType(), segments);
    owner->setAttr(segment.second.first, segment.second.second);
  }
}
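The bookkeeping in `updateLength` — absorb the length delta into the owning segment-size entry — reduces to one signed adjustment per registered segment. A hypothetical sketch with a plain `std::vector<int32_t>` standing in for the dense segment attribute:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// When a mutable sub-range changes length, the segment-size entry that owns
// it must absorb the difference, as updateLength() does for each registered
// segment attribute. Assumes segmentIndex < segments.size().
void updateSegment(std::vector<int32_t> &segments, unsigned segmentIndex,
                   unsigned oldLength, unsigned newLength) {
  // Signed arithmetic: shrinking a sub-range yields a negative diff.
  int32_t diff = int32_t(newLength) - int32_t(oldLength);
  segments[segmentIndex] += diff;
}
```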

//===----------------------------------------------------------------------===//
// MutableOperandRangeRange

MutableOperandRangeRange::MutableOperandRangeRange(
    const MutableOperandRange &operands, NamedAttribute operandSegmentAttr)
    : MutableOperandRangeRange(
          OwnerT(operands, operandSegmentAttr), 0,
          operandSegmentAttr.second.cast<DenseElementsAttr>().size()) {}

MutableOperandRange MutableOperandRangeRange::join() const {
  return getBase().first;
}

MutableOperandRangeRange::operator OperandRangeRange() const {
  return OperandRangeRange(getBase().first,
                           getBase().second.second.cast<DenseElementsAttr>());
}

MutableOperandRange MutableOperandRangeRange::dereference(const OwnerT &object,
                                                          ptrdiff_t index) {
  auto sizeData =
      object.second.second.cast<DenseElementsAttr>().getValues<uint32_t>();
  uint32_t startIndex =
      std::accumulate(sizeData.begin(), sizeData.begin() + index, 0);
  return object.first.slice(
      startIndex, *(sizeData.begin() + index),
      MutableOperandRange::OperandSegment(index, object.second));
}

//===----------------------------------------------------------------------===//
// ValueRange

ValueRange::ValueRange(ArrayRef<Value> values)
    : ValueRange(values.data(), values.size()) {}
ValueRange::ValueRange(OperandRange values)
    : ValueRange(values.begin().getBase(), values.size()) {}
[mlir][IR] Refactor the internal implementation of Value
The current implementation of Value involves a pointer int pair with several different kinds of owners, i.e. BlockArgumentImpl*, Operation *, TrailingOpResult*. This design arose from the desire to save memory overhead for operations that have a very small number of results (generally 0-2). There are, unfortunately, many problematic aspects of the current implementation that make Values difficult to work with or just inefficient.
Operation result types are stored as a separate array on the Operation. This is very inefficient for many reasons: we use TupleType for multiple results, which can lead to huge amounts of memory usage if multi-result operations change types frequently(they do). It also means that simple methods like Value::getType/Value::setType now require complex logic to get to the desired type.
Value only has one pointer bit free, severely limiting the ability to use it in things like PointerUnion/PointerIntPair. Given that we store the kind of a Value along with the "owner" pointer, we only leave one bit free for users of Value. This creates situations where we end up nesting PointerUnions to be able to use Value in one.
As noted above, most of the methods in Value need to branch on at least 3 different cases which is both inefficient, possibly error prone, and verbose. The current storage of results also creates problems for utilities like ValueRange/TypeRange, which want to efficiently store base pointers to ranges (of which Operation* isn't really useful as one).
This revision greatly simplifies the implementation of Value by the introduction of a new ValueImpl class. This class contains all of the state shared between all of the various derived value classes; i.e. the use list, the type, and the kind. This shared implementation class provides several large benefits:
* Most of the methods on value are now branchless, and often one-liners.
* The "kind" of the value is now stored in ValueImpl instead of Value
This frees up all of Value's pointer bits, allowing for users to take full advantage of PointerUnion/PointerIntPair/etc. It also allows for storing more operation results as "inline", 6 now instead of 2, freeing up 1 word per new inline result.
* Operation result types are now stored in the result, instead of a side array
This drops the size of zero-result operations by 1 word. It also removes the memory crushing use of TupleType for operations results (which could lead up to hundreds of megabytes of "dead" TupleTypes in the context). This also allowed restructured ValueRange, making it simpler and one word smaller.
This revision does come with two conceptual downsides:
* Operation::getResultTypes no longer returns an ArrayRef<Type>
This conceptually makes some usages slower, as the iterator increment is slightly more complex.
* OpResult::getOwner is slightly more expensive, as it now requires a little bit of arithmetic
From profiling, neither of the conceptual downsides have resulted in any perceivable hit to performance. Given the advantages of the new design, most compiles are slightly faster.
Differential Revision: https://reviews.llvm.org/D97804
2021-03-04 06:23:14 +08:00
ValueRange::ValueRange(ResultRange values)
    : ValueRange(values.getBase(), values.size()) {}

/// See `llvm::detail::indexed_accessor_range_base` for details.
ValueRange::OwnerT ValueRange::offset_base(const OwnerT &owner,
                                           ptrdiff_t index) {
  if (const auto *value = owner.dyn_cast<const Value *>())
    return {value + index};
  if (auto *operand = owner.dyn_cast<OpOperand *>())
    return {operand + index};
  return owner.get<detail::OpResultImpl *>()->getNextResultAtOffset(index);
}
|
2020-04-15 05:53:07 +08:00
|
|
|
/// See `llvm::detail::indexed_accessor_range_base` for details.
|
2019-12-24 04:36:20 +08:00
|
|
|
Value ValueRange::dereference_iterator(const OwnerT &owner, ptrdiff_t index) {
  if (const auto *value = owner.dyn_cast<const Value *>())
    return value[index];
  if (auto *operand = owner.dyn_cast<OpOperand *>())
    return operand[index].get();
  return owner.get<detail::OpResultImpl *>()->getNextResultAtOffset(index);
}

//===----------------------------------------------------------------------===//
// Operation Equivalency
//===----------------------------------------------------------------------===//

llvm::hash_code OperationEquivalence::computeHash(
    Operation *op, function_ref<llvm::hash_code(Value)> hashOperands,
    function_ref<llvm::hash_code(Value)> hashResults, Flags flags) {
  // Hash operations based upon their:
  // - Operation Name
  // - Attributes
  // - Result Types
  llvm::hash_code hash = llvm::hash_combine(
      op->getName(), op->getAttrDictionary(), op->getResultTypes());

  // - Operands
  for (Value operand : op->getOperands())
    hash = llvm::hash_combine(hash, hashOperands(operand));
  // - Results
  for (Value result : op->getResults())
    hash = llvm::hash_combine(hash, hashResults(result));
  return hash;
}

static bool
isRegionEquivalentTo(Region *lhs, Region *rhs,
                     function_ref<LogicalResult(Value, Value)> mapOperands,
                     function_ref<LogicalResult(Value, Value)> mapResults,
                     OperationEquivalence::Flags flags) {
  DenseMap<Block *, Block *> blocksMap;
  auto blocksEquivalent = [&](Block &lBlock, Block &rBlock) {
    // Check block arguments.
    if (lBlock.getNumArguments() != rBlock.getNumArguments())
      return false;

    // Map the two blocks.
    auto insertion = blocksMap.insert({&lBlock, &rBlock});
    if (insertion.first->getSecond() != &rBlock)
      return false;

    for (auto argPair :
         llvm::zip(lBlock.getArguments(), rBlock.getArguments())) {
      Value curArg = std::get<0>(argPair);
      Value otherArg = std::get<1>(argPair);
      if (curArg.getType() != otherArg.getType())
        return false;
      if (!(flags & OperationEquivalence::IgnoreLocations) &&
          curArg.getLoc() != otherArg.getLoc())
        return false;
      // Check if this value was already mapped to another value.
      if (failed(mapOperands(curArg, otherArg)))
        return false;
    }

    auto opsEquivalent = [&](Operation &lOp, Operation &rOp) {
      // Check for op equality (recursively).
      if (!OperationEquivalence::isEquivalentTo(&lOp, &rOp, mapOperands,
                                                mapResults, flags))
        return false;
      // Check successor mapping.
      for (auto successorsPair :
           llvm::zip(lOp.getSuccessors(), rOp.getSuccessors())) {
        Block *curSuccessor = std::get<0>(successorsPair);
        Block *otherSuccessor = std::get<1>(successorsPair);
        auto insertion = blocksMap.insert({curSuccessor, otherSuccessor});
        if (insertion.first->getSecond() != otherSuccessor)
          return false;
      }
      return true;
    };
    return llvm::all_of_zip(lBlock, rBlock, opsEquivalent);
  };
  return llvm::all_of_zip(*lhs, *rhs, blocksEquivalent);
}

bool OperationEquivalence::isEquivalentTo(
    Operation *lhs, Operation *rhs,
    function_ref<LogicalResult(Value, Value)> mapOperands,
    function_ref<LogicalResult(Value, Value)> mapResults, Flags flags) {
  if (lhs == rhs)
    return true;

  // Compare the operation properties.
  if (lhs->getName() != rhs->getName() ||
      lhs->getAttrDictionary() != rhs->getAttrDictionary() ||
      lhs->getNumRegions() != rhs->getNumRegions() ||
      lhs->getNumSuccessors() != rhs->getNumSuccessors() ||
      lhs->getNumOperands() != rhs->getNumOperands() ||
      lhs->getNumResults() != rhs->getNumResults())
    return false;
  if (!(flags & IgnoreLocations) && lhs->getLoc() != rhs->getLoc())
    return false;

  auto checkValueRangeMapping =
      [](ValueRange lhs, ValueRange rhs,
         function_ref<LogicalResult(Value, Value)> mapValues) {
        for (auto operandPair : llvm::zip(lhs, rhs)) {
          Value curArg = std::get<0>(operandPair);
          Value otherArg = std::get<1>(operandPair);
          if (curArg.getType() != otherArg.getType())
            return false;
          if (failed(mapValues(curArg, otherArg)))
            return false;
        }
        return true;
      };
  // Check mapping of operands and results.
  if (!checkValueRangeMapping(lhs->getOperands(), rhs->getOperands(),
                              mapOperands))
    return false;
  if (!checkValueRangeMapping(lhs->getResults(), rhs->getResults(), mapResults))
    return false;
  for (auto regionPair : llvm::zip(lhs->getRegions(), rhs->getRegions()))
    if (!isRegionEquivalentTo(&std::get<0>(regionPair),
                              &std::get<1>(regionPair), mapOperands, mapResults,
                              flags))
      return false;
  return true;
}