Add an option to predicate the epilogue within the kernel instead of
peeling the epilogue. This is useful to avoid generating a large
amount of code for deep pipelines. It currently requires a user
lambda to implement operation predication.
Differential Revision: https://reviews.llvm.org/D126753
Similar to complex128/complex64, float16 has no direct support
in the ctypes implementation. This fixes the issue by using a
custom F16 type to change the view in and out of MLIR code.
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D126928
Since version 0.7 we've added:
* Initial language support for TableGen
* Tweaked syntax highlighting for PDLL
* Added a new command to view intermediate PDLL output
This commit enables providing long-form documentation more seamlessly to the LSP
by revamping decl documentation. For ODS imported constructs, we now also import
descriptions and attach them to decls when possible. For PDLL constructs, the LSP will
now try to provide documentation by parsing the comments directly above the decl's
location within the source file. This commit also adds a new parser flag
`enableDocumentation` that gates the import and attachment of ODS documentation,
which is unnecessary in the normal build process (i.e. it should only be used/consumed
by tools).
Differential Revision: https://reviews.llvm.org/D124881
This commit defines a dataflow analysis for integer ranges, which
uses a newly-added InferIntRangeInterface to compute the lower and
upper bounds on the results of an operation from the bounds on the
arguments. The range inference is a flow-insensitive dataflow analysis
that can be used to simplify code, such as by statically identifying
bounds checks that cannot fail in order to eliminate them.
The InferIntRangeInterface has one method, inferResultRanges(), which
takes a vector of inferred ranges for each argument to an op
implementing the interface and a callback allowing the implementation
to define the ranges for each result. These ranges are stored as
ConstantIntRanges, which hold the lower and upper bounds for a
value. Bounds are tracked separately for the signed and unsigned
interpretations of a value, which ensures that the impact of
arithmetic overflows is correctly tracked during the analysis.
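A rough sketch of what an implementation might look like (the op is
hypothetical and the signatures are assumed from the description above, so
they may not match the in-tree declarations exactly):
```cpp
#include "mlir/Interfaces/InferIntRangeInterface.h"

using namespace mlir;

// Hypothetical constant-like op: its value is its own lower and upper
// bound under both the signed and the unsigned interpretation.
void MyConstantOp::inferResultRanges(ArrayRef<ConstantIntRanges> argRanges,
                                     SetIntRangeFn setResultRanges) {
  APInt value = getValue();
  setResultRanges(getResult(),
                  ConstantIntRanges(/*umin=*/value, /*umax=*/value,
                                    /*smin=*/value, /*smax=*/value));
}
```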
The commit also adds a -test-int-range-inference pass to test the
analysis until it is integrated into SCCP or otherwise exposed.
Finally, this commit fixes some bugs relating to the handling of
region iteration arguments and terminators in the data flow analysis
framework.
Depends on D124020
Depends on D124021
Reviewed By: rriddle, Mogball
Differential Revision: https://reviews.llvm.org/D124023
The example was still using the now-removed sparse_tensor.init_tensor.
Also, I made the input operands of the matrix multiplication sparse too
(since it looks a bit strange to multiply two dense matrices into a sparse one).
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D126897
In strict mode, only newly inserted operations are allowed to be added to the
worklist. Before this change, the driver would also add the users of a replaced
op without checking whether those users are allowed to be pushed onto the
worklist.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D126899
Prior to this patch, the lowering of memref.reshape operations to the
LLVM dialect failed if the shape argument had a static shape with
dynamic dimensions. This patch adds the necessary support so that when
the shape argument has dynamic values, the lowering probes the dimension
at runtime to set the size in the `MemRefDescriptor` type. This patch
also computes the stride for dynamic dimensions by deriving it from the
sizes of the inner dimensions.
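For illustration, a minimal plain-C++ sketch of that stride derivation
(names are hypothetical; the actual lowering builds these values as IR):
```cpp
#include <cstdint>
#include <vector>

// Strides in a row-major descriptor are the running product of the
// inner dimension sizes, accumulated from the innermost dimension out.
std::vector<int64_t> computeStrides(const std::vector<int64_t> &sizes) {
  std::vector<int64_t> strides(sizes.size());
  int64_t running = 1;
  for (int64_t i = static_cast<int64_t>(sizes.size()) - 1; i >= 0; --i) {
    strides[i] = running;
    running *= sizes[i]; // a dynamic size here is probed at runtime
  }
  return strides;
}
```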
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D126604
These ops complement the tiling/padding transformations by transforming
higher-level named structured operations such as depthwise convolutions into
lower-level and/or generic equivalents that are better handled by some
downstream transformations.
Differential Revision: https://reviews.llvm.org/D126698
This enables opaque pointers by default in LLVM. The effect of this
is twofold:
* If IR that contains *neither* explicit ptr nor %T* types is passed
to tools, we will now use opaque pointer mode, unless
-opaque-pointers=0 has been explicitly passed.
* Users of LLVM as a library will now default to opaque pointers.
It is possible to opt-out by calling setOpaquePointers(false) on
LLVMContext.
A cmake option to toggle this default will not be provided. Frontends
or other tools that want to (temporarily) keep using typed pointers
should disable opaque pointers via LLVMContext.
Differential Revision: https://reviews.llvm.org/D126689
Now that we have an AllocTensorOp (previously InitTensorOp) in the bufferization dialect, the InitOp in the sparse dialect is no longer needed.
Differential Revision: https://reviews.llvm.org/D126180
The trick of using an empty token in the `FOREVERY_O` x-macro relies on preprocessor behavior which is only standard since C99 6.10.3/4 and C++11 N3290 16.3/4 (whereas it was undefined behavior up through C++03 16.3/10). Since the `ExecutionEngine/SparseTensorUtils.cpp` file is required to be compilable under C++98 compatibility mode (unlike the C++11 used elsewhere in MLIR), we shouldn't rely on that behavior.
Also, using a non-empty suffix helps improve uniformity of the API, since all other primary/overhead suffixes are also non-empty. I'm using the suffix `0` since that's the value used by the `SparseTensorEncoding` attribute for indicating the index overhead-type.
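A simplified sketch of the pattern (macro and function names are
illustrative, not the exact in-tree definitions):
```cpp
#include <cstdint>

// Each DO(SUFFIX, TYPE) entry expands once per overhead type. Using
// the non-empty suffix `0` (matching the SparseTensorEncoding
// convention for the index overhead-type) avoids passing an empty
// macro argument, which was undefined behavior before C99/C++11.
#define FOREVERY_O(DO) \
  DO(64, uint64_t)     \
  DO(32, uint32_t)     \
  DO(16, uint16_t)     \
  DO(8, uint8_t)       \
  DO(0, uint64_t)

// Example expansion: one declaration per overhead type, each with a
// uniform, non-empty suffix.
#define DECL_GETINDICES(ONAME, O) void getIndices##ONAME(O *out);
FOREVERY_O(DECL_GETINDICES)
#undef DECL_GETINDICES
```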
Depends On D126720
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D126724
Ctlz is an intrinsic in LLVM but does not have equivalent operations in SPIR-V.
Including a decomposition gives an alternative path for these platforms.
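For reference, one classic decomposition of ctlz into widely available ops,
shown in plain C++ to illustrate the idea (the in-tree expansion pattern may
differ in detail): smear the highest set bit rightward, then popcount the
complement.
```cpp
#include <cstdint>

uint32_t ctlz32(uint32_t x) {
  // After the shifts, every bit at or below the highest set bit is 1,
  // so ~x has exactly `number of leading zeros` bits set.
  x |= x >> 1;
  x |= x >> 2;
  x |= x >> 4;
  x |= x >> 8;
  x |= x >> 16;
  return static_cast<uint32_t>(__builtin_popcount(~x));
}
```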
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D126261
There is no direct ctypes support for MLIR's complex (and thus np.complex128
and np.complex64) yet, causing the mlir python binding methods for
memrefs to crash. This revision fixes this by passing complex arrays
as tuples of floats, correcting at the boundaries for the proper view.
NOTE: some of these changes (4 -> 2) were forced by the new "linting"
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D126422
This fillRow(..., 0) is redundant because when the size of the tableau is
consistent, the resize always creates a new row, which is zero-initialized.
Also added asserts throughout to ensure the dimensions of the tableau remain
consistent.
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D126709
Fix the warning: comparison of unsigned expression in ‘>= 0’ is always
true.
Reviewed By: kiranchandramohan, shraiysh
Differential Revision: https://reviews.llvm.org/D126784
This reverts commit 9b7193f852.
This is an older branch that was committed by mistake and does not include addressed review comments, an updated version will come next.
By defining the `{primary,overhead}TypeFunctionSuffix` functions via the same x-macros used to generate the runtime library's functions themselves, this helps avoid bugs from typos or things getting out of sync.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D126720
The previous macro definition using `{...}` would fail to compile when the callsite uses a semicolon followed by an else-statement (i.e., `if (...) FATAL(...); else ...;`). Replacing the simple braces with `do{...}while(0)` (n.b., semicolon not included in the macro definition) enables callsites to use the semicolon plus else-statement syntax without problems. The new definition now requires the semicolon at all callsites, but since it was already being called that way nothing changes.
For more explanation, see <https://gcc.gnu.org/onlinedocs/cpp/Swallowing-the-Semicolon.html>
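A minimal illustration of the problem and the fix (the real FATAL body
differs):
```cpp
#include <cstdio>
#include <cstdlib>

// Brace-only version: the callsite's semicolon ends the if-statement,
// so a following `else` no longer parses.
//   #define FATAL(...) { fprintf(stderr, __VA_ARGS__); exit(1); }
//   if (failed) FATAL("oops"); else recover();   // compile error

// do/while(0) version: acts as a single statement and swallows exactly
// one trailing semicolon, so the same callsite compiles as expected.
#define FATAL(...)                                                            \
  do {                                                                        \
    std::fprintf(stderr, __VA_ARGS__);                                        \
    std::exit(1);                                                             \
  } while (0)
```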
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D126514
The primary goal of this change is to define readSparseTensorShape; the SparseTensorFile class is merely introduced along the way to reduce code duplication.
Depends On D126106
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D126233
Remove boilerplate examples and add text at the dialect level describing
what kind of operands the operations accept (i.e., scalar, tensor, or vector).
A shorter sentence describing the input operands is kept for each operation,
as this redundancy is convenient when browsing the documentation on the
website.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D126648
This patch adds support for converting LLVM intrinsics to the corresponding ops. Some intrinsics are still left to be handled specially.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D126639
The current translation uses the old "ugly"/"raw" form which used PDLValue for the arguments
and results. This commit updates the C++ generation to use the recently added sugar that
allows for directly using the desired types for the arguments and result of PDL functions.
In addition, this commit also properly imports the C++ class for ODS operations, constraints,
and interfaces. This allows for a much more convenient C++ API than previously granted
with the raw/low-level types.
Differential Revision: https://reviews.llvm.org/D124817
We were only completing on the first operand because
the completion check was outside of the parse loop.
Differential Revision: https://reviews.llvm.org/D124784
This commit adds a new PDLL specific LSP command, pdll.viewOutput, that
allows for viewing the intermediate outputs of a given PDLL file. The available
intermediate forms currently mirror those in mlir-pdll, namely: AST, MLIR, CPP.
This is extremely useful for PDLL developers, as it simplifies testing,
and is also quite useful for users, as they can easily view what is actually being
generated for their PDLL files.
This new command is added to the vscode client, and is available in the
right-click context menu of PDLL files, or via the vscode command palette.
Differential Revision: https://reviews.llvm.org/D124783
This allows for the results of operations to be inferred in certain contexts,
and matches the support in PDL for result type inference. The two main
initial cases are when the operation is used as a replacement for another
operation, or when the operation being created implements InferTypeOpInterface.
Differential Revision: https://reviews.llvm.org/D124782
Python bindings for extensions of the Transform dialect are defined in separate
Python source files that can be imported on-demand, i.e., that are not imported
with the "main" transform dialect. This requires a minor addition to the
ODS-based bindings generator. This approach is consistent with the current
model for downstream projects that are expected to bundle MLIR Python bindings:
such projects can include their custom extensions into the bundle similarly to
how they include their dialects.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D126208
Vectorization is a key transformation to achieve high performance on most
architectures. In the transform dialect, vectorization is implemented as a
parameterizable transform op. It currently applies to a scope of payload IR
delimited by some isolated-from-above op, mainly because several enabling
transformations (such as affine simplification) are needed to perform
vectorization and these transformations would apply to ops other than the "main"
computational payload op. A separate "navigation" transform op that obtains the
isolated-from-above ancestor of an op is introduced in the core transform
dialect. Even though it is currently only useful for vectorization,
isolated-from-above ops are a common anchor for transformations (usually
implemented as passes), so this op is likely to be reused in the future.
Depends On D126374
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D126542
Add ops to the structured transform extension of the transform dialect that
perform interchange, padding and scalarization on structured ops. Along with
tiling that is already defined, this provides a minimal set of transformations
necessary to build vectorizable code for a single structured op.
Define two helper traits: one that implements TransformOpInterface by applying
a function to each payload op independently and another that provides a simple
"functional-style" producer/consumer list of memory effects for the transform
ops.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D126374
I don't see a point in lit tests here, since sqrt, mul, and other ops
expand as well. I just added "smoke" tests to verify that the conversion works
and does not create any illegal ops.
I will create a patch that adds a simple integration test to
mlir/test/Integration/Dialect/ComplexOps/ that will compare the values.
Differential Revision: https://reviews.llvm.org/D126539
This patch adds support for applying a relation on domain/range of a relation.
Reviewed By: arjunp, ftynse
Differential Revision: https://reviews.llvm.org/D126339
Now that analysis and bufferization are better separated, post-analysis steps are no longer needed. Users can directly interleave analysis and bufferization as needed.
Differential Revision: https://reviews.llvm.org/D126571
The semicolons were introduced in D126105 in order to correct clang-format, but I forgot this file must be compiled as C++98 rather than C++11.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D126561
This adds operation conversion support for the threadprivate directive.
Support for memref type conversion is not implemented.
Reviewed By: kiranchandramohan, shraiysh
Differential Revision: https://reviews.llvm.org/D124610
This patch fixes the following compiler error:
error: declaration of ‘mlir::LLVM::cconv::CConv mlir::LLVM::detail::CConvAttrStorage::CConv’ changes meaning of ‘CConv’ [-fpermissive]
CConv as a member variable name was shadowing CConv as an enumeration,
hence the compiler error.
Reviewed By: ftynse, alexbatashev
Differential Revision: https://reviews.llvm.org/D126530
This essentially builds an index for the parsed records and record values (fields).
This covers quite a few cases, but is limited by the currently lackluster location
tracking in tablegen. A followup will work on plumbing more locations through
tablegen, which should greatly improve what we can do here.
Differential Revision: https://reviews.llvm.org/D125443
This allows for following links to include files. This support is effectively
identical to the logic in the PDLL language server, and code is shared as
much as possible.
Differential Revision: https://reviews.llvm.org/D125442
This provides a format for externally specifying the include directories
for a source file. The format of the tablegen database is exactly the
same as that for PDLL, namely it includes the absolute source file name and
the set of include directories. The database format is shared to simplify
the infra, and also because the format itself is general enough to share. Even
if we desire to expand in the future to contain the actual compilation command,
nothing there is specific enough that we would need two different formats.
As with PDLL, support for generating the database is added to our mlir_tablegen
cmake command.
Differential Revision: https://reviews.llvm.org/D125441
This patch adds support for the calling convention attribute in the LLVM
dialect, including enums, custom syntax, and import from LLVM IR.
Additionally, it fixes the import of the dso_local attribute.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D126161
This is a followup to D126105 to move functions in SparseTensorUtils.cpp to match their locations in SparseTensorUtils.h
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D126106
This change makes the public API of SparseTensorUtils.cpp explicit, whereas before the publicity of these functions was only implicit. Implicit publicity is sufficient for mlir-opt to generate calls to these functions, but it's not enough to enable C/C++ code to call them directly in the usual way (i.e., without going through codegen). Thus, leaving the publicity implicit prevents development of other tools (e.g., microbenchmarks).
In addition this change also marks the functions MLIR_CRUNNERUTILS_EXPORT, which is required by the JIT under certain configurations (albeit not for anything in our test suite).
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D126105
The Transform dialect uses the side effect modeling mechanism to record the
effects of the transform ops on the mapping between Transform IR values and
Payload IR ops. Introduce a checker pass that warns if a Transform IR value is
used after it has been freed (consumed). This pass is mostly intended as a
debugging aid in addition to the verification/assertion mechanisms in the
transform interpreter. It reports all potential use-after-free situations.
The implementation makes a series of simplifying assumptions to be simple and
conservative. A more advanced implementation would rely on the data flow-like
analysis associated with a side-effect resource rather than a value, which is
currently not supported by the analysis infrastructure.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D126381
If we don't specify the result index when matching an operand against the
result of a certain operation, it's supposed to match all of the operation's
results against the operand. For registered ops, this is easy to do by
indexing with either a number or a name. For unregistered ops, this commit
enables numeric result indexing for this use case.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D126330
This commit fixes the `Tensor_InsertSliceOp` `sizes` inputs/attributes
description.
Before this commit, the description said the `sizes` inputs/attributes
denote the size of the return type. But according to the
`InsertSliceOpConstantArgumentFolder` in
`lib/Dialect/Tensor/IR/TensorOps.cpp`, the `sizes` inputs/attributes
actually denote the size of the source type.
I had an off-line discussion with the authors of `TensorOps.td` and
`TensorOps.cpp`. We concluded that it was a typo in the Op description.
This commit updates the Op description to match the actual usage.
Differential Revision: https://reviews.llvm.org/D126264
Using 64-bit integer/float type in interface storage classes would
require Int64/Float64 capability, per the Vulkan spec:
```
shaderInt64 specifies whether 64-bit integers (signed and unsigned) are
supported in shader code. If this feature is not enabled, 64-bit integer
types must not be used in shader code. This also specifies whether
shader modules can declare the Int64 capability. Declaring and using
64-bit integers is enabled for all storage classes that SPIR-V allows
with the Int64 capability.
```
This is different from, say, 16-bit element types, where:
```
shaderInt16 specifies whether 16-bit integers (signed and unsigned) are
supported in shader code. If this feature is not enabled, 16-bit integer
types must not be used in shader code. This also specifies whether
shader modules can declare the Int16 capability. However, this only
enables a subset of the storage classes that SPIR-V allows for the Int16
SPIR-V capability: Declaring and using 16-bit integers in the Private,
Workgroup (for non-Block variables), and Function storage classes is
enabled, while declaring them in the interface storage classes (e.g.,
UniformConstant, Uniform, StorageBuffer, Input, Output, and
PushConstant) is not enabled.
```
Reviewed By: hanchung
Differential Revision: https://reviews.llvm.org/D126256
These attributes can carry useful information, e.g., pipelines
might use them to organize and chain patterns.
Reviewed By: hanchung
Differential Revision: https://reviews.llvm.org/D126320
This patch adds support for obtaining a set corresponding to the domain/range
of the relation.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D126326
This patch makes sure that the address is dereferenced to its value in the
omp.atomic.write operation.
Reviewed By: kiranchandramohan, peixin
Differential Revision: https://reviews.llvm.org/D126272
This suppresses an annoying warning when building MLIR.
```
warning: base class ‘class mlir::PassWrapper<{anonymous}::TestStatisticPass,
mlir::OperationPass<void> >’ should be explicitly initialized in the copy constructor [-Wextra]
```
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D126209
This commit breaks down diagnostic string literals so that the attribute
name and enumerator names can be shared with the stringify utility
function, and so that the "expected ", " to be one of ", and ", " fragments
can be shared between different enum-related diagnostics.
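As a rough illustration of the sharing (hypothetical helper and names; the
actual generated code differs):
```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/StringExtras.h"
#include "mlir/IR/Diagnostics.h"

// The fixed fragments are emitted once and combined with the
// stringified enumerator names, instead of duplicating a full string
// literal for every enum.
static mlir::InFlightDiagnostic
emitEnumError(mlir::Location loc, llvm::StringRef attrName,
              llvm::ArrayRef<llvm::StringRef> enumNames) {
  return mlir::emitError(loc) << "expected " << attrName
                              << " to be one of "
                              << llvm::join(enumNames, ", ");
}
```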
Differential Revision: https://reviews.llvm.org/D125938
Add lowering for cases where the reduction dimension is fully unrolled.
It is common to unroll the reduction dimension, therefore we would want
to lower the contractions to an elementwise vector op in this case.
Differential Revision: https://reviews.llvm.org/D126120
The tests actually require the target triple to match the host, rather than just having the host in the list of available targets. This change removes `llvm_has_native_target` and instead uses the `native` feature from the lit configuration.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D126011
TOSA's depthwise_conv2d operation includes a reshape to add the implicit x1 dimension.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D126212
AsyncCopyOp lowering converted "size in elements" to "size in bytes"
assuming the element type size is at least one byte. This removes
that restriction, allowing for types such as i4 and b1 to be handled
correctly.
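A sketch of the corrected computation in plain C++ (hypothetical names):
deriving the byte count from the total bit count lets sub-byte element
types divide cleanly.
```cpp
#include <cstdint>

// For i4, 8 elements => (8 * 4) / 8 = 4 bytes; a bytes-per-element
// multiply cannot express element types narrower than one byte.
int64_t copySizeInBytes(int64_t numElements, int64_t elementBitWidth) {
  return (numElements * elementBitWidth) / 8;
}
```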
Differential Revision: https://reviews.llvm.org/D125838
This change adds missing support for the i4 data type. Tests are added
to ensure proper lowering of an nvgpu.mma.sync operation targeting the
16x8x64xi4 and 16x8x32xi4 MMA variants in the NVVM dialect.
Differential Revision: https://reviews.llvm.org/D126092
Also fixes integration of the pass into One-Shot Bufferize and adds additional test cases.
BufferResultsToOutParams can be used with "identity-layout-map" and "fully-dynamic-layout-map". "infer-layout-map" is not supported.
Differential Revision: https://reviews.llvm.org/D125636
No longer pass static dim sizes as an attribute. This was redundant and required extra checks in the verifier. This change also makes the op symmetrical to memref::AllocOp.
Differential Revision: https://reviews.llvm.org/D126178
This diff modifies `mlir-tblgen` to generate Python Operation class `__init__()`
functions that use Python keyword-only arguments.
Previously, all `__init__()` function arguments were positional. Python code to
create MLIR Operations was required to provide values for ALL builder arguments,
including optional arguments (attributes and operands). Callers that did not
provide, for example, an optional attribute would be forced to provide `None`
as an argument for EACH optional attribute. Proposed changes in this diff use
`tblgen` record information (as provided by ODS) to generate keyword arguments
for:
- optional operands
- optional attributes (which includes unit attributes)
- default-valued attributes
These `__init__()` function keyword arguments have default `None` values (i.e.
the argument form is `optionalAttr=None`), allowing callers to create Operations
more easily.
Note that since optional arguments become keyword-only arguments (since they are
placed after the bare `*` argument), this diff will require ALL optional
operands and attributes to be provided using explicit keyword syntax. This may,
in the short term, break any out-of-tree Python code that provided values via
positional arguments. However, in the long term, it seems that requiring
keywords for optional arguments will be more robust to operation changes that
add arguments.
Tests were modified to reflect the updated Operation builder calling convention.
This diff partially addresses the requests made in the github issue below.
https://github.com/llvm/llvm-project/issues/54932
Reviewed By: stellaraccident, mikeurbach
Differential Revision: https://reviews.llvm.org/D124717
This patch updates asserts in IntegerRelation::isEqual and
IntegerRelation::isCompatible to allow these functions when the number of
local identifiers differs. This change is done to reflect the
algorithmic changes made before this patch.
One of the ShuffleVectorOp::build functions checks if the incoming
vector operand is a scalable vector by casting its type to
mlir::VectorType first. However, in some cases the operand is not
necessarily an mlir::VectorType (e.g. it might be an LLVMVectorType).
This patch fixes the issue by using the dedicated
`LLVM::isScalableVectorType` function to determine whether the incoming
vector is scalable.
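Sketched as a hypothetical helper (the surrounding builder logic is
omitted):
```cpp
#include "mlir/Dialect/LLVMIR/LLVMTypes.h"
#include "mlir/IR/Value.h"

// Before (paraphrased): cast the operand type to mlir::VectorType and
// ask it about scalability -- this breaks when the operand carries an
// LLVMVectorType instead.
// After: the dedicated helper understands both kinds of vector type.
static bool isScalableOperand(mlir::Value v1) {
  return mlir::LLVM::isScalableVectorType(v1.getType());
}
```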
Differential Revision: https://reviews.llvm.org/D125818
Add support for translating from llvm::Select, llvm::FNeg, and llvm::Unreachable.
This patch also cleans up (NFC) the opcode map for simple instructions and
adds `// clang-format off/on` comments to prevent those lines from being
churned by clang-format between commits.
Differential Revision: https://reviews.llvm.org/D125817
This change adds a new op `alloc_tensor` to the bufferization dialect. During bufferization, this op is always lowered to a buffer allocation (unless it is "eliminated" by a pre-processing pass). It is useful to have such an op in tensor land, because it allows users to model tensor SSA use-def chains (which drive bufferization decisions) and because tensor SSA use-def chains can be analyzed by One-Shot Bufferize, while memref values cannot.
This change also replaces all uses of linalg.init_tensor in bufferization-related code with bufferization.alloc_tensor.
linalg.init_tensor and bufferization.alloc_tensor are similar, but the purpose of the former one is just to carry a shape. It does not indicate a memory allocation.
linalg.init_tensor is not suitable for modelling SSA use-def chains for bufferization purposes, because linalg.init_tensor is marked as not having side effects (in contrast to alloc_tensor). As such, it is legal to move linalg.init_tensor ops around/CSE them/etc. This is not desirable for alloc_tensor; it represents an explicit buffer allocation while still in tensor land and such allocations should not suddenly disappear or get moved around when running the canonicalizer/CSE/etc.
Differential Revision: https://reviews.llvm.org/D126003
This change adds the option to lower to NvGpu dialect ops during the
VectorToGPU conversion pass. Because this transformation reuses
existing VectorToGPU logic, a separate VectorToNvGpu conversion pass is
not created. The option `use-nvgpu` is added to the VectorToGPU pass.
When this is true, the pass will attempt to convert slices rooted at
`vector.contract` operations into `nvgpu.mma.sync` ops, and
`vector.transfer_read` ops are converted to either `nvgpu.ldmatrix` or
one or more `vector.load` operations. The specific data loaded will
depend on the thread id within a subgroup (warp). These index
calculations depend on data type and shape of the MMA op
according to the downstream PTX specification. The code for supporting
these details is separated into `NvGpuSupport.cpp|h`.
Differential Revision: https://reviews.llvm.org/D122940
For the hypothetical "a.b.c" op printed within a region that declares "a" as
the default dialect, MLIR would currently elide the "a." prefix and only print
"b.c". However, this becomes ambiguous while parsing as "b.c" may be exist as
the "c" op in the "b" dialect. If it does not, the parsing currently fails. Do
not elide the default dialect if the op name contains further dots to avoid the
ambiguity.
See https://discourse.llvm.org/t/dropping-dialect-prefix-for-ops-with-multiple-dots-in-the-name/62562
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D125975
By closing over the `rank` itself rather than `this`, we save a method call on each iteration. A minor optimization, but one that adds up.
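Illustratively (hypothetical names), the change is from capturing `this` to
capturing the value itself:
```cpp
#include <cstdint>
#include <functional>

struct Example {
  uint64_t rank;
  uint64_t getRank() const { return rank; }

  std::function<bool(uint64_t)> makeDimChecker() const {
    // Before: [this](uint64_t d) { return d < getRank(); }
    // -- one accessor call through `this` per invocation.
    const uint64_t rank = getRank(); // hoisted once
    return [rank](uint64_t d) { return d < rank; };
  }
};
```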
Depends On D126016
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D126019
Since these are unused, I've removed them from the configuration, so that it can be easier to read and follow.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D125132
The approach I took was to define a dialect 'extern' attribute that a GlobalOp can take as a value to signify external linkage. I think this approach should compose well and should also work with wherever the OpaqueElements work goes in the future (since that is just another kind of attribute). I special cased the GlobalOp parser/printer for this case because it is significantly easier on the eyes.
In the discussion, Jeff Niu had proposed an alternative syntax for GlobalOp that I ended up not taking. I did try to implement it but a) I don't think it made anything easier to read in the common case, and b) it made the parsing/printing logic a lot more complicated (I think I would need a completely custom parser/printer to do it well). Please have a look at the common cases where the global type and initial value type match: I don't think how I have it is too bad. The less common cases seem ok to me.
I chose to only implement the direct, constant load op since that is non side effecting and there was still discussion pending on that.
Differential Revision: https://reviews.llvm.org/D124318