This allows for reusing the same output channel when the extension reloads after updating the server. Currently, whenever the extension restarts, a new output channel is created (which can lead to a large number of seemingly dead output channels).
Quite a few things were out of date or simply not well organized. This revision updates the extension name, repo, icon, and many other components in preparation for publishing the extension to the marketplace.
This revision adds detection for changes to either the mlir-lsp-server binary or the setting, and prompts the user to restart the server. Whether the user gets prompted is a configurable setting in the extension, and this setting may be updated based on the user's response to the prompt.
Differential Revision: https://reviews.llvm.org/D104501
Introduce the execute_region op that is able to hold a region which it
executes exactly once. The op encapsulates a CFG within itself while
isolating it from the surrounding control flow. Proposal discussed here:
https://llvm.discourse.group/t/introduce-std-inlined-call-op-proposal/282
execute_region enables one to inline a function without lowering out all
other higher-level control flow constructs (affine.for/if, scf.for/if)
to the flat list of blocks / CFG form. It thus keeps the benefits of
transforms on higher-level control flow ops available in the presence of
inlined calls. The inlined calls continue to benefit from propagation of
SSA values across their top boundary, and functions won't have to remain
outlined for longer than desired. Abstractions like affine
execute_regions and lambdas with implicit captures could be lowered to
this op without first lowering out structured loops/ifs or outlining.
Two potential early use cases are: (1) an early inliner (which can
inline functions by introducing execute_region ops), and (2) lowering of
an affine.execute_region, which cleanly maps to an scf.execute_region
when going from the affine dialect to the scf dialect.
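As a sketch (the function and value names here are invented; the
assembly mirrors the op's round-trip tests), the op encapsulates an
internal CFG while still reading SSA values defined above it:

  // Select one of two values via an internal CFG that stays
  // encapsulated inside the region; %r is yielded to the outside.
  func @select_value(%cond: i1, %a: i64, %b: i64) -> i64 {
    %r = scf.execute_region -> i64 {
      cond_br %cond, ^bb1, ^bb2
    ^bb1:
      br ^bb3(%a : i64)
    ^bb2:
      br ^bb3(%b : i64)
    ^bb3(%v: i64):
      scf.yield %v : i64
    }
    return %r : i64
  }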
Differential Revision: https://reviews.llvm.org/D75837
This utilizes the mlir-lsp server to provide language services for MLIR files opened in vscode. The extension currently supports syntax highlighting, as well as tracking definitions/uses/source locations for SSA values and blocks.
Differential Revision: https://reviews.llvm.org/D100607
Simple Jupyter kernel using mlir-opt and a reproducer to run passes.
Useful for local experimentation & generating examples. The export to
markdown from here is not immediately useful, nor did I define a
CodeMirror syntax to make the HTML output prettier. It only supports one
level of history (e.g., `_`), as I was mostly using it to expand a
pipeline one pass at a time, and that was all I needed.
I placed this in the utils directory next to the editor & debugger utils.
Differential Revision: https://reviews.llvm.org/D95742
Previously we only autogenerated the availability for ops that
directly instantiate `SPV_Op` and expected other subclasses of
`SPV_Op` to define aggregated availability for all ops. This is
quite error prone, and we can miss capabilities for certain ops.
It's also arguable whether we should have multiple levels of
subclasses and try to deduplicate too much: having the availability
directly on the op is explicit and clear. A few extra lines of
declarative code are fine.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D95236
Define OrderedOp and UnorderedOp instructions in SPIR-V and convert
cmpf operations with the `ord` and `uno` predicates to these
instructions, respectively.
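For example (a hedged sketch with invented value names; the exact
cmpf predicate spelling has varied across versions):

  // std dialect input: ordered / unordered float comparisons.
  %o = cmpf "ord", %a, %b : f32
  %u = cmpf "uno", %a, %b : f32
  // After conversion to the SPIR-V dialect:
  %o2 = spv.Ordered %a, %b : f32
  %u2 = spv.Unordered %a, %b : f32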
Differential Revision: https://reviews.llvm.org/D95098
The SPIR-V spec uses OpSpecConstantOp, and using an inconsistent name
makes the dialect generation scripts fail. Update to use the right
operation name, and fix the autogeneration scripts as well.
Differential Revision: https://reviews.llvm.org/D95097
This reverts commit 0d48d265db.
This reapplies the following commit, with a fix for CAPI/ir.c:
[mlir] Start splitting the `tensor` dialect out of `std`.
This starts by moving `std.extract_element` to `tensor.extract` (this
mirrors the naming of `vector.extract`).
Curiously, `std.extract_element` supposedly works on vectors as well,
and this patch removes that functionality. I would tend to do that in
a separate patch, but I couldn't find any downstream users relying on
this, and the fact that we have `vector.extract` made it seem safe
enough to lump it in here.
This also sets up the `tensor` dialect as a dependency of the `std`
dialect, as some ops that currently live in `std` depend on
`tensor.extract` via their canonicalization patterns.
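Concretely (a small sketch with invented names), the rename amounts to:

  %c0 = constant 0 : index
  // Before the split: %e = extract_element %t[%c0] : tensor<4xf32>
  %e = tensor.extract %t[%c0] : tensor<4xf32>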
Part of RFC: https://llvm.discourse.group/t/rfc-split-the-tensor-dialect-from-std/2347/2
Differential Revision: https://reviews.llvm.org/D92991
This is part of a larger refactoring that better consolidates the builtin structures under the BuiltinDialect. This also removes the problematic "standard" naming that clashes with the "standard" dialect, which is not defined within IR/. A temporary forward is placed in StandardTypes.h to allow time for downstream users to replace references.
Differential Revision: https://reviews.llvm.org/D92435
The TypeID instance was moved in D89153.
It wasn't caught that this broke the MLIR pretty printers because pre-merge checks don't run check-debuginfo.
To avoid disabling all MLIR printers in case this happens again, catch the exception.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D90191
This is the first bit from D73546. Primarily setting up the corresponding test. Will add more pretty printers in a separate revision.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D86937
This patch adds a new CLI argument to the automation script to generate
a report of the current SPIR-V spec instruction coverage. It dumps a
YAML string with the coverage information to standard output.
Differential Revision: https://reviews.llvm.org/D82006
This commit changes the separator line for dividing auto-generated
docs from spec and manually added appendix from "### Custom assembly
form" to "<!-- End of AutoGen section -->". This is in preparation
to use the declarative assembly form in MLIR core. We will replace
more and more manually written assembly forms with autogenerated ones.
Differential Revision: https://reviews.llvm.org/D77158
In SPIR-V, when a new version is introduced, it is possible some
existing extensions will be incorporated into it so that it becomes
implicitly declared when targeting the new version. This affects
conversion target specification because we need to take this into
account when deciding which extensions are allowed.
A capability may also imply other capabilities; for example, the
`Shader` capability implies the `Matrix` capability. This should also
be taken into consideration when preparing the conversion target:
when we specify that a capability is allowed, all of its recursively
implied capabilities are also allowed.
This commit adds utility functions to query implied extensions for
a given version and implied capabilities for a given capability,
and updates SPIRVConversionTarget to use them.
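As an illustration (the attribute notation below follows the dialect's
later #spv.vce form and is only approximate), a target environment
listing `Shader` also makes the implied `Matrix` capability available
to the conversion target:

  // Illustrative only: targeting v1.1 implicitly enables extensions
  // folded into v1.1, and allowing Shader transitively allows Matrix.
  #spv.target_env<#spv.vce<v1.1, [Shader], []>, {}>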
This commit also fixes a bug in the availability spec. When a symbol
(op or enum case) can be enabled by an extension, we should drop
its minimum version requirement: being enabled by an extension
naturally means the symbol can be used with *any* SPIR-V version
as long as the extension is supported. The grammar still encodes
the 'version' field for such cases, but it should be interpreted
in a different way: rather than stating a minimum version
requirement, it says the symbol becomes core at that specific
version.
Differential Revision: https://reviews.llvm.org/D72765
By default, for an enum attribute, we generate a list of equality
comparisons for all supported cases inside its predicate. This list
can be fairly large for certain SPIR-V enum attributes. Instead, we
already have such a list generated by EnumsGen in the symbolize
functions. Leverage that to simplify the generated C++ code.
Differential Revision: https://reviews.llvm.org/D72763
Certain SPIR-V capabilities are only available in certain SPIR-V versions
or extensions. A SPIR-V capability may also implicitly declare other
capabilities.
This commit updates gen_spirv_dialect.py to support generating such
information into SPIRVBase.td. It requires us to topologically sort
all capabilities because now a capability can refer to another one.
This commit also registers a few extensions because their symbols are
used by capability availability.
Note that this commit hasn't updated SPIRVConversionTarget to take
these relationships into consideration yet. That will be done in a
follow-up commit.
Differential Revision: https://reviews.llvm.org/D72760
This commit updates gen_spirv_dialect.py to query the grammar and
generate availability spec for various enum attribute definitions
and all defined ops.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D72095
This CL updates SPIR-V.md to reflect recent developments
in the SPIR-V dialect and its conversions.
Along the way, also updates the doc for define_inst.sh.
PiperOrigin-RevId: 286933546
Introduce affine.prefetch: an op to prefetch using a multi-dimensional
subscript on a memref. It is similar to affine.load but has no effect
on semantics, only on performance.
Provide lowering through std.prefetch and llvm.prefetch, mapping to
LLVM's prefetch intrinsic. All attributes are reflected through the
lowering: locality hint, read/write, and instruction/data cache.
affine.prefetch %0[%i, %j + 5], false, 3, true : memref<400x400xi32>
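A hedged sketch of the lowered counterpart (here `false, 3, true` stand
for isWrite, localityHint, and isDataCache; the affine subscript is
first materialized as explicit index arithmetic):

  %c5 = constant 5 : index
  %j5 = addi %j, %c5 : index
  // read = not a write, locality<3> = highest temporal locality,
  // data = data cache rather than instruction cache.
  prefetch %0[%i, %j5], read, locality<3>, data : memref<400x400xi32>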
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#225
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/225 from bondhugula:prefetch 4c3b4e93bc64d9a5719504e6d6e1657818a2ead0
PiperOrigin-RevId: 286212997
These come from a non-standard extension that is not available on GitHub, so they
only clutter the documentation source with {.mlir} or {.ebnf} tags.
PiperOrigin-RevId: 284733003