Split out the GPU ops library from GPU transforms. This allows libraries to
depend on GPU Ops without needing to build its transforms.
Differential Revision: https://reviews.llvm.org/D105472
The commit in question changed the syntax but did not update the runner
tests. This also required registering the MemRef dialect for the custom
parser to work correctly.
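For reference, a minimal sketch of the extra registration (header paths and
the registration API follow the usual MLIR conventions; treat the exact
details as assumptions):

  #include "mlir/Dialect/MemRef/IR/MemRef.h"
  #include "mlir/IR/DialectRegistry.h"

  // Make the MemRef dialect available so that the custom parser can
  // resolve memref ops and types appearing in the runner test inputs.
  static void registerRunnerDialects(mlir::DialectRegistry &registry) {
    registry.insert<mlir::memref::MemRefDialect>();
  }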
There is no need for the interface implementations to be exposed; opaque
registration functions are sufficient for all users, similarly to passes.
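The pattern, sketched for the LLVM dialect (declaration only; the precise
signature is reproduced from memory and may differ):

  // Public header: only an opaque registration entry point is visible;
  // the interface implementation stays in the .cpp file.
  namespace mlir {
  class DialectRegistry;

  // Registers the LLVM dialect translation interface with the given
  // registry, analogously to pass registration functions.
  void registerLLVMDialectTranslation(DialectRegistry &registry);
  } // namespace mlir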
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D97852
A series of preceding patches changed the mechanism for translating MLIR to
LLVM IR to use a dialect interface with delayed registration. It is no longer
necessary for specific dialects to derive from ModuleTranslation. Remove all
virtual methods from ModuleTranslation and factor out the entry point into a
free function.
Also perform some cleanups in ModuleTranslation internals.
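The resulting entry point can be used roughly as follows (the signature is
reproduced from memory; treat the details as an assumption):

  #include "mlir/IR/BuiltinOps.h"
  #include "mlir/Target/LLVMIR/Export.h"
  #include "llvm/IR/Module.h"
  #include <memory>

  // Translation is now a free function rather than a virtual method on
  // ModuleTranslation, so no dialect-specific subclass is needed.
  std::unique_ptr<llvm::Module> lowerToLLVMIR(mlir::ModuleOp module,
                                              llvm::LLVMContext &ctx) {
    return mlir::translateModuleToLLVMIR(module.getOperation(), ctx);
  }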
Depends On D96774
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D96775
The existing approach to translation to LLVM IR relies on a single
translation supporting the base LLVM dialect, extensible through inheritance
to support intrinsic-based dialects also derived from LLVM IR, such as NVVM
and AVX512. This approach does not scale well: it requires an additional
translation to be created for each new intrinsic-based dialect and does not
allow such dialects to mix in the same module, contrary to the rest of the
MLIR infrastructure. Furthermore, the OpenMP translation has ingrained itself
into the main translation mechanism.
Start refactoring the translation to LLVM IR to operate using dialect
interfaces. Each dialect that contains ops translatable to LLVM IR can
implement the interface for translating them, and the top-level translation
driver can operate on interfaces without knowing about specific dialects.
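A sketch of a per-dialect implementation under the new scheme (the interface
name matches the one introduced here; the hook signature is given from
memory, and the Foo dialect is a hypothetical placeholder):

  #include "mlir/Target/LLVMIR/LLVMTranslationInterface.h"

  namespace {
  // Translates ops of a hypothetical Foo dialect; the top-level driver
  // dispatches here based on the dialect of each operation it visits.
  class FooDialectLLVMIRTranslationInterface
      : public mlir::LLVMTranslationDialectInterface {
  public:
    using mlir::LLVMTranslationDialectInterface::
        LLVMTranslationDialectInterface;

    // Emits LLVM IR for a single Foo op through `builder`.
    mlir::LogicalResult convertOperation(
        mlir::Operation *op, llvm::IRBuilderBase &builder,
        mlir::LLVM::ModuleTranslation &moduleTranslation) const final {
      // ... map `op` to LLVM instructions or intrinsic calls ...
      return mlir::success();
    }
  };
  } // namespace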
Furthermore, the delayed dialect registration mechanism allows one to avoid a
dependency on LLVM IR in the dialect that is translated to it by implementing
the translation as a separate library and only registering it at the client
level.
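On the client side, registration then looks roughly like this (FooDialect
and its registration helper are hypothetical placeholders):

  // The dialect library itself has no LLVM IR dependency; only clients
  // that actually translate link the translation library and register
  // the interface through the registry.
  mlir::DialectRegistry registry;
  registry.insert<foo::FooDialect>();            // hypothetical dialect
  foo::registerFooDialectTranslation(registry);  // hypothetical helper
  mlir::MLIRContext context;
  context.appendDialectRegistry(registry);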
This change introduces the new mechanism and factors out the translation of the
"main" LLVM dialect. The remaining dialects will follow suit.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D96503
Historically, JitRunner has been registering all available dialects with the
context and depending on them without any real need. Make it take a registry
that contains only the dialects expected in the input, and stop linking in
all dialects.
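A minimal client after this change might look as follows (JitRunnerMain's
exact signature is an assumption based on the in-tree runners):

  #include "mlir/Dialect/LLVMIR/LLVMDialect.h"
  #include "mlir/ExecutionEngine/JitRunner.h"
  #include "llvm/Support/TargetSelect.h"

  int main(int argc, char **argv) {
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();

    // Register only the dialects expected in the input instead of
    // linking in and registering everything that is available.
    mlir::DialectRegistry registry;
    registry.insert<mlir::LLVM::LLVMDialect>();
    return mlir::JitRunnerMain(argc, argv, registry);
  }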
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D96436
This commit renames various SPIR-V related conversion files for
consistency. It drops the "Convert" prefix from several files and
fixes various comment headers.
Reviewed By: hanchung, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D93489
This commit shuffles SPIR-V code around to better follow MLIR
convention. Specifically,
* Created IR/, Transforms/, Linking/, and Utils/ subdirectories and
moved suitable code inside.
* Created SPIRVEnums.{h|cpp} for SPIR-V C/C++ enums generated from the
SPIR-V spec. Previously they were cluttered inside SPIRVTypes.{h|cpp}.
* Fixed include guards in various header files (both .h and .td).
* Moved serialization tests under test/Target/SPIRV.
* Renamed the TableGen backend -gen-spirv-op-utils to -gen-spirv-attr-utils,
as it only generates utility functions for attributes.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D93407
This patch introduces a SPIR-V runner. The aim is to run a GPU
kernel on a CPU via GPU -> SPIR-V -> LLVM conversions. This is a first
prototype, so more features will be added in due time.
- Overview
The runner follows a flow similar to the other in-tree runners. However,
having converted the kernel to SPIR-V, we encode the bind attributes of the
global variables that represent kernel arguments. The SPIR-V module is then
converted to LLVM. On the host side, we emulate passing the data to the
device by creating globals in the main module with the same symbolic names
as in the kernel module. These global variables are later linked with the
ones from the nested module. We copy data from the kernel arguments to the
globals, call the kernel function from the nested module, and then copy the
data back.
- Current state
At the moment, the runner is capable of running two modules, one nested in
the other. The kernel module must contain exactly one kernel function. Also,
the runner supports only rank-1 integer memref types as arguments (support
to be extended).
- Enhancement of JitRunner and ExecutionEngine
To translate nested modules to LLVM IR, JitRunner and ExecutionEngine were
altered to take an optional function reference (defaulting to `nullptr`)
that acts as a custom LLVM IR module builder, as sketched below. This makes
it possible to customize how an LLVM IR module is created from MLIR modules.
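A sketch of supplying such a builder (the hook's exact type and the way it
is plumbed through have varied over time, so the names and signatures below
are assumptions):

  #include "mlir/ExecutionEngine/JitRunner.h"
  #include "mlir/Target/LLVMIR/Export.h"
  #include "llvm/IR/Module.h"
  #include <memory>

  // Custom LLVM IR module builder: receives the top-level MLIR module,
  // translates the nested kernel module itself, links the result into
  // the host module, and returns the combined llvm::Module.
  static std::unique_ptr<llvm::Module>
  convertMLIRModule(mlir::Operation *module, llvm::LLVMContext &context) {
    // ... handle the nested module here; as a fallback, delegate to the
    // default translation ...
    return mlir::translateModuleToLLVMIR(module, context);
  }

  int main(int argc, char **argv) {
    mlir::DialectRegistry registry;
    // ... insert the dialects expected in the input ...
    mlir::JitRunnerConfig config;
    config.llvmModuleBuilder = convertMLIRModule;  // defaults to nullptr
    return mlir::JitRunnerMain(argc, argv, registry, config);
  }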
Reviewed By: ftynse, mravishankar
Differential Revision: https://reviews.llvm.org/D86108