This reverts commit e9b87f43bd.
There are issues with macros generating macros without an obvious simple fix
so I'm going to revert this and try something different.
New projects (particularly out-of-tree ones) have a tendency to hijack the
existing LLVM configuration options and build targets (add_llvm_library,
add_llvm_tool). This can lead to confusion:
1) When querying a configuration variable, do we care about how LLVM was
configured, or how these options were configured for the out-of-tree project?
2) LLVM has lots of defaults which are easy to miss
(e.g. LLVM_BUILD_TOOLS=ON). These options all need to be duplicated in the
project's CMakeLists.txt, as sketched below.
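A minimal sketch of this duplication, with a hypothetical project name:

option(MYPROJ_BUILD_TOOLS "Build the project tools" ON)  # mirrors LLVM_BUILD_TOOLS=ON
option(MYPROJ_INCLUDE_TESTS "Generate build targets for tests" ON)  # assumed to mirror LLVM_INCLUDE_TESTS=ON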
In addition, with LLVM Incubators coming online, we need better ways for these
incubators to do things the "LLVM way" without a lot of futzing. Ideally, this
would happen in a way that eases importing into the LLVM monorepo when
projects mature.
This patch creates some generic infrastructure in llvm/cmake/modules and
refactors MLIR to use this infrastructure. This should expand to include
add_xxx_library, which is by far the most complicated bit of building a
project correctly, since it has to deal with lots of shared library
configuration bits. (MLIR currently hijacks the LLVM infrastructure for
building libMLIR.so, so this needs to get refactored anyway.)
Differential Revision: https://reviews.llvm.org/D85140
Introduce an initial version of C API for MLIR core IR components: Value, Type,
Attribute, Operation, Region, Block, Location. These APIs allow for both
inspection and creation of the IR in the generic form, and are intended for wrapping
in high-level library- and language-specific constructs. At this point, there
is no stability guarantee provided for the API.
Reviewed By: stellaraccident, lattner
Differential Revision: https://reviews.llvm.org/D83310
Summary:
* Native '_mlir' extension module.
* Python mlir/__init__.py trampoline module.
* Lit test that checks a message.
* Uses some cmake configurations that have worked for me in the past but likely need further elaboration (see the sketch below).
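A minimal configuration sketch (the option name is an assumption based on this patch's CMake plumbing):

cmake ... \
  -DMLIR_BINDINGS_PYTHON_ENABLED=ON \
  ...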
Subscribers: mgorny, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, Kayjukh, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D83279
Summary:
A previous submission of new tests accidentally turned this ON.
The tests should be opt-in.
To build with MLIR integration tests enabled, pass the following to cmake:

cmake ... \
  -DMLIR_INCLUDE_INTEGRATION_TESTS=ON \
  ...
Reviewers: mehdi_amini
Subscribers: mgorny, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D81878
Summary:
This CL introduces an integration test directory for MLIR in general, with
vector dialect integration tests in particular as a first working suite. To
run all the integration tests (and currently just the vector suite):
$ cmake --build . --target check-mlir-integration
[0/1] Running the MLIR integration tests
Testing Time: 0.24s
Passed: 22
This is a general call to contribute more tests and other suites to this
integration test directory: end-to-end examples that may be too heavy for the
regular test directory, but that should be run occasionally to verify the
health of MLIR.
Background discussion at:
https://llvm.discourse.group/t/vectorops-rfc-add-suite-of-integration-tests-for-vector-dialect-operations/1213/
Reviewers: nicolasvasilache, reidtatge, andydavis1, rriddle, ftynse, mehdi_amini, jpienaar, stephenneuendorffer
Reviewed By: nicolasvasilache, stephenneuendorffer
Subscribers: mgorny, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D81626
Summary:
`mlir-rocm-runner` is introduced in this commit to execute GPU modules on the
ROCm platform. A small wrapper encapsulating ROCm's HIP runtime API is also
included in the commit.
Due to behavior of ROCm, raw pointers inside memrefs passed to `gpu.launch`
must be modified on the host side to properly capture the pointer values
addressable on the GPU.
LLVM MC is used to assemble the AMD GCN ISA coming out of
`ConvertGPUKernelToBlobPass` into binary form, and LLD is used to produce a
shared ELF object which can be loaded by the ROCm HIP runtime.
gfx900 is the default target used right now, although it can be altered via
an option in `mlir-rocm-runner`. Future revisions may consider using the ROCm
Agent Enumerator to detect the right target on the system.
Note that AMDGPU Code Object V2 is used in this revision. Future enhancements
may upgrade to AMDGPU Code Object V3.
Bitcode libraries in ROCm-Device-Libs, which implement math routines exposed
in the `rocdl` dialect, are not yet linked; this is left as a TODO in the
logic.
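A hypothetical invocation, mirroring the other JIT runners (the exact flags
and the wrapper library name are assumptions):

mlir-rocm-runner input.mlir \
  --shared-libs=lib/librocm-runtime-wrappers.so \
  --entry-point-result=void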
Reviewers: herhut
Subscribers: mgorny, tpr, dexonsmith, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits
Tags: #mlir, #llvm
Differential Revision: https://reviews.llvm.org/D80676
Make the ConvertKernelFuncToCubin pass generic:
- Rename to ConvertKernelFuncToBlob.
- Allow specifying triple, target chip, target features.
- Initialization of the LLVM backend is supplied by a callback function.
- The lowering process from MLIR module to LLVM module goes via another callback.
- Change mlir-cuda-runner to adopt the revised pass.
- Add new tests for lowering to ROCm HSA code object (HSACO).
- Tests for CUDA and ROCm are kept in separate directories.
Differential Revision: https://reviews.llvm.org/D80142
In cmake, dependencies on generated files require some sophistication in the build system. At build time, files are parsed to determine which headers they depend on and these dependencies are injected into the build system. This works well with ninja, but has some constraints with the makefile generator. According to the cmake documentation, this only works reliably within the same directory.
This patch expands the usage of mlir-headers to include all generated headers and adds an mlir-generic-headers target which triggers generation of dialect-independent headers. These targets are used to express dependencies on generated headers. This is mostly handled in AddMLIR.cmake and only a few CMakeLists.txt files need to change.
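For example, an out-of-tree dialect library (name hypothetical) can now
express its dependency on generated headers as:

add_dependencies(MLIRMyDialect mlir-headers)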
Differential Revision: https://reviews.llvm.org/D79242
Define MLIR_MAIN_INCLUDE_DIR, as it was not set anywhere.
Set MLIR_MAIN_SRC_DIR to the actual "source directory", and not the
"include directory" (as currently set).
Differential Revision: https://reviews.llvm.org/D77943
Setting MLIR_TABLEGEN_EXE would prevent building the native tool, which is used when cross-compiling.
Differential Revision: https://reviews.llvm.org/D75299
Putting this up mainly for discussion on
how this should be done. I am interested in MLIR from
the Julia side and we currently have a strong preference
for dynamically linking against the LLVM shared library,
and would like to have an MLIR shared library.
This patch adds a new cmake function add_mlir_library()
which accumulates a list of targets to be compiled into
libMLIR.so. Note that not all libraries make sense to
be compiled into libMLIR.so. In particular, we want
to avoid libraries which primarily exist to support
certain tools (such as mlir-opt and mlir-cpu-runner).
Note that the resulting libMLIR.so depends on LLVM, but
does not contain any LLVM components. As a result, it
is necessary to link with libLLVM.so to avoid linkage
errors. So, libMLIR.so requires LLVM_BUILD_LLVM_DYLIB=on
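A minimal configuration sketch using that flag:

cmake ... \
  -DLLVM_BUILD_LLVM_DYLIB=ON \
  ...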
FYI, currently it appears that LLVM_LINK_LLVM_DYLIB is broken,
because mlir-tblgen is linked against both libLLVM.so and
independent LLVM components.
A previous version of this patch broke dependencies on TableGen
targets. This appears to be because it compiled all
libraries as OBJECT libraries (probably because cmake
generates different target names). Avoiding object
libraries results in correct dependencies.
(updated by Stephen Neuendorffer)
Differential Revision: https://reviews.llvm.org/D73130
Instead of creating extra libraries we don't really need, collect a
list of all dialects and use that instead.
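A sketch of how a tool might consume such a list (the global property name is
an assumption):

get_property(dialect_libs GLOBAL PROPERTY MLIR_DIALECT_LIBS)
target_link_libraries(mlir-opt PRIVATE ${dialect_libs})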
Differential Revision: https://reviews.llvm.org/D75221
Add an initial version of the mlir-vulkan-runner execution driver:
a command line utility that executes an MLIR file on Vulkan by
translating the MLIR GPU module to SPIR-V and the host part to LLVM IR
before JIT-compiling and executing the latter.
Differential Revision: https://reviews.llvm.org/D72696
This pass would currently build, but fail to run when this backend isn't
linked in. On the other hand, we'd like it to initialize only the NVPTX
backend, which isn't possible if we continue to build it without the
backend available. Instead of building a broken configuration, let's
skip building the pass entirely.
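A sketch of the kind of guard this implies (variable and directory names
assumed):

if ("NVPTX" IN_LIST LLVM_TARGETS_TO_BUILD)
  add_subdirectory(GPUToCUBIN)  # hypothetical directory holding the pass
endif()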
Differential Revision: https://reviews.llvm.org/D74592
Summary: Right now, the path for each lib in whole_archive_link is not a full path when MSVC is used as the compiler, and it's not even the correct path when VS is used to build. This patch sets the lib path to a full path using CMAKE_CFG_INTDIR, which means the path will be correct regardless of whether ninja, make, or VS is used, and it will always be a full path.
Reviewers: denis13, jpienaar
Reviewed By: jpienaar
Subscribers: mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, llvm-commits, asmith
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72403
This is needed to consume mlir after it has been installed out of the source
tree. Without this, consuming mlir results in a build error.
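A hypothetical downstream CMakeLists.txt consuming the installed package:

find_package(MLIR REQUIRED CONFIG)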
Differential Revision: https://reviews.llvm.org/D72232
Summary:
Prior to this, "ninja install-mlir-headers" failed with an error indicating
the missing target. Verified that from a clean build, the installed
headers include generated files.
Subscribers: mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72045
The issue is that /WHOLEARCHIVE is interpreted differently in LLD, which needs the same exact path as the .lib; whereas link.exe can take the library name, without a path or extension, if that was already supplied on the cmd-line. I'll write a follow-up patch to fix the issue in LLD.
Currently when you build the `install` target, TableGen files don't get
installed.
TableGen files are needed when authoring new MLIR dialects, but right
now they're missing when using the pre-built binaries.
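With this patch, the TableGen files are installed as part of the install
target:

cmake --build . --target install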
Differential Revision: https://reviews.llvm.org/D71958
This CL allows specifying an alternative name for the .td file that is used to generate the documentation for a dialect. This is necessary for a dialect like Linalg, which has different "types" of ops that are used in different contexts.
This CL also restructures the Linalg documentation and renames LinalgLibraryOps -> LinalgStructuredOps, but is otherwise NFC.
PiperOrigin-RevId: 286450414
This tool allows executing MLIR IR snippets written in the GPU dialect
on a CUDA-capable GPU. For this to work, a working CUDA install is required
and the build has to be configured with MLIR_CUDA_RUNNER_ENABLED set to 1.
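A configuration sketch using the flag above:

cmake ... \
  -DMLIR_CUDA_RUNNER_ENABLED=1 \
  ...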
PiperOrigin-RevId: 256551415
The actual transformation from PTX source to a CUDA binary is now factored out,
enabling compiling and testing the transformations independently of a CUDA
runtime.
MLIR still has to be built with NVPTX target support for the conversions to
be built and tested.
PiperOrigin-RevId: 255167139
The -all_load flag applies to all subsequent libraries on the command line,
while the -force_load flag only applies to the next library. Using the latter
allows us to selectively force-load only the specific libraries we want.
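For illustration, with hypothetical archives (Darwin linker flags passed
through the compiler driver):

clang ... -Wl,-all_load libA.a libB.a      # force-loads both archives
clang ... -Wl,-force_load,libA.a libB.a    # force-loads only libA.a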
--
PiperOrigin-RevId: 244949770
This is making up for some differences in standard library and linker flags.
It also gets rid of the requirement to build with RTTI.
--
PiperOrigin-RevId: 241348845