The InlineAsmOp mirrors the underlying LLVM semantics with one notable
exception: the embedded `asm_string` is not allowed to define or reference
any symbol or global variable; only the operands of the op may be read,
written, or referenced.
Attempting to define or reference any symbol or global variable is
considered undefined behavior at this time.
The asm dialect is currently specified as an integer (0 [default] for the AT&T dialect, 1 for the Intel dialect) to circumvent the ODS limitation on string enums.
Translation to LLVM IR is provided; it requires the asm constraints string to be well-formed with respect to the in/out operands. No check is performed on the asm_string itself.
An InlineAsm instruction in LLVM is a special call operation to a function that is constructed on the fly.
It does not fit the current model of MLIR calls with symbols.
As a consequence, the current implementation constructs the function type in ModuleTranslation.cpp.
This should be refactored in the future.
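For illustration, a minimal sketch of that on-the-fly construction using the standard LLVM C++ API (the asm string, constraints, and helper name are made up):
```
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"

// The "callee" is an InlineAsm value created against a FunctionType
// constructed on the fly; no symbol is ever added to the module.
llvm::Value *emitIncViaAsm(llvm::IRBuilder<> &builder, llvm::Value *arg) {
  llvm::FunctionType *ft = llvm::FunctionType::get(
      builder.getInt32Ty(), {builder.getInt32Ty()}, /*isVarArg=*/false);
  llvm::InlineAsm *ia =
      llvm::InlineAsm::get(ft, "incl $0", "=r,0", /*hasSideEffects=*/false,
                           /*isAlignStack=*/false, llvm::InlineAsm::AD_ATT);
  return builder.CreateCall(ft, ia, {arg});
}
```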
The mlir-cpu-runner is augmented with the global initialization of the X86 asm parser to allow proper execution in JIT mode. Previously, only the X86 asm printer was initialized.
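A sketch of the difference (LLVMInitializeX86AsmPrinter/LLVMInitializeX86AsmParser are the real generated entry points; the wrapper function is illustrative):
```
// Generated target initializers from the X86 backend (available when the
// X86 target is built).
extern "C" void LLVMInitializeX86AsmPrinter(void);
extern "C" void LLVMInitializeX86AsmParser(void);

void initX86AsmSupport() {
  LLVMInitializeX86AsmPrinter(); // was already initialized
  LLVMInitializeX86AsmParser(); // now also initialized, so the JIT can
                                // parse the embedded asm_string
}
```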
Differential revision: https://reviews.llvm.org/D92166
Given the following case:
```
auto k() {
  return undef();
  return 1;
}
```
Prior to the patch, clang emits a `cannot initialize return object of type
'auto' with an rvalue of type 'int'` diagnostic on the second return statement
(because the return type of the function cannot be deduced from the first, contains-errors return).
This patch suppresses this error.
Differential Revision: https://reviews.llvm.org/D92211
A separate thread is not necessary, as we can do its work on the main
thread while waiting for the packet to arrive. This makes the code
easier to understand and debug (other simplifications are possible too,
but I'll leave those for separate patches). The new implementation also
avoids busy waiting.
If we decided to widen the IV with zext, then unsigned comparisons
should not prevent widening (and likewise, signed comparisons should not
prevent widening with sext).
The result of the comparison in the wider type does not change in this case.
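The underlying fact, as a standalone check rather than the pass itself:
```
#include <cassert>
#include <cstdint>

int main() {
  // Zero extension preserves unsigned order, so an unsigned comparison
  // computed on the zext-widened IV gives the same result as in the
  // narrow type (sign extension does the same for signed order).
  uint32_t a = 0x80000000u, b = 1u;
  assert((a < b) == (static_cast<uint64_t>(a) < static_cast<uint64_t>(b)));
  return 0;
}
```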
Differential Revision: https://reviews.llvm.org/D92207
Reviewed By: nikic
Stop checking the conversion from the naming class to the declaring class in a class member access.
This check does not appear to be backed by any rule in the standard (the
rule in question was likely removed over the years), and only ever
produces duplicate diagnostics. (It's also not meaningful because there
isn't a unique declaring class after the resolution of core issue 39.)
* If ODS redefines this, it is fine, but I have found this accessor to be universally useful in the old npcomp bindings, and I'm closing gaps that will let me switch.
Differential Revision: https://reviews.llvm.org/D92287
* Add capsule get/create for Attribute and Type, which already had capsule interop defined.
* Add capsule interop and get/create for Location.
* Add Location __eq__.
* Use get() and implicit cast to go from PyAttribute, PyType, PyLocation to MlirAttribute, MlirType, MlirLocation (bundled with this change because I didn't want to continue the pattern one more time).
Differential Revision: https://reviews.llvm.org/D92283
For --gc-sections, SmallVector<InputSection *, 256> -> SmallVector<InputSection *, 0> because the code bloat (1296 bytes) is not worthwhile (the saved reallocation is negligible).
For OutputSection::compressedData, N=1 is useless (for a compressed .debug_*, the size is always larger than 1).
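For context, a rough illustration of what N buys (not code from the patch):
```
#include "llvm/ADT/SmallVector.h"

// SmallVector<T, N> reserves space for N elements inside the object
// itself; N = 0 drops the inline storage entirely, so the object is
// small but every growth goes through the heap.
llvm::SmallVector<int *, 0> lean;  // no inline slots, no code/size bloat
llvm::SmallVector<int *, 8> quick; // avoids malloc while size() <= 8
```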
Information for pointer size/alignment/etc is queried a lot, but
the binary search based implementation makes this fairly slow.
Add an explicit check for address space zero and skip the search
in that case -- we need to specially handle the zero address space
anyway, as it serves as the fallback for all address spaces that
were not explicitly defined.
I initially wanted to simply replace the binary search with a
linear search, which would handle both address space zero and the
general case efficiently, but I was not sure whether there are
any degenerate targets that use more than a handful of declared
address spaces (in-tree, even AMDGPU only declares six).
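A hypothetical mirror of the lookup, with invented names and layout, to show the fast path:
```
#include <algorithm>
#include <cstdint>
#include <vector>

// Assumption for this sketch: specs are sorted by address space, and an
// entry for address space 0 always exists and serves as the fallback.
struct PointerSpec { uint32_t AddrSpace; uint32_t BitWidth; };

const PointerSpec &getPointerSpec(const std::vector<PointerSpec> &Specs,
                                  uint32_t AS) {
  if (AS == 0)
    return Specs.front(); // fast path: skip the binary search entirely
  auto I = std::lower_bound(
      Specs.begin(), Specs.end(), AS,
      [](const PointerSpec &S, uint32_t V) { return S.AddrSpace < V; });
  if (I != Specs.end() && I->AddrSpace == AS)
    return *I;
  return Specs.front(); // undeclared spaces fall back to AS 0
}
```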
Add an op with a mapping from ops to the corresponding shape functions for
those ops in the library, plus a mechanism to associate shape functions
with functions.
The mapping from operation to shape function is kept separate from the
shape functions themselves, as the operation is associated with the shape
function and not vice versa, and one could have a common library of
shape functions that can be used in different contexts.
Use fully qualified names, require a name for shape fn lib ops for
now, and use an explicit print/parse (based around the generated one and
the GPU module op ones).
This commit reverts d9da4c3e73. Fixes
missing headers (don't know how that was working locally).
Differential Revision: https://reviews.llvm.org/D91672
Interleave groups also depend on the values they store. Manage the
stored values as VPUser operands. This is currently NFC, but it is
required to allow VPlan transforms and to manage generated vector values
exclusively in VPTransformState.
As suggested in D92247 (and independent of whatever we decide to do there),
this code is confusing as-is. Hopefully, this is at least mildly better.
We might be able to do better still, but we have a function called
"removePredecessor" with this behavior:
"Note that this function does not actually remove the predecessor." (!)
x86-64 ILP32 mode (x32) uses 32-bit size_t, so share the code with ix86 to zero out padding bits, not with x86-64 LP64 mode.
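A sketch of the data-model split, using only standard predefined macros (not code from the patch):
```
#include <cstddef>

#if defined(__x86_64__) && !defined(__ILP32__)
// x86-64 LP64: 64-bit size_t.
static_assert(sizeof(std::size_t) == 8, "LP64: 64-bit size_t");
#elif defined(__x86_64__) || defined(__i386__)
// x32 (x86-64 with __ILP32__) and ix86 both have 32-bit size_t, so they
// share the padding-zeroing code path.
static_assert(sizeof(std::size_t) == 4, "32-bit size_t");
#endif
```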
Reviewed By: #libc, ldionne
Differential Revision: https://reviews.llvm.org/D91349
This cache went away in 73fdd99870
This time, the cache is periodically validated against disk, so config
is still mostly "live".
The per-file cache reuses FileCache, but the tree-of-file-caches is
duplicated from ConfigProvider. .clangd, .clang-tidy, .clang-format, and
compile_commands.json all have this pattern; we should extract it at some point.
TODO for now though.
Differential Revision: https://reviews.llvm.org/D92133
For recursive phis, we skip the recursive operands and check that
the remaining operands are NoAlias with an unknown size. Currently,
this is limited to inbounds GEPs with positive offsets, to
guarantee that the recursion only ever increases the pointer.
Make this more general by only requiring that the underlying object
of the phi operand is the phi itself, i.e. it is based on itself in
some way. To compensate, we need to use a beforeOrAfterPointer()
location size, as we no longer have the guarantee that the pointer
is strictly increasing.
This allows us to handle some additional cases like negative geps,
geps with dynamic offsets or geps that aren't inbounds.
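A hypothetical source-level example of the newly handled shape, where the pointer phi is based only on itself via a negative step:
```
// The pointer phi `p` is "based on itself": its only non-entry input is
// a GEP off `p` with a negative offset, a case previously rejected
// because the recursion was required to strictly increase the pointer.
int sumBackwards(int *end, int n) {
  int sum = 0;
  for (int *p = end; n--;) {
    p -= 1;    // negative step off the phi itself
    sum += *p; // BasicAA can now reason about `p` vs. other pointers
  }
  return sum;
}
```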
Differential Revision: https://reviews.llvm.org/D91914
Add an op with a mapping from ops to the corresponding shape functions for
those ops in the library, plus a mechanism to associate shape functions
with functions.
The mapping from operation to shape function is kept separate from the
shape functions themselves, as the operation is associated with the shape
function and not vice versa, and one could have a common library of
shape functions that can be used in different contexts.
Use fully qualified names, require a name for shape fn lib ops for
now, and use an explicit print/parse (based around the generated one and
the GPU module op ones).
Differential Revision: https://reviews.llvm.org/D91672
This retrieves CPU affinity via FreeBSD's cpuset(2) API, and makes LLVM
respect affinity settings configured by the user via the cpuset(1)
command.
In particular, this allows reducing the number of threads used on
machines with high core counts, which can interact badly with
parallelized build systems. This is particularly noticeable with lld,
which spawns lots of threads even for linking e.g. hello_world!
This fix is related to PR48193, but does not address the more fundamental
problem, which is that LLVM by default grabs as many CPUs and/or threads
as possible.
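A minimal sketch of the affinity query per cpuset(2), with illustrative naming and trimmed error handling:
```
#include <sys/param.h>
#include <sys/cpuset.h>

// Returns the number of CPUs in the calling process's affinity mask,
// as configured e.g. via `cpuset -l 0-3 <cmd>`; -1 on failure.
static int countAffineCPUs() {
  cpuset_t mask;
  CPU_ZERO(&mask);
  if (cpuset_getaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
                         sizeof(mask), &mask) != 0)
    return -1;
  return CPU_COUNT(&mask);
}
```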
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D92271
I took the "Permitted"/"Not Permitted" combo from the `Tag_ARM_ISA_use` case (GNU tools print "Yes").
Reviewed By: compnerd, MaskRay, simon_tatham
Differential Revision: https://reviews.llvm.org/D90305
This ensures that failures show up in regular builds, rather than only
when expensive checks are enabled.
Differential Revision: https://reviews.llvm.org/D91339
The build bots caught two additional pre-existing problems exposed by the test change part of my change https://reviews.llvm.org/D91339, when expensive checks are enabled. https://reviews.llvm.org/D91924 fixes one of them, this fixes the other.
FixupSetCC will change code in the form of
```
%setcc = SETCCr ...
%ext1 = MOVZX32rr8 %setcc
```
to
```
%zero = MOV32r0
%setcc = SETCCr ...
%ext2 = INSERT_SUBREG %zero, %setcc, %subreg.sub_8bit
```
and replace uses of %ext1 with %ext2.
The register class for %ext2 did not take into account any constraints on %ext1, which may have been required by its uses. This change ensures that the original constraints are honoured, by reusing %ext1 and further constraining it as needed instead of creating a new %ext2 register. This requires a slight reorganisation to account for the fact that the constraining may fail, in which case no changes should be made.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D91933
The build bots caught two additional pre-existing problems exposed by the test change part of my change https://reviews.llvm.org/D91339, when expensive checks are enabled. This fixes one of them.
X86 has CALL64r and CALL32r opcodes, where CALL64r takes a 64-bit register and CALL32r takes a 32-bit register. CALL64r can only be used in 64-bit mode; CALL32r can only be used in 32-bit mode. LLVM would assume that, after picking the appropriate CALLr opcode, a pointer-sized register would be a valid operand, but in x32 mode (a 64-bit mode with 32-bit pointers) it is invalid to pass a pointer directly to CALL64r; it needs to be extended to 64 bits first.
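Conceptually, with plain C++ standing in for the register-level fix:
```
#include <cstdint>

// On x32 a pointer is 32 bits, but CALL64r consumes a full 64-bit
// register, so the pointer value must be zero-extended first; the fix
// inserts the equivalent of this widening before the call.
uint64_t asCall64rOperand(uint32_t ptr32) {
  return static_cast<uint64_t>(ptr32);
}
```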
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D91924
Makes it easier to diagnose remote index issues with the --debug-origins flag.
Reviewed By: sammccall
Differential Revision: https://reviews.llvm.org/D92202