Fix the unit tests for makeVisitor (added in 6d6f35eb7b), which mistakenly
compared the pointer values of two C-style strings rather than comparing
their contents.
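For illustration, a minimal sketch of the bug pattern (not the actual test code):
comparing two distinct C-style strings with == compares pointers, not contents.

    #include <cassert>
    #include <cstring>

    int main() {
      const char a[] = "visited";
      const char b[] = "visited";
      // Distinct arrays decay to distinct pointers, so == compares addresses.
      assert(a != b);
      // Comparing contents is what the tests intended.
      assert(std::strcmp(a, b) == 0);
    }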
Now that Lit supports regular expressions inside XFAIL & friends, it is
much easier to write Lit annotations based on the triple.
Differential Revision: https://reviews.llvm.org/D104747
Summary:
If we are in C++03 mode for some reason, and __builtin_va_copy is
available, then use it instead of erroring out because va_copy is not
available in C++03 mode.
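A minimal sketch of the intended fallback, assuming only that the compiler
provides __has_builtin and __builtin_va_copy (this is not the exact libc++ code):

    #include <stdarg.h>

    // If va_copy is unavailable (e.g. strict C++03 headers) but the compiler
    // offers __builtin_va_copy, fall back to the builtin instead of erroring out.
    #if !defined(va_copy) && defined(__has_builtin)
    #  if __has_builtin(__builtin_va_copy)
    #    define va_copy(dest, src) __builtin_va_copy(dest, src)
    #  endif
    #endif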
Reviewed by: ldionne
Differential Revision: https://reviews.llvm.org/D100336
This is required to run the tests under any configuration that uses
additional_features using a from-scratch config. That is the case of
e.g. the Debug mode (which uses LIBCXX-DEBUG-FIXME) and the tests on
Windows.
This follows up on D104665 (which added umulo handling alongside the existing uaddo case), and generalizes for the remaining overflow intrinsics.
I went to add analogous handling to LVI, and discovered that LVI already had a more general implementation. Instead, we can port what LVI does to instcombine. (For context, LVI uses makeExactNoWrapRegion to constrain the value 'x' in blocks reached after a branch on the condition `op.with.overflow(x, C).overflow`.)
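As an illustrative sketch (not the exact instcombine change), the constraint on
'x' along the no-overflow edge can be obtained with the same helper, here
assuming an unsigned add for concreteness:

    #include "llvm/ADT/APInt.h"
    #include "llvm/IR/ConstantRange.h"
    #include "llvm/IR/Instruction.h"
    #include "llvm/IR/Operator.h"

    // On the edge where `uadd.with.overflow(x, C).overflow` is false, 'x'
    // must lie in the region for which `x + C` does not wrap unsigned.
    llvm::ConstantRange noOverflowRangeForX(const llvm::APInt &C) {
      return llvm::ConstantRange::makeExactNoWrapRegion(
          llvm::Instruction::Add, C,
          llvm::OverflowingBinaryOperator::NoUnsignedWrap);
    }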
Differential Revision: https://reviews.llvm.org/D104932
A user reported an issue to me via email that Clang was accepting some
code that GCC was rejecting. After investigation, it turned out to be a
general problem of us failing to properly reject attributes written in
the type position in C when they don't apply to types. The root cause
was a terminology issue -- we sometimes use "CXX11Attr" to mean [[]] in
C++11 mode and sometimes [[]] in general -- and this came back to bite
us because in this particular case, it really meant [[]] in C++ mode.
I fixed the issue by introducing a new function
AttributeCommonInfo::isStandardAttributeSyntax() to represent [[]] in
either C or C++ mode.
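Roughly, the intent of the new helper (a sketch of the semantics, not
necessarily the exact implementation):

    // "Standard syntax" means a [[...]] spelling, whether the source language
    // is C (C2x attributes) or C++ (C++11 attributes).
    bool AttributeCommonInfo::isStandardAttributeSyntax() const {
      return isCXX11Attribute() || isC2xAttribute();
    }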
This fix pointed out that we've had the issue in some of our existing
tests, which have all been corrected. This resolves
https://bugs.llvm.org/show_bug.cgi?id=50954.
This results in significant deduplication of code. This patch is not expected to change any functionality; it is just some simplification in preparation for future work. It also slightly simplifies some code that was being touched anyway and adds some unit tests for functions that were touched.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D105152
`DeviceTy::getOrAllocTgtPtr` currently just returns a target pointer. In addition,
two bool values (`IsNew` and `IsHostPtr`) are passed by reference so that changes
made inside the function are visible to the caller.
With this patch, a struct containing the target pointer, the two flags, and an
iterator to the map table entry corresponding to the queried host pointer is
returned instead. In addition to making the logic around the two bool values
clearer, this paves the way for the next patch to fix the data race in
`bug49334.cpp` by attaching an event to the map table entry (which is why we
need the iterator).
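A hypothetical sketch of the returned aggregate (the type and field names here
are illustrative, not necessarily those used in libomptarget):

    #include <cstdint>
    #include <map>

    // Stand-ins for the real libomptarget map-table types.
    struct MapTableEntry { /* host/target bases, reference count, ... */ };
    using MapTableTy = std::map<uintptr_t, MapTableEntry>;

    // Aggregate result replacing the two bool& out-parameters.
    struct TargetPointerResult {
      void *TargetPointer = nullptr; // what getOrAllocTgtPtr used to return
      bool IsNew = false;            // previously the IsNew& out-parameter
      bool IsHostPtr = false;        // previously the IsHostPtr& out-parameter
      MapTableTy::iterator Entry;    // entry for the queried host pointer
    };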
Reviewed By: grokos
Differential Revision: https://reviews.llvm.org/D104382
Rationale:
Follow-up on migrating lattice and tensor expression related methods into the new utility.
This also prepares the next step of generalizing the op kinds that are handled.
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D105219
This adds support for opaque pointers in intrinsic type checks
of IIT kind Pointer and PtrToElt.
This is less straightforward than it might initially seem, because
we should only accept opaque pointers here in --force-opaque-pointers
mode. Otherwise, there would be more than one valid type signature
for a given intrinsic name.
Differential Revision: https://reviews.llvm.org/D105155
Inserting into a smaller-than-legal scalable vector would result in an
internal compiler error. For example, inserting a <vscale x 4 x i8> into
a <vscale x 8 x i8> (both illegal vector types for SVE) would cause a
crash.
This crash was happening because there was no code to promote (legalise)
the result of an INSERT_SUBVECTOR node.
This patch implements PromoteIntRes_INSERT_SUBVECTOR, which legalises
the ISD node. This is currently done by going through memory. This is
necessary because of the requirement that the SubVec parameter of the
INSERT_SUBVECTOR node must be smaller than the Vec parameter, which
means that INSERT_SUBVECTOR cannot always have legal result/operand
types.
Co-Authored-by: Joe Ellis <joe.ellis@arm.com>
Differential Revision: https://reviews.llvm.org/D102766
This reverts b56e5f8a10 (and follow-up f6db88535c) and instead
restores the state we had before 0c96a92d8666b8: ClangdMain.cpp
includes Features.inc before including Transport.h.
This is a bit ugly, but it matches the former state and making Transport.h
include Features.h means that xpc/ needs to be able to find the generated
Features.inc, which is also a bit ugly.
Building on rG2a1ef8784ad9a, adjust the SSE cost tables to use the legalized types based on the worst case costs from the script in D103695.
To account for different numbers of src/dst legalized type registers, we must scale the cost by the maximum of the src/dst register counts rather than just the src count. For example, under SSE a v8i32 source legalizes to two registers while a v8f64 destination legalizes to four, so the cost scales by four.
Much like fixed-point to floating-point conversion, the converse can
also be transformed into a fixed-point VCVT. This patch transforms
multiplications of floating point numbers by 2^n into a VCVT_fix. The
exception is that a float to fixed conversion with 1 fractional bit
ends up being an FADD (FADD(x, x) emulates FMUL(x, 2)) rather than an FMUL, so
there is a special case for that. This patch also moves the code from
https://reviews.llvm.org/D103903 into a separate function, as fixed-to-float and
float-to-fixed conversions are very similar.
Differential Revision: https://reviews.llvm.org/D104793
In lots of places we were calling setDebugLocFromInst and passing
in the same Builder member variable found in InnerLoopVectorizer.
I personally found this confusing, so I've changed the interface
to take an Optional<IRBuilder<> *>; we can now pass in None
when we want to use the class member variable.
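A minimal sketch of the new shape of the interface, assuming the member
variable is named Builder (illustrative, not the exact LoopVectorize code):

    #include "llvm/ADT/Optional.h"
    #include "llvm/IR/IRBuilder.h"

    using namespace llvm;

    struct InnerLoopVectorizerSketch {
      IRBuilder<> &Builder; // the class member previously passed explicitly

      // Passing None means "use the class member Builder".
      void setDebugLocFromInst(const Value *V,
                               Optional<IRBuilder<> *> CustomBuilder = None) {
        IRBuilder<> *B = CustomBuilder ? *CustomBuilder : &Builder;
        if (const auto *Inst = dyn_cast_or_null<Instruction>(V))
          B->SetCurrentDebugLocation(Inst->getDebugLoc());
        else
          B->SetCurrentDebugLocation(DebugLoc());
      }
    };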
Differential Revision: https://reviews.llvm.org/D105100
Until now, `f18` would:
1. Use Flang to unparse the input files
2. Call an external Fortran compiler to compile the unparsed source
files (generated in step 1)
With this patch, `f18` will stop after unparsing the input source files,
i.e. step 1 above. The `flang` bash script will take care of step 2,
i.e. calling an external Fortran compiler driver to compile them. This
way:
* the functionality of `f18` is reduced - it will only drive Flang (as
opposed to delegating code-generation to an external tool on top of
this)
* we will be able to switch between `f18` and `flang-new` for unparsing before
an external Fortran compiler is called for code-generation
The updated `flang` bash script needs to specify the output file when
using the `-fdebug-unparse` action. Both `f18` and `flang-new` have been
updated accordingly.
These changes were discussed in [1] as a requirement for replacing `f18`
with `flang-new`.
[1] https://lists.llvm.org/pipermail/flang-dev/2021-April/000677.html
Differential Revision: https://reviews.llvm.org/D103177
Before this patch, the dependence of a CallExpr was only computed in the
constructor, so the dependence bits might not reflect reality -- some arguments
might not be set yet (nullptr) at that point; e.g. a CXXDefaultArgExpr is set via
the setArg method at a later parsing stage, so we need to recompute the
dependence bits.
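As a schematic illustration of the pattern with hypothetical stand-in types
(not Clang's actual classes): bits derived at construction time must be
refreshed when a deferred slot is filled in later.

    #include <vector>

    enum DependenceBits { NoDeps = 0, ValueDependent = 1 };

    struct Expr { DependenceBits Deps = NoDeps; };

    struct Call {
      std::vector<Expr *> Args;  // some slots may still be null after construction
      DependenceBits Deps = NoDeps;

      // Recompute from the (possibly now complete) argument list.
      void recomputeDependence() {
        Deps = NoDeps;
        for (Expr *A : Args)
          if (A && (A->Deps & ValueDependent))
            Deps = ValueDependent;
      }

      // Filling in a deferred argument (e.g. a default argument) must refresh
      // the dependence bits that the constructor computed too early.
      void setArg(unsigned I, Expr *E) {
        Args[I] = E;
        recomputeDependence();
      }
    };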
Added in 47c3fe2a22, we sometimes need to describe a variable value
substitution with a subregister qualifier, to say that "the value is the
lower 32 bits of this 64 bit register def" for example. That then needs
support during LiveDebugValues to interpret the subregister qualifiers,
which is what this patch adds.
Whenever we encounter a DBG_INSTR_REF and find its value by using a
substitution, collect any subregister qualifiers seen. Then, accumulate the
effects of the qualifiers to work out what offset and what size should be
extracted from the defined register. Finally, for the target ValueIDNum,
extract whatever subregister is in the correct position.
Currently, describing a subregister field of a larger value that has been
spilled to the stack is unimplemented.
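As a schematic illustration of the accumulation step (a hypothetical helper,
not the actual LiveDebugValues code): nested subregister qualifiers compose
like nested bit-field extractions, so the bit offsets add up and the innermost
qualifier determines the final size.

    #include <vector>

    struct SubRegQualifier {
      unsigned OffsetInBits; // where the field starts within the enclosing value
      unsigned SizeInBits;   // how wide the field is
    };

    // Fold a chain of qualifiers (outermost first) into a single field
    // describing which bits of the defined register hold the variable's value.
    SubRegQualifier accumulate(const std::vector<SubRegQualifier> &Chain) {
      SubRegQualifier Result{0, 0};
      for (const SubRegQualifier &Q : Chain) {
        Result.OffsetInBits += Q.OffsetInBits; // offsets compose additively
        Result.SizeInBits = Q.SizeInBits;      // innermost extraction wins
      }
      return Result;
    }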
Differential Revision: https://reviews.llvm.org/D88894
Based on the discussion in PR50922, minor changes have been made to properly
output valid JSON. The "not implemented" keys were removed.
Differential Revision: https://reviews.llvm.org/D105064
ConstantOps are only supported in the ModulePass because they require a GlobalCreator object that must be constructed from a ModuleOp.
If the standalone FunctionPass encounters a ConstantOp, bufferization fails.
Differential revision: https://reviews.llvm.org/D105156
This revision drops the comprehensive bufferization Function pass, which has issues when trying to bufferize constants.
Instead, only support the comprehensive-module-bufferize by default.
Differential Revision: https://reviews.llvm.org/D105228
This patch adds intrinsic definitions and SDNodes for predicated
load/store/gather/scatter, based on the work done in D57504.
Reviewed By: simoll, craig.topper
Differential Revision: https://reviews.llvm.org/D99355
Also add an integration test that connects all the dots end to end, including a cast to an unranked tensor for external library calls.
Differential Revision: https://reviews.llvm.org/D105106
Since gather lowering can now lower to nodes that may need expansion via
the vector legalizer, do MGATHER lowering via vector legalizer.
Additionally, as part of adding passthru support for fixed typed
gathers, fix passthru support for scalable types.
Depends on D104910
Differential Revision: https://reviews.llvm.org/D104217
Move the (SSE-only) generic, legalized type conversion matching after the specific, custom conversion cases, allowing us to properly provide cost overrides.
The next step will be to clean up some of the weird existing costs and then to enable AVX+ legalized costs, which will let us strip out a lot of the cost tables entries.
Cross-function boundary bufferization support is added.
This is enabled by cross-function boundary alias analysis, for which the bufferization process is extended: it can now modify the BufferizationAliasInfo as new ops are introduced.
A number of simplifying assumptions are made:
1. by default we bufferize to the most dynamic strided memref type; further memref::CastOp canonicalizations are expected to clean up the IR.
2. in the current implementation, the stride information is always erased at function boundaries. A subsequent pass will be required to analyze the meet of all call ops to a function and decide whether more static buffer types can be used. This will potentially clone functions when it is deemed profitable to do so (e.g. when the stride-1 dimension may vary).
3. external functions always bufferize to the most dynamic strided memref version. This may require special annotations for specifying that particular operands of top-level functions have contiguous buffer layout.
An alternative to point 3. would be to support tensor layout annotations, which is currently not supported in MLIR.
Differential revision: https://reviews.llvm.org/D104873