On Windows, when the LLVM toolchain is in the current path (%PATH%), the linker lookup yields c:\{path}\lld.exe, whereas the HIP tests did not expect the .exe suffix.
This only happens if LLD is not present in the build folder, which causes Driver::GetProgramPath() to fall back to %PATH% and the platform-specific search, which includes the .exe suffix (see https://github.com/llvm/llvm-project/blob/master/clang/lib/Driver/Driver.cpp#L4733).
Differential Revision: https://reviews.llvm.org/D76631
Upon calling one of the functions `std::advance()`, `std::prev()` and
`std::next()`, iterators could get out of their valid range, which leads
to undefined behavior. If all these functions are inlined together with
the functions they call internally (e.g. `__advance()` called by
`std::advance()` in some implementations), the error is detected by
`IteratorRangeChecker`, but the bug location is inside the STL
implementation. Even worse, if the budget runs out and one of the calls
is not inlined, the bug remains undetected. This patch fixes this
behavior: all the bugs are detected at the point of the STL function
invocation.
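For illustration, a hypothetical example (not from the patch) of the kind of misuse the checker reports; with this change the warning should point at the `std::advance()` call in user code rather than into the STL:

```c++
#include <iterator>
#include <vector>

// Advancing an iterator past end() is undefined behavior; the report is
// now expected at this call site instead of inside __advance().
void skip_too_far(std::vector<int> &v) {
  auto it = v.begin();
  std::advance(it, v.size() + 1);  // moves past end(): warning expected here
}
```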
Differential Revision: https://reviews.llvm.org/D76379
Whenever the analyzer budget runs out just at the point where
`std::advance()`, `std::prev()` or `std::next()` is invoked, the functions
are not inlined. This results in strange behavior, such as
`std::prev(v.end())` being equal to `v.end()`. To prevent this, we model
these functions when they are not inlined. It may also happen that
`std::advance()` is inlined but a function it calls internally (e.g.
`__advance()` in some implementations) is not. This case is also handled
in this patch.
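A small, hypothetical sketch of what the conservative modeling buys us when inlining does not happen:

```c++
#include <iterator>
#include <vector>

// Even when std::prev() is not inlined, the modeled call lets the
// analyzer know the result differs from v.end().
int last_element(const std::vector<int> &v) {
  if (v.empty())
    return 0;
  auto it = std::prev(v.end());  // modeled even without inlining
  return *it;                    // should not be flagged as past-the-end
}
```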
Differential Revision: https://reviews.llvm.org/D76361
There's inconsistency in handling array types between the
`distributeFunctionTypeAttrXXX` functions and the
`FunctionTypeUnwrapper` in `SemaType.cpp`.
This patch lets `FunctionTypeUnwrapper` apply function type attributes
through array types.
Differential Revision: https://reviews.llvm.org/D75109
Summary: If the size parameter of `__builtin_memcpy_inline` comes from an un-instantiated template parameter, the current code would crash.
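A minimal sketch (assumed, not the original reproducer) of the pattern:

```c++
// The size operand is value-dependent until the template is instantiated;
// before this fix, handling the un-instantiated template could crash.
template <unsigned long N>
void copy_fixed(void *dst, const void *src) {
  __builtin_memcpy_inline(dst, src, N);  // N is dependent inside the template
}

void use(void *d, const void *s) {
  copy_fixed<16>(d, s);  // fine once instantiated with a constant size
}
```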
Reviewers: efriedma, courbet
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76504
Summary:
Since the conditional operator cannot be used with vector conditions
in C, we need a builtin to be able to express this operation in C
source.
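For illustration only (generic vector types, not the new builtin itself), the limitation in C looks like this:

```c++
// A vector value cannot be the condition of ?: in C mode, so a lane-wise
// select based on a vector comparison has no direct spelling in C source
// without a builtin.
typedef int v128_i32 __attribute__((vector_size(16)));

v128_i32 select_lanes(v128_i32 a, v128_i32 b) {
  v128_i32 mask = a > b;  // lane-wise comparison produces a vector mask
  return mask ? a : b;    // rejected by Clang in C mode; hence the builtin
}
```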
Reviewers: aheejin
Subscribers: dschuff, sbc100, jgravelle-google, sunfish, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76538
The .i files in the clang tests (2 files) were not run by lit:
clang/test/CodeGen/debug-info-preprocessed-file.i
clang/test/FrontEnd/processed-input.i
Differential Revision: https://reviews.llvm.org/D75853
accept as an extension.
This attempts to accept the same cases as GCC, plus cases where a
comparison is rewritten to an operator== with an integral but non-bool
return type; this is sufficient to avoid most problems with various
major open-source projects (such as ICU) and appears to fix all but one
of the comparison-related C++20 build breaks in LLVM.
This approach is being pursued for standardization.
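A hypothetical example of the pattern now accepted as an extension:

```c++
// A rewritten != ends up calling an operator== whose return type is
// integral but not bool, as found in some older codebases.
struct Legacy {
  int operator==(const Legacy &) const;  // non-bool integral return type
};

bool differ(const Legacy &a, const Legacy &b) {
  return a != b;  // C++20 rewrites this to !(a == b); now accepted
}
```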
Before this patch a Clang module skeleton CU would have a
DW_AT_comp_dir pointing to the directory of the module map file, and
this information was not used by anyone. Even worse, LLDB actually
resolves relative DWO paths by appending them to DW_AT_comp_dir. This
patch sets it to the same directory that is used as the main CU's
compilation directory, which would make the LLDB code work.
Differential Revision: https://reviews.llvm.org/D76377
region.
According to OpenMP 5.0, exactly one scan directive must appear in the loop body of an enclosing worksharing-loop, worksharing-loop SIMD, or simd construct on which a reduction clause with the inscan modifier is present.
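For reference, a conforming example of that shape (illustrative code, not from the patch):

```c++
// Exactly one 'scan' directive in the body of a worksharing-loop whose
// reduction clause carries the 'inscan' modifier (OpenMP 5.0).
void inclusive_scan(int n, const int *a, int *out) {
  int sum = 0;
#pragma omp parallel for reduction(inscan, + : sum)
  for (int i = 0; i < n; ++i) {
    sum += a[i];  // input phase of the inclusive scan
#pragma omp scan inclusive(sum)
    out[i] = sum;  // scan phase: reads the partial result
  }
}
```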
FinishThunk, and the invariant of setting and then unsetting
CurCodeDecl, was added in 7f416cc426 (2015). The invariant didn't
exist when I added this musttail codepath in ab2090d107 (2014).
Recently in 28328c3771, I started using this codepath on non-Windows
platforms, and users reported problems during release testing (PR44987).
The issue was already present for users of EH on i686-windows-msvc, so I
added a test for that case as well.
Reviewed By: hans
Differential Revision: https://reviews.llvm.org/D76444
As discovered by a downstream user, the CallGraph ignores
callees unless they are defined. This seems foolish, and prevents
combining the report with other reports to create unified reports.
Additionally, declarations contain information that is likely useful to
consumers of the CallGraph.
This patch implements this by splitting the includeInGraph function into
two versions, the current one plus one that is for callees only. The
only difference currently is that includeInGraph checks for a body, then
calls includeCalleeInGraph.
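A rough sketch (not the verbatim patch) of that split:

```c++
// includeInGraph() keeps the "has a body" requirement and defers the
// remaining checks to includeCalleeInGraph(), which callee handling can
// use directly so that bodiless declarations are still added.
bool CallGraph::includeInGraph(const Decl *D) {
  if (!D->hasBody())
    return false;
  return includeCalleeInGraph(D);
}
```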
Differential Revision: https://reviews.llvm.org/D76435
Summary:
I've implemented them as target-specific IR intrinsics rather than
using `@llvm.experimental.vector.reduce.add`, on the grounds that the
'experimental' intrinsic doesn't currently have much code generation
benefit, and my replacements encapsulate the sign- or zero-extension
so that you don't expose the illegal MVE vector type (`<4 x i64>`) in
IR.
The machine instructions come in two versions: with and without an
input accumulator. My new IR intrinsics, like the 'experimental' one,
don't take an accumulator parameter: we represent that by just adding
on the input value using an ordinary i32 or i64 add. So if you write
the `vaddvaq` C-language intrinsic with an input accumulator of zero,
it can be optimised to VADDV, and conversely, if you write something
like `x += vaddvq(y)` then that can be combined into VADDVA.
Most of this is achieved in isel lowering, by converting these IR
intrinsics into the existing `ARMISD::VADDV` family of custom SDNode
types. For the difficult case (64-bit accumulators), isel lowering
already implements the optimization of folding an addition into a
VADDLV to make a VADDLVA; so once we've made a VADDLV, our job is
already done, except that I had to introduce a parallel set of ARMISD
nodes for the //predicated// forms of VADDLV.
For the simpler VADDV, we handle the predicated form by just leaving
the IR intrinsic alone and matching it in an ordinary dag pattern.
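As a usage illustration (assuming arm_mve.h and an MVE-enabled target such as -march=armv8.1-m.main+mve), the accumulator folding looks like this from C:

```c++
#include <arm_mve.h>

// Adding the reduction result onto an accumulator lets isel fold the
// add into the accumulating form.
uint32_t sum_into(uint32_t acc, uint32x4_t v) {
  return acc + vaddvq_u32(v);  // expected to select VADDVA.U32
}
```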
Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76491
Summary:
I've implemented these as target-specific IR intrinsics, because
they're not //quite// enough like @llvm.experimental.vector.reduce.min
(which doesn't take the extra scalar parameter). Also this keeps the
predicated and unpredicated versions looking similar, and the
floating-point minnm/maxnm versions fold into the same schema.
We had a couple of min/max reductions already implemented, from the
initial pathfinding exercise in D67158. Those were done by having
separate IR intrinsic names for the signed and unsigned integer
versions; as part of this commit, I've changed them to use a flag
parameter indicating signedness, which is how we ended up deciding
that the rest of the MVE intrinsics family ought to work. So now
hopefully the whole lot is consistent.
In the new llc test, the output code from the `v8f16` test functions
looks quite unpleasant, but most of it is PCS lowering (you can't pass
a `half` directly in or out of a function). In other circumstances,
where you do something else with your `half` in the same function, it
doesn't look nearly as nasty.
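For illustration (assuming arm_mve.h and an MVE-enabled target), a typical use of one of these reductions with its scalar operand:

```c++
#include <arm_mve.h>

// The MVE min reduction takes an extra scalar operand that participates
// in the minimum, unlike @llvm.experimental.vector.reduce.min.
uint32_t min_with(uint32_t init, uint32x4_t v) {
  return vminvq_u32(init, v);  // VMINV.U32 with an incoming scalar
}
```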
Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard
Reviewed By: MarkMurrayARM
Subscribers: kristof.beyls, hiraditya, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76490
Summary:
This patch implements the following CDE intrinsics:
int8x16_t __arm_vreinterpretq_s8_u8 (uint8x16_t in);
uint16x8_t __arm_vreinterpretq_u16_u8 (uint8x16_t in);
int16x8_t __arm_vreinterpretq_s16_u8 (uint8x16_t in);
uint32x4_t __arm_vreinterpretq_u32_u8 (uint8x16_t in);
int32x4_t __arm_vreinterpretq_s32_u8 (uint8x16_t in);
uint64x2_t __arm_vreinterpretq_u64_u8 (uint8x16_t in);
int64x2_t __arm_vreinterpretq_s64_u8 (uint8x16_t in);
float16x8_t __arm_vreinterpretq_f16_u8 (uint8x16_t in);
float32x4_t __arm_vreinterpretq_f32_u8 (uint8x16_t in);
These intrinsics are header-only because they reuse the existing
MVE vreinterpret clang built-ins.
This set is slightly different from the published specification
(see https://static.docs.arm.com/101028/0010/ACLE_2019Q4_release-0010.pdf):
it includes
int8x16_t __arm_vreinterpretq_s8_u8 (uint8x16_t in);
which was unintentionally omitted from the spec, and
does not include
float64x2_t __arm_vreinterpretq_f64_u8 (uint8x16_t in);
The float64x2_t type requires additional implementation
effort, and we are not including it yet.
Reviewers: simon_tatham, MarkMurrayARM, dmgreen, ostannard
Reviewed By: MarkMurrayARM
Subscribers: kristof.beyls, danielkiss, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76300
Summary:
This patch implements the following intrinsics:
uint8x16_t __arm_vcx1q_u8 (int coproc, uint32_t imm);
T __arm_vcx1qa(int coproc, T acc, uint32_t imm);
T __arm_vcx2q(int coproc, T n, uint32_t imm);
uint8x16_t __arm_vcx2q_u8(int coproc, T n, uint32_t imm);
T __arm_vcx2qa(int coproc, T acc, U n, uint32_t imm);
T __arm_vcx3q(int coproc, T n, U m, uint32_t imm);
uint8x16_t __arm_vcx3q_u8(int coproc, T n, U m, uint32_t imm);
T __arm_vcx3qa(int coproc, T acc, U n, V m, uint32_t imm);
Most of them are polymorphic. Furthermore, some intrinsics are
polymorphic in 2 or 3 parameter types. Such polymorphism is not
supported by the existing MVE/CDE TableGen backends, and we don't
really want a combinatorial explosion caused by 1000 different
combinations of 3 vector types. Because of this, some intrinsics are
implemented as macros involving a cast of the polymorphic arguments to
uint8x16_t.
The IR intrinsics are even more restricted in terms of types: all MVE
vectors are cast to v16i8.
Reviewers: simon_tatham, MarkMurrayARM, dmgreen, ostannard
Reviewed By: MarkMurrayARM
Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76299
Summary:
This change implements ACLE CDE intrinsics that translate to
instructions working with general-purpose registers.
The specification is available at
https://static.docs.arm.com/101028/0010/ACLE_2019Q4_release-0010.pdf
Each ACLE intrinsic gets a corresponding LLVM IR intrinsic (because
they have distinct function prototypes). Dual-register operands are
represented as pairs of i32 values. Because of this the instruction
selection for these intrinsics cannot be represented as TableGen
patterns and requires custom C++ code.
Reviewers: simon_tatham, MarkMurrayARM, dmgreen, ostannard
Reviewed By: MarkMurrayARM
Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76296
Summary:
Changes:
- Handle immediate invocations for constructors.
- Add tests.
After this patch, I believe the implementation of consteval is nearly standard-compliant, but IRGen still needs to be taught not to emit consteval declarations.
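A minimal illustration (assumed, C++20) of the newly handled case:

```c++
// A consteval constructor call is an immediate invocation, so the object
// must be fully evaluated at the point of use.
struct Widget {
  int value;
  consteval Widget(int x) : value(x * 2) {}
};

constexpr int n = Widget(21).value;  // evaluated immediately
static_assert(n == 42);
```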
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: wchilders
Differential Revision: https://reviews.llvm.org/D74007
Pass the small data limit to RISCVELFTargetObjectFile via a module flag,
so the backend can set the small data section threshold from that value.
Data will be put into the small data section if it is smaller than
the threshold.
Differential Revision: https://reviews.llvm.org/D57497
Suppress those diagnostics if the LHS of a member expression contains
errors. Typo correction produces dependent expressions even in
non-template code, which led to spurious diagnostics before.
previous:
/tmp/t.cpp:6:17: error: use 'template' keyword to treat 'f' as a dependent template name
auto a = bilder.f<int>();
^
template
/tmp/t.cpp:6:10: error: use of undeclared identifier 'bilder'; did you mean 'builder'?
auto a = bilder.f<int>();
^~~~~~
builder
vs now:
/tmp/t.cpp:6:10: error: use of undeclared identifier 'bilder'; did you mean 'builder'?
auto a = bilder.f<int>();
^~~~~~
builder
Original patch from Ilya.
Reviewers: sammccall
Reviewed By: sammccall
Tags: #clang
Differential Revision: https://reviews.llvm.org/D65592
Summary:
The range checks performed for the vqrdmulh_lane and related Neon
intrinsics were incorrectly using their return type as the base type for
the range check performed on their 'lane' argument.
This patch updates those intrinsics to use the type of the proper reference
argument to perform the range checks.
Reviewers: jmolloy, t.p.northover, rsmith, olista01, dnsampaio
Reviewed By: dnsampaio
Subscribers: dnsampaio, kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D74766
Summary:
Range checks were not properly performed in the lane arguments of Neon
intrinsics implemented based on splat operations. Calls to those
intrinsics were translated to `__builtin_shufflevector` calls directly
by the preprocessor through the arm_neon.h macros, missing the chance
to perform the proper range checks.
This patch enables the range check by introducing an auxiliary splat
instruction in arm_neon.td, delaying the translation into shufflevector
calls until CGBuiltin.cpp in clang, after the checks have been performed.
Reviewers: jmolloy, t.p.northover, rsmith, olista01, ostannard
Reviewed By: ostannard
Subscribers: ostannard, dnsampaio, danielkiss, kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D74619
Summary:
Some of the `*_laneq` intrinsics defined in arm_neon.td were missing the
setting of the `isLaneQ` attribute. This patch sets the attribute on the
related definitions, as they will be required to properly perform range
checks on their lane arguments.
Reviewers: jmolloy, t.p.northover, rsmith, olista01, dnsampaio
Reviewed By: dnsampaio
Subscribers: dnsampaio, kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D74616
The SVE ACLE allows using a short form of the intrinsics, e.g.
the following two declarations generate the same code:
svuint32_t svld1(svbool_t, uint32_t const *);
svuint32_t svld1_u32(svbool_t, uint32_t const *);
This is implemented using the attribute
`__clang_arm_builtin_alias`,
so that any call to svld1(svbool_t, uint32_t const *) will
map to __builtin_sve_svld1_u32.
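A usage sketch (assuming arm_sve.h and an SVE-enabled target, e.g. -march=armv8-a+sve); both spellings below resolve to the same builtin:

```c++
#include <arm_sve.h>

svuint32_t load_both(svbool_t pg, const uint32_t *p) {
  svuint32_t a = svld1(pg, p);      // short form, via __clang_arm_builtin_alias
  svuint32_t b = svld1_u32(pg, p);  // explicit, type-suffixed form
  return svadd_u32_x(pg, a, b);
}
```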
Reviewers: SjoerdMeijer, miyuki, efriedma, simon_tatham, rengolin
Reviewed By: SjoerdMeijer
Tags: #clang
Differential Revision: https://reviews.llvm.org/D75861
Add a test for UsedDeclVisitor
This test is reduced from mlir/lib/Transforms/AffineDataCopyGeneration.cpp
to make sure there is no assertion failure due to UsedDeclVisitor.