The goal is to support tail and mask policy in RVV builtins.
We focus on the IR part first.
If the passthru operand is undef, we use tail agnostic; otherwise we
use tail undisturbed.
Co-Authored-by: Hsiangkai Wang <Hsiangkai@gmail.com>
Reviewers: craig.topper, frasercrmck
Differential Revision: https://reviews.llvm.org/D117647
Problem: Migration to the new PM broke the flatten attribute.
This is one use case for why LLVM should support inlining a call site with alwaysinline. The flatten attribute is nowadays broken, so we should either land a patch like this one or remove everything related to the flatten attribute from Clang.
A second use case is something like "per call site inlining intrinsics" to control inlining even more; mentioned in
https://lists.llvm.org/pipermail/cfe-dev/2018-September/059232.html
Fixes https://github.com/llvm/llvm-project/issues/53360
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D117965
According to v-spec 1.0, `vmulh`, `vmulhu`, `vmulhsu` and `vsmul` are
NOT supported for EEW=64 in Zve64*.
This patch tries to guard them correctly.
Authored by: Craig Topper <craig.topper@sifive.com> @craig.topper
Co-Authored by: Eop Chen <eop.chen@sifive.com> @eopXD
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D117913
D111985 added the generic `__builtin_elementwise_max` and `__builtin_elementwise_min` intrinsics with the same integer behaviour as the SSE/AVX instructions.
This patch removes the `__builtin_ia32_pmax/min` intrinsics and just uses `__builtin_elementwise_max/min` - the existing tests see no changes:
```
__m256i test_mm256_max_epu32(__m256i a, __m256i b) {
// CHECK-LABEL: test_mm256_max_epu32
// CHECK: call <8 x i32> @llvm.umax.v8i32(<8 x i32> %{{.*}}, <8 x i32> %{{.*}})
return _mm256_max_epu32(a, b);
}
```
This requires us to add a `__v64qs` explicitly signed char vector type (we already have `__v16qs` and `__v32qs`).
Sibling patch to D117791
Differential Revision: https://reviews.llvm.org/D117798
D111986 added the generic `__builtin_elementwise_abs()` intrinsic with the same integer absolute behaviour as the SSE/AVX instructions (abs(INT_MIN) == INT_MIN).
This patch removes the `__builtin_ia32_pabs*` intrinsics and just uses `__builtin_elementwise_abs` - the existing tests see no changes:
```
__m256i test_mm256_abs_epi8(__m256i a) {
// CHECK-LABEL: test_mm256_abs_epi8
// CHECK: [[ABS:%.*]] = call <32 x i8> @llvm.abs.v32i8(<32 x i8> %{{.*}}, i1 false)
return _mm256_abs_epi8(a);
}
```
This requires us to add a `__v64qs` explicitly signed char vector type (we already have `__v16qs` and `__v32qs`).
Differential Revision: https://reviews.llvm.org/D117791
According to the spec, there are some differences between V and Zve64d. For example, the vmulh integer multiply variants that return the high word of the product (vmulh.vv, vmulh.vx, vmulhu.vv, vmulhu.vx, vmulhsu.vv, vmulhsu.vx) are not included for EEW=64 in Zve64*, but the V extension does support them. So we should decouple the Zve* extensions and the V extension.
Differential Revision: https://reviews.llvm.org/D117854
MSVC currently doesn't support 80-bit long double. ICC supports it when
the option `/Qlong-double` is specified. Change the alignment of f80
to 16 bytes so that we are compatible with ICC's option.
Reviewed By: rnk, craig.topper
Differential Revision: https://reviews.llvm.org/D115942
This string no longer appears in the Vector Extension specification.
The segment load/store instructions are just part of the vector
instruction set.
Reviewed By: asb
Differential Revision: https://reviews.llvm.org/D117724
This patch adds support for the MSVC /HOTPATCH flag: https://docs.microsoft.com/sv-se/cpp/build/reference/hotpatch-create-hotpatchable-image?view=msvc-170&viewFallbackFrom=vs-2019
The flag is translated to a new -fms-hotpatch flag, which in turn adds a 'patchable-function' attribute for each function in the TU. This is then picked up by the PatchableFunction pass which generates a TargetOpcode::PATCHABLE_OP of minsize = 2 (which means the target instruction must resolve to at least two bytes). TargetOpcode::PATCHABLE_OP is only implemented for x86/x64. When targeting ARM/ARM64, /HOTPATCH isn't required (instructions are always 2/4 bytes and suitable for hotpatching).
Additionally, when using /Z7, we generate a 'hot patchable' flag in the CodeView debug stream, in the S_COMPILE3 record. This flag is then picked up by LLD (or link.exe) and is used in conjunction with the linker /FUNCTIONPADMIN flag to generate extra space before each function, to accommodate live-patching long jumps. Please see: d703b92296/lld/COFF/Writer.cpp (L1298)
The outcome is that we can finally use Live++ or Recode along with clang-cl.
NOTE: It seems that MSVC cl.exe always enables /HOTPATCH on x64 by default, although if we did the same we might generate sub-optimal code (with this flag active by default). Additionally, MSVC always generates a .debug$S section and a S_COMPILE3 record, which Clang doesn't do without /Z7. Therefore, the MSVC command line "cl /c file.cpp" would have to be written for Clang as "clang-cl /c file.cpp /HOTPATCH /Z7" in order to obtain the same result.
Depends on D43002, D80833 and D81301 for the full feature.
Differential Revision: https://reviews.llvm.org/D116511
When adding new attributes, existing attributes are dropped. While
this appears to be a longstanding issue, this was highlighted by D105169
which dropped a lot of attributes due to adding the new noundef
attribute.
Ahmed Bougacha (@ab) tracked down the issue and provided the fix in
CGCall.cpp. I bundled it up and updated the tests.
Similar to the migration of or-folding to FoldOr, there are a few cases
where the fold in IRBuilder::CreateAnd triggered directly. Those have
been updated.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D117431
This patch writes the full -cc1 command into the resulting .OBJ, like MSVC does. This allows for external tools (Recode, Live++) to rebuild a source file without any external dependency but the .OBJ itself (other than the compiler) and without knowledge of the build system.
The LF_BUILDINFO record stores a full path to the compiler, the PWD (CWD at program startup), a relative or absolute path to the source, and the full CC1 command line. The stored command line is self-standing (it does not depend on the environment). In the same way, MSVC doesn't store exactly the provided command line, but an expanded version (somewhat equivalent to CC1) which is also self-standing.
For more information see PR36198 and D43002.
Differential Revision: https://reviews.llvm.org/D80833
This patch emits an error on AIX and z/OS because XCOFF and GOFF do not currently implement the builtin function `CFStringMakeConstantString`. Tests that use this builtin were also disabled.
Reviewed By: SeanP
Differential Revision: https://reviews.llvm.org/D117315
Turning on the `enable_noundef_analysis` flag allows better codegen by removing freeze instructions.
I modified clang by renaming the `enable_noundef_analysis` flag to `disable-noundef-analysis` and turning it off by default.
Test updates are made as a separate patch: D108453
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D105169
Cases where there is a mangling of a cpu-dispatch/cpu-specific function
before the function becomes 'multiversion' (such as a member function)
cause the wrong name to be emitted for one of the variants/resolver,
since the name is cached. Make sure we invalidate the cache in
cpu-dispatch/cpu-specific modes, like we previously did for just target
multiversioning.
This patch implements two builtins specified in D111529.
The last one, __builtin_reduce_add, will be separated into another patch.
Differential Revision: https://reviews.llvm.org/D116736
With the introduction of this flag, it is no longer necessary to enable noundef analysis with 4 separate flags.
(-Xclang -enable-noundef-analysis -mllvm -msan-eager-checks=1).
This change only covers the introduction into the compiler.
This is a follow up to: https://reviews.llvm.org/D116855
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D116633
Function pointers should be created with the program address space. This
patch introduces program address space in TargetInfo. Targets with
non-default (default is 0) address space for functions should explicitly
set this value. This patch fixes a crash on lvalue reference to function
pointer (in device code) when using oneAPI DPC++ compiler.
Differential Revision: https://reviews.llvm.org/D111566
Agreed policy is that RISC-V extensions that have not yet been ratified
should be marked as experimental, and enabling them requires the use of
the -menable-experimental-extensions flag when using clang alongside the
version number. These extensions have now been ratified, so this is no
longer necessary, and the target feature names can be renamed to no
longer be prefixed with "experimental-".
Differential Revision: https://reviews.llvm.org/D117131
The code generation for the UBSan VLA size check was qualified by a
condition that the parameter must be a signed integer; however, the C spec
does not make any distinction that only signed integer parameters can be
used to declare a VLA, only requiring that it be greater than zero
if it is not a constant.
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D116048
I noticed that the following case would compile in Clang but not GCC:
```
void *x(void) {
  void *p = &&foo;
  asm goto ("# %0\n\t# %l1":"+r"(p):::foo);
  foo:;
  return p;
}
```
Changing the output template above to %l2 would compile in GCC but not
Clang.
This demonstrates that when using tied outputs (say via the "+r" output
constraint), the hidden inputs occur or are numbered BEFORE the labels,
at least with GCC.
In fact, GCC does denote this in its documentation:
https://gcc.gnu.org/onlinedocs/gcc-11.2.0/gcc/Extended-Asm.html#Goto-Labels
> Output operand with constraint modifier ‘+’ is counted as two operands
> because it is considered as one output and one input operand.
For the sake of compatibility, I think it's worthwhile to just make this
change.
It's better to use symbolic names for compatibility (especially now
between released versions of Clang that support asm goto with outputs),
i.e. %l1 from the above would be %l[foo]. The GCC docs also make this
recommendation.
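For example, the snippet above rewritten with a symbolic label reference:
```
void *x(void) {
  void *p = &&foo;
  /* %l[foo] names the label directly, so it is immune to differences in
     how the compilers number the hidden input of the "+r" tied output. */
  asm goto ("# %0\n\t# %l[foo]":"+r"(p):::foo);
  foo:;
  return p;
}
```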
Also, I cleaned up some cruft in GCCAsmStmt::getNamedOperand. AFAICT,
NumPlusOperands was no longer used, though I couldn't find which commit
didn't clean that up correctly.
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98096
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103640
Link: https://gcc.gnu.org/onlinedocs/gcc-11.2.0/gcc/Extended-Asm.html#Goto-Labels
Reviewed By: void
Differential Revision: https://reviews.llvm.org/D115471
Alternative to D116817.
This introduces a new value-based folding interface for Or (FoldOr),
which takes 2 values and returns an existing Value or a constant if the
Or can be simplified. Otherwise nullptr is returned. This replaces the
more restrictive CreateOr which takes 2 constants.
This is then used to implement a folder that uses InstructionSimplify.
The logic to simplify `Or` instructions is moved there. Subsequent
patches are going to transition other CreateXXX to the more general
FoldXXX interface.
Reviewed By: nikic, lebedev.ri
Differential Revision: https://reviews.llvm.org/D116935
SROA has 3 data structures where it stores sets of instructions that should
be deleted:
- DeadUsers -> instructions that are UB or have no users
- DeadOperands -> instructions that are UB or operands of useless phis
- DeadInsts -> "dead" instructions, including loads of uninitialized memory
with users
The first 2 sets can be RAUW'd with poison instead of undef. This is a
no-brainer, as UB can be replaced with poison, and for instructions with no
users RAUW is a NOP.
The 3rd case cannot currently be replaced with poison because the set mixes
in the loads of uninitialized memory. I leave that alone for now.
Another case where we can use poison is in the construction of vectors from
multiple loads. The base vector for the first insertelement is now poison,
since it doesn't matter: it is fully overwritten by the inserts.
Differential Revision: https://reviews.llvm.org/D116887
This implements the clang side of D116531. The elementtype
attribute is added for all indirect constraints (*) and tests are
updated accordingly.
Differential Revision: https://reviews.llvm.org/D116666
This reverts commit 640beb38e7.
That commit caused performance degradation in the Quicksilver test QS:sGPU and a functional test failure in rocPRIM (rocprim.device_segmented_radix_sort).
Reverting until we have a better solution to s_cselect_b64 codegen cleanup.
Change-Id: Ibf8e397df94001f248fba609f072088a46abae08
Reviewed By: kzhuravl
Differential Revision: https://reviews.llvm.org/D115960
Change-Id: Id169459ce4dfffa857d5645a0af50b0063ce1105
stdcall name mangling requires a suffix with the number equal to the sum of the byte count of all parameter types. In the case of a function prototype that has a parameter type of an incomplete type it is impossible to get the size of the type. While such a function is not callable or able to be defined in the TU, it may still be mangled when generating debug info, which would previously lead to a crash.
This patch fixes that by simply bailing out of the loop and using the so-far accumulated byte count. This matches GCC's behaviour as well: bc8d6c6013/gcc/config/i386/winnt.c (L203)
Fixes https://github.com/llvm/llvm-project/issues/52782
Differential Revision: https://reviews.llvm.org/D116020
In particular, this also preserves undef when loading from padding,
rather than converting it to zero through a different codepath.
This is the remaining part of D115924.
This builds on the code from D114963, and extends it to handle both direct and indirect calls. With the revised code structure (from a series of previously landed NFCs), this is pretty straightforward.
One thing to note is that we cannot infer writeonly for arguments which might be captured. If the pointer can be read back by the caller, and then read through, we have no way to track that. This is the same restriction we have for readonly, except that we get no mileage out of the "callee can be readonly" exception, since a writeonly param on a readonly function is either a) readnone or b) UB. This means we can't actually infer much unless nocapture has already been inferred.
Differential Revision: https://reviews.llvm.org/D115003
The fold for merging a GEP of GEP into a single GEP currently bails
if doing so would result in notional overindexing. The justification
given in the comment above this check is dangerously incorrect: GEPs
with notional overindexing are perfectly fine, and if some code
treats them incorrectly, then that code is broken, not the GEP.
Such a GEP might legally appear in source IR, so only preventing
its creation cannot be sufficient. (The constant folder also ends
up canonicalizing the GEP to remove the notional overindexing, but
that's neither here nor there.)
This check dates back to
bd4fef4a89,
and as far as I can tell the original issue this was trying to
patch around has since been resolved.
Differential Revision: https://reviews.llvm.org/D116587
The run line of stack-protector-guard.c doesn't specify the triple,
which means it depends on the platform running the test. This can
hide some failures.
Reviewed By: nickdesaulniers
Differential Revision: https://reviews.llvm.org/D116003
This reverts commit a954558e87.
Thanks to Yuanfang's help, I think I found the root cause of the buildbot
failure.
The failed test has both Memory and Immediate X86Operands. All data of
the different operand kinds shares the same memory space via a union
definition, so there is a chance we get the wrong result if we don't check
the operand kind.
It probably happened to be the correct value in my local environment,
which is why I couldn't reproduce the failure.
Differential Revision: https://reviews.llvm.org/D116090
D115225 tried to roll back the effects on symbols of MS inline asm
introduced by D113096. But the combination of the conditions cannot
match all the changes. As a result, there are still failures after the
patch.
This patch fixes the problem by checking the exact conditions for MS
global variables, i.e., variable (by FrontendSize != 0) + non-rip/eip
(by DefaultBaseReg == 0), so that we can fully roll back D113096.
Reviewed By: skan
Differential Revision: https://reviews.llvm.org/D116090
HVX does not have load/store instructions for vector predicates (i.e. bool
vectors). Because of that, vector predicates need to be converted to another
type before being stored, and the most convenient representation is an HVX
vector.
As a consequence, in C/C++, source-level builtins that either take or
produce vector predicates take or return regular vectors instead. On the
other hand, the corresponding LLVM intrinsics do have boolean types,
so a conversion of the operand or the return value was necessary.
This conversion would happen inside clang's codegen, but was somewhat
fragile.
This patch changes the strategy: a builtin that takes a vector predicate
now really expects a vector predicate. Since such a predicate cannot be
provided via a variable, this builtin must be composed with other builtins
that either convert vector to a predicate (V6_vandvrt) or predicate to a
vector (V6_vandqrt).
For users using builtins defined in hvx_hexagon_protos.h there is no impact:
the conversions were added to that file. Other users will need to insert
- __builtin_HEXAGON_V6_vandvrt[_128B](V, -1) to convert vector V to a
vector predicate, or
- __builtin_HEXAGON_V6_vandqrt[_128B](Q, -1) to convert vector predicate Q
to a vector.
Builtins __builtin_HEXAGON_V6_vmaskedstore.* are a temporary exception to
that, but they are deprecated and should not be used anyway. In the future
they will either follow the same rule, or be removed.
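As an illustration, a minimal sketch of such a composition (the vector typedef, the 64-byte HVX mode, and the choice of V6_pred_and are assumptions made for the example):
```
/* Sketch: AND two "predicates carried as vectors" by composing the
   conversion builtins; the intermediate vector predicates cannot be
   named in C, so the calls are nested directly. Requires -mhvx. */
typedef int HVX_Vector __attribute__((__vector_size__(64)));

HVX_Vector mask_and(HVX_Vector a, HVX_Vector b) {
  return __builtin_HEXAGON_V6_vandqrt(          /* predicate -> vector */
      __builtin_HEXAGON_V6_pred_and(            /* predicate AND */
          __builtin_HEXAGON_V6_vandvrt(a, -1),  /* vector -> predicate */
          __builtin_HEXAGON_V6_vandvrt(b, -1)),
      -1);
}
```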
This patch implements __builtin_reduce_xor as specified in D111529.
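A minimal usage sketch (the vector type is just for illustration):
```
typedef int int4 __attribute__((ext_vector_type(4)));

int xor_all(int4 v) {
  // XOR-reduce the four lanes into a single scalar.
  return __builtin_reduce_xor(v);
}
```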
Reviewed By: fhahn, aaron.ballman
Differential Revision: https://reviews.llvm.org/D115231
Control-Flow Integrity (CFI) replaces references to address-taken
functions with pointers to the CFI jump table. This is a problem
for low-level code, such as operating system kernels, which may
need the address of an actual function body without the jump table
indirection.
This change adds the __builtin_function_start() builtin, which
accepts an argument that can be constant-evaluated to a function,
and returns the address of the function body.
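A minimal usage sketch:
```
void target(void) {}

const void *get_body(void) {
  // With CFI enabled, &target points at the CFI jump table entry, while
  // __builtin_function_start(target) yields the address of the actual body.
  return __builtin_function_start(target);
}
```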
Link: https://github.com/ClangBuiltLinux/linux/issues/1353
Depends on D108478
Reviewed By: pcc, rjmccall
Differential Revision: https://reviews.llvm.org/D108479
The UpperBound of an RVV type in debug info should be the element count minus one,
as the LowerBound starts from zero.
Reviewed By: HsiangKai
Differential Revision: https://reviews.llvm.org/D115430
This is the last cleanup step resulting from D115804 .
Now that clang uses intrinsics when we're in the special FP mode,
we don't need a function attribute as an indicator to the backend.
The LLVM part of the change is in D115885.
Differential Revision: https://reviews.llvm.org/D115886
This reverts commit 9fd4f80e33.
This breaks SingleSource/Regression/C/gcc-c-torture/execute/pr19687.c
in test-suite. Either the test is incorrect, or clang is generating
incorrect union initialization code. I've submitted
https://reviews.llvm.org/D115994 to fix the test, assuming my
interpretation is correct. Reverting this in the meantime as it
may take some time to resolve.
We were being wildly inconsistent about what memory access was implied by an indirect function call. Depending on the call site attributes, you could get anything from a read, to unknown, to none at all. (The last was a miscompile.)
We were also always traversing the uses of a readonly indirect call. This is entirely unneeded as the indirect call does not capture. The callee might capture itself internally, but that has no implications for this caller. (See the nice explanation in the CaptureTracking comments if that case is confusing.)
Note that elsewhere in the same file, we were correctly computing the nocapture attribute for indirect calls. The changed case only resulted in conservatism when computing memory attributes if say the return value was written to.
Differential Revision: https://reviews.llvm.org/D115916
There are a number of places that specially handle loads from a
uniform value where all the bits are the same (zero, one, undef,
poison), because we a) don't care about the load offset in that
case and b) it bypasses casts that might not be legal generally
but do work with uniform values.
We had multiple implementations of this, with a different set of
supported values each time, as well as incomplete type checks in
some cases. In particular, this fixes the assertion reported in
https://reviews.llvm.org/D114889#3198921, as well as a similar
assertion that could be triggered via constant folding.
Differential Revision: https://reviews.llvm.org/D115924
This supports bitcode compilation using `clang -fwasm-exceptions`.
---
The current situation:
Currently the backend requires two options for Wasm EH:
`-wasm-enable-eh` and `-exception-model=wasm`. Wasm SjLj requires two
options as well: `-wasm-enable-sjlj` and `-exception-model=wasm`. When
using Wasm EH via Emscripten, you only need to pass `-fwasm-exceptions`,
and these options will be added within the clang driver. This
description will focus on the case of Wasm EH going forward, but Wasm
SjLj's case is similar.
When you pass `-fwasm-exceptions` to emcc and clang driver, the clang
driver adds these options to the command line that calls the clang
frontend (`clang -cc1`): `-mllvm -wasm-enable-eh` and
`-exception-model=wasm`. `-wasm-enable-eh` is prefixed with `-mllvm`, so
it is passed as is to the backend. But `-exception-model` is parsed and
processed within the clang frontend and stored in `LangOptions` class.
This info is later transferred to `TargetOptions` class, and then
eventually passed to `MCAsmInfo` class. All LLVM code queries this
`MCAsmInfo` to get the exception model.
---
Problem:
The problem is the whole `LangOptions` processing is bypassed when
compiling bitcode, so the information transfer of `LangOptions` ->
`TargetOptions` -> `MCAsmInfo` does not happen. They are all set to
`ExceptionHandling::None`, which is the default value.
---
What other targets do, and why we can't do the same:
Other targets support bitcode compilation by the clang driver, but they
can do that by using different triples. For example, X86 target supports
multiple triples, each of which has its own subclass of `MCAsmInfo`, so
it can hardcode the appropriate exception model within those subclasses'
constructors. But we don't have separate triples for each exception
mode: none, emscripten, and wasm.
---
What this CL does:
If we can figure out whether `-wasm-enable-eh` is passed to the backend,
we can programmatically set the exception model from the backend, rather
than requiring it to be passed.
So we check `WasmEnableEH` and `WasmEnableSjLj` variables, which are
`cl::opt` for `-wasm-enable-eh` and `-wasm-enable-sjlj`, in
`WebAssemblyMCAsmInfo` constructor, and if either of them is set, we set
`MCAsmInfo.ExceptionType` to Wasm. `TargetOptions` cannot be updated
there, so we make sure they are the same later.
Fixes https://github.com/emscripten-core/emscripten/issues/15712.
Reviewed By: dschuff
Differential Revision: https://reviews.llvm.org/D115893
We got an unintended consequence of the optimizer getting smarter when
compiling in a non-standard mode, and there's no good way to inhibit
those optimizations at a later stage. The test is based on an example
linked from D92270.
We allow the "no-strict-float-cast-overflow" exception to normal C
cast rules to preserve legacy code that does not expect overflowing
casts from FP to int to produce UB. See D46236 for details.
Differential Revision: https://reviews.llvm.org/D115804
In 32bit mode, attaching TBAA metadata to the store following the call
to inline assembler results in describing the wrong type by making a
fake lvalue (i.e., whatever the inline assembler happens to leave in
EAX:EDX). Even if the inline assembler somehow describes the correct type,
setting TBAA information on return type of call to inline assembler is
likely not correct, since TBAA rules need not apply to inline assembler.
Differential Revision: https://reviews.llvm.org/D115320
~(iN X s>> (N-1)) & Y --> (X s< 0) ? 0 : Y
https://alive2.llvm.org/ce/z/JKlQ9x
This is similar to D111410 / 727e642e97 ,
but it includes a 'not' of the signbit and so it
saves an instruction in the basic pattern.
DAGCombiner or target-specific folds can expand
this back into bit-hacks.
The diffs in the logical-select tests are not true
regressions - running early-cse and another round
of instcombine is expected in a normal opt pipeline,
and that reduces back to a minimal form as shown
in the duplicated PhaseOrdering test.
I have no understanding of the SystemZ diffs, so
I made the minimal edits suggested by FileCheck to
make that test pass again. That whole test file is
wrong though. It is running the entire optimizer (-O2)
to check IR, and then topping that by even running
codegen and checking asm. It needs to be split up.
Fixes #52631
The basic idea is that a) having a single canonical type makes CSE easier, and b) many of our transforms are inconsistent about which types we end up with based on visit order.
I'm restricting this to constants as for non-constants, we'd have to decide whether the simplicity was worth extra instructions. For constants, there are no extra instructions.
We chose the canonical type as i64 arbitrarily. We might consider changing this to something else in the future if we have cause.
Differential Revision: https://reviews.llvm.org/D115387
This reverts commit 2b554920f1.
This change causes tsan test timeout on x86_64-linux-autoconf.
The timeout can be reproduced by:
```
git clone https://github.com/llvm/llvm-zorg.git
BUILDBOT_CLOBBER= BUILDBOT_REVISION=eef8f3f85679c5b1ae725bade1c23ab7bb6b924f llvm-zorg/zorg/buildbot/builders/sanitizers/buildbot_standard.sh
```
Every generated IR module has a corresponding target-abi value, so
always encoding a non-empty value improves robustness and
correctness.
Reviewed By: asb, jrtc27, arichardson
Differential Revision: https://reviews.llvm.org/D105555
Originally there were two places that did the parsing - `parseArchString` and
`parseFeatures`, each with its own code for dependency checking and implication.
This patch extracts the common parts of the two as functions of `RISCVISAInfo`
and lets both use them.
Reviewed By: asb
Differential Revision: https://reviews.llvm.org/D112359
D113096 solved the "undefined reference to xxx" issue by adding
the constraint *m for the global var. But it has a strong side effect, because
the symbol in the assembly is replaced with the constraint variable.
This leads to some lowering failures. https://godbolt.org/z/h3nWoerPe
This patch fixes the problem by using the constraint *m as a placeholder
rather than a real constraint. It has negligible effect on the existing
code generation.
Reviewed By: skan
Differential Revision: https://reviews.llvm.org/D115225
The default for min is changed to 1. The behaviour of -mvscale-{min,max}
in Clang is also changed such that 16 is the max vscale when targeting
SVE and no max is specified.
Reviewed By: sdesmalen, paulwalker-arm
Differential Revision: https://reviews.llvm.org/D113294
WG14 adopted the _ExtInt feature from Clang for C23, but renamed the
type to be _BitInt. This patch does the vast majority of the work to
rename _ExtInt to _BitInt, which accounts for most of its size. The new
type is exposed in older C modes and all C++ modes as a conforming
extension. However, there are functional changes worth calling out:
* Deprecates _ExtInt with a fix-it to help users migrate to _BitInt.
* Updates the mangling for the type.
* Updates the documentation and adds a release note to warn users what
is going on.
* Adds new diagnostics for use of _BitInt to call out when it's used as
a Clang extension or as a pre-C23 compatibility concern.
* Adds new tests for the new diagnostic behaviors.
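For example (a sketch; the exact diagnostic wording is approximate):
```
_BitInt(128) a;  // OK: standard in C23, a conforming extension elsewhere
_ExtInt(128) b;  // warning: '_ExtInt' is deprecated; use '_BitInt' instead
```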
I want to call out the ABI break specifically. We do not believe that
this break will cause a significant imposition for early adopters of
the feature, and so this is being done as a full break. If it turns out
there are critical uses where recompilation is not an option for some
reason, we can consider using ABI tags to ease the transition.
When targeting FreeBSD on a Linux host with a copy
of the system libc++, Clang prepends /usr/include/c++/v1
to the search paths even with -ffreestanding, and
fails to compile a program with a single
#include <xmmintrin.h>.
Drop the path with -nostdlibinc.
Differential Revision: https://reviews.llvm.org/D114497
Swap the neighbouring AIC and IC in the pipeline. This looks more natural and
has almost no effect for now (three slightly touched tests in test-suite). Also
this could be the first step towards merging AIC (or part of it) into the -O2 pipeline.
After several changes in AIC (like D108091, D108201, D107766, D109515, D109236)
several regressions have been observed (like PR52078, PR52253, PR52289)
that were fixed in different passes (see D111330, D112721) by extending their
functionality, but these regressions were exposed because the changed AIC prevents IC
from making some of its early optimizations.
This is a common problem, and it should be fixed by simply moving AIC after IC,
which is more logical by itself: perform aggressive instruction combining
only after ordinary combining has failed.
Fixes PR52289
Reviewed By: spatel, RKSimon
Differential Revision: https://reviews.llvm.org/D113179
This fixes a bug in 740057d. There are two ways to describe the issue:
* One caller hasn't yet proven nocapture on the argument. Given that, the inference routine is responsible for bailing out on a potential capture.
* Even if we know the argument is nocapture, the access inference needs to traverse the exact set of users the capture tracking would (or exit conservatively). Even if capture tracking can prove a store is non-capturing (e.g. to a local alloc which doesn't escape), we still need to track the copy of the pointer to see if it's later reloaded and accessed again.
Note that all the test changes except the newly added ones appear to be false negatives. That is, cases where we could prove writeonly, but the current code isn't strong enough. That's why I didn't spot this originally.
This adjusts all the MVE and CDE intrinsics now that v2i1 is a legal
type, to use a <2 x i1> as opposed to emulating the predicate with a
<4 x i1>. The v4i1 workarounds have been removed leaving the natural
v2i1 types, notably in vctp64 which now generates a v2i1 type.
AutoUpgrade code has been added to upgrade old IR, which needs to
convert the old v4i1 to a v2i1 by converting it back and forth to an
integer with arm.mve.v2i and arm.mve.i2v intrinsics. These should be
optimized away in the final assembly.
Differential Revision: https://reviews.llvm.org/D114455
This does a similar thing to 6b1341e, but fixes the single-element 128-bit
float type: `struct { long double x; }`.
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D114937
Glibc 2.32 and newer use these symbol names to support IEEE-754 128-bit
float. GCC transforms the names of these builtins to align with the Glibc header
behavior.
Since Clang doesn't have all GCC-compatible builtins implemented, this
patch only mutates the implemented part.
Note nexttoward is a special case (no nexttowardf128) so it's also
handled here.
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D112401
This change extends the current logic for inferring readonly and readnone argument attributes to also infer writeonly.
This change is deliberately minimal; there's a couple of areas for follow up.
* I left out all call handling and thus any benefit from the SCC walk. When examining the test changes, I realized the existing code is imprecise, and am going to fix that in its own revision before adding in the writeonly handling. (Mostly because updating the tests is hard when I, the human, can't figure out whether the result is correct.)
* I left out handling for storing a value (as opposed to storing to a pointer). This should benefit readonly/readnone as well, and applies to a bunch of other instructions. Seemed worth having as a separate review.
Differential Revision: https://reviews.llvm.org/D114963
If clang's output is set to bitcode and LTO is enabled, clang would
unconditionally add the flag to the module. Unfortunately, if the input were a
bitcode or IR file and had the flag set, this would result in two copies of the
flag, which is illegal IR. Guard the setting of the flag by checking whether it
already exists. This follows existing practice for the related "ThinLTO" module
flag.
Differential Revision: https://reviews.llvm.org/D112177
Many of the SVE ACLE tests have gained entries as follows:
REQUIRES: aarch64-registered-target || arm-registered-target
which can cause test failures when only arm-registered-target is
available because only aarch64-registered-target supports SVE.
Handle the branch protection option on the command line as well as a function
attribute. One patch for both mechanisms, as they use the same underlying
parsing mechanism.
These are recorded in a set of LLVM IR module-level attributes like we do for
AArch64 PAC/BTI (see https://reviews.llvm.org/D85649):
- command-line options are "translated" to module-level LLVM IR
attributes (metadata).
- functions have PAC/BTI-specific attributes iff the
__attribute__((target("branch-protection=..."))) was used in the function
declaration.
- command-line option -mbranch-protection to armclang targeting Arm,
following this grammar:
branch-protection ::= "-mbranch-protection=" <protection>
protection ::= "none" | "standard" | "bti" [ "+" <pac-ret-clause> ]
| <pac-ret-clause> [ "+" "bti"]
pac-ret-clause ::= "pac-ret" [ "+" <pac-ret-option> ]
pac-ret-option ::= "leaf" ["+" "b-key"] | "b-key" ["+" "leaf"]
b-key is simply a placeholder to make it consistent with AArch64's
version. In Arm, however, it triggers a warning informing that b-key is
unsupported and a-key will be selected instead.
- Handle __attribute__((target("branch-protection=..."))) for AArch32 with the
same grammar as the command-line options.
This patch is part of a series that adds support for the PACBTI-M extension of
the Armv8.1-M architecture, as detailed here:
https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/armv8-1-m-pointer-authentication-and-branch-target-identification-extension
The PACBTI-M specification can be found in the Armv8-M Architecture Reference
Manual:
https://developer.arm.com/documentation/ddi0553/latest
The following people contributed to this patch:
- Momchil Velikov
- Victor Campos
- Ties Stuij
Reviewed By: vhscampos
Differential Revision: https://reviews.llvm.org/D112421
A big-endian version of vpermxor, named vpermxor_be, is added to LLVM
and Clang. vpermxor_be can be called directly on both the little-endian
and the big-endian platforms.
Reviewed By: nemanjai
Differential Revision: https://reviews.llvm.org/D114540
See the discussion in D51650; this change added an error a little too
aggressively as a 'while we were here' change, so this removes that error
condition, as the behavior is apparently useful.
This reverts commit bb4934601d.
The clang portion of c933c2eb33 was missed as I made
some kind of mistake squashing the commits with git.
This patch just adds those.
The original review: https://reviews.llvm.org/D114088
The XL implementation of vec_round for vector double uses
"round-to-nearest, ties to even" just as the vector float
version does. However, clang and gcc use "round-to-nearest-away"
for vector double and "round-to-nearest, ties to even"
for vector float.
The XL behaviour is implemented under the __XL_COMPAT_ALTIVEC__
macro similarly to other instances of incompatibility.
Differential revision: https://reviews.llvm.org/D113642
This enables Intel intrinsics support on FreeBSD.
Thanks to @pkubaj, who noticed this feature was missing.
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D113451
The `llvm.instrprof.increment` intrinsic uses `i32` for the index. We should use this same type for the index into the GEP instructions.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D114268
Operations are emulated by software emulation and "float" instructions.
This patch allows support of the _Float16 type without the use of the
-mavx512fp16 flag. The final goal is to perform _Float16 emulation for
all arithmetic expressions.
The AMD64 ABI mandates that the caller specify the number of used SSE registers
when passing variable arguments.
GCC also provides option -mskip-rax-setup to skip the setup of rax when
SSE is disabled. This helps to reduce the code size, see pr23258.
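For illustration, a sketch of the setup in question (the exact instruction choice varies):
```
/* On x86-64, a call to a variadic function is normally preceded by setting
   %al to the number of vector registers used (e.g. "xorl %eax, %eax" when
   none are used). With SSE disabled, -mskip-rax-setup omits that setup. */
int printf(const char *, ...);
void f(void) { printf("%d\n", 42); }
```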
Reviewed By: nickdesaulniers
Differential Revision: https://reviews.llvm.org/D112413
With this,
```
void f() { __asm__("mov eax, ebx"); }
```
now compiles with clang with -masm=intel.
This matches gcc.
The flag is not accepted in clang-cl mode. It has no effect on
MSVC-style `__asm {}` blocks, which are unconditionally in intel
mode both before and after this change.
One difference to gcc is that in clang, inline asm strings are
"local" while they're "global" in gcc. Building the following with
-masm=intel works with clang, but not with gcc where the ".att_syntax"
from the 2nd __asm__() is in effect until file end (or until a
".intel_syntax" somewhere later in the file):
__asm__("mov eax, ebx");
__asm__(".att_syntax\nmovl %ebx, %eax");
__asm__("mov eax, ebx");
This also updates clang's intrinsic headers to work both in
-masm=att (the default) and -masm=intel modes.
The official solution for this according to "Multiple assembler dialects in asm
templates" in gcc docs->Extensions->Inline Assembly->Extended Asm
is to write every inline asm snippet twice:
bt{l %[Offset],%[Base] | %[Base],%[Offset]}
This works in LLVM after D113932 and D113894, so use that.
(Just putting `.att_syntax` at the start of the snippet works in some but not
all cases: When LLVM interpolates in parameters like `%0`, it uses at&t or
intel syntax according to the inline asm snippet's flavor, so the `.att_syntax`
within the snippet happens to late: The interpolated-in parameter is already
in intel style, and then won't parse in the switched `.att_syntax`.)
It might be nice to invent a `#pragma clang asm_dialect push "att"` /
`#pragma clang asm_dialect pop` to be able to force asm style per snippet,
so that the inline asm string doesn't contain the same code in two variants,
but let's leave that for a follow-up.
Fixes PR21401 and PR20241.
Differential Revision: https://reviews.llvm.org/D113707
InstCombine AArch64 LD1/ST1 intrinsics to llvm.masked.load/llvm.masked.store,
and LD1/ST1 to plain load/store, when a ptrue-all predicate pattern operand
is present.
This allows existing IR optimizations such as dead-load removal to
occur.
Differential Revision: https://reviews.llvm.org/D113489
Previously, any change in any function in an SCC would cause all
analyses for all functions in the SCC to be invalidated. With this
change, we now manually invalidate analyses for functions we modify,
then let the pass manager know that all function analyses should be
preserved since we've already handled function analysis invalidation.
So far this only touches the inliner, argpromotion, function-attrs, and
updateCGAndAnalysisManager(), since they are the most used.
This is part of an effort to investigate running the function
simplification pipeline less on functions we visit multiple times in the
inliner pipeline.
However, this causes major memory regressions especially on larger IR.
To counteract this, turn on the option to eagerly invalidate function
analyses. This invalidates analyses on functions immediately after
they're processed in a module or scc to function adaptor for specific
parts of the pipeline.
Within an SCC, if a pass only modifies one function, other functions in
the SCC do not have their analyses invalidated, so in later function
passes in the SCC pass manager the analyses may still be cached. It is
only after the function passes that the eager invalidation takes effect.
For the default pipelines this makes sense because the inliner pipeline
runs the function simplification pipeline after all other SCC passes
(except CoroSplit which doesn't request any analyses).
Overall this has mostly positive effects on compile time and positive effects on memory usage.
https://llvm-compile-time-tracker.com/compare.php?from=7f627596977624730f9298a1b69883af1555765e&to=39e824e0d3ca8a517502f13032dfa67304841c90&stat=instructions
https://llvm-compile-time-tracker.com/compare.php?from=7f627596977624730f9298a1b69883af1555765e&to=39e824e0d3ca8a517502f13032dfa67304841c90&stat=max-rss
D113196 shows that we slightly regressed compile times in exchange for
some memory improvements when turning on eager invalidation. D100917
shows that we slightly improved compile times in exchange for major
memory regressions in some cases when invalidating less in SCC passes.
Turning these on at the same time keeps the memory improvements while
keeping compile times neutral/slightly positive.
Reviewed By: asbirlea, nikic
Differential Revision: https://reviews.llvm.org/D113304
c17d9b4b12 added REQUIRES lines to a lot of Arm and AArch64
test, but added them to the very beginning, before the existing
update_cc_test_checks lines. This just moves them later so as to not
mess up the existing ordering when the checks are regenerated.
Change the error message to use "ignorelist", and change some variable and function
names in the related code and tests.
Reviewed By: thakis
Differential Revision: https://reviews.llvm.org/D113189
As discussed here: https://lwn.net/Articles/691932/
GCC6.0 adds target_clones multiversioning. This functionality is
an odd cross between the cpu_dispatch and 'target' MV, but is compatible
with neither.
This attribute allows you to list all options, then emits a separately
optimized version of each function per-option (similar to the
cpu_specific attribute). It automatically generates a resolver, just
like the other two.
The mangling however, is... ODD to say the least. The mangling format
is:
<normal_mangling>.<option string>.<option ordinal>.
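A usage sketch:
```
// Emits one separately optimized body per option plus a resolver; each
// body gets a name of the form <normal_mangling>.<option string>.<ordinal>.
__attribute__((target_clones("avx2", "arch=atom", "default")))
int foo(void) { return 42; }
```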
Differential Revision: https://reviews.llvm.org/D51650
If the feature is on the command line we should honor it for all
functions. I don't think we could reliably target a single function
for a less capable processor than what the rest of the program is
compiled for.
Fixes PR52407.
Reviewed By: erichkeane
Differential Revision: https://reviews.llvm.org/D113647
When creating a splat of 0 for scalable vectors we tend to create them
using a combination of shufflevector and insertelement, i.e.
```
shufflevector (<vscale x 4 x i32> insertelement (<vscale x 4 x i32> poison, i32 0, i32 0),
               <vscale x 4 x i32> poison, <vscale x 4 x i32> zeroinitializer)
```
However, for the case of a zero splat we can actually just replace the
above with zeroinitializer instead. This makes the IR a lot simpler and
easier to read. I have changed ConstantFoldShuffleVectorInstruction to
use zeroinitializer when creating a splat of integer 0 or FP +0.0 values.
Differential Revision: https://reviews.llvm.org/D113394
This splits out the generated headers and conditionalises them upon the
target being enabled.
The motivation here is that the RISCV header alone added 10MB to the
resource directory, which was previously at 10MB, increasing the build
size and time. This header is contributing ~50% of the size of the
resource headers (~10MB).
The ARM generated headers are contributing about ~10% or 1MB.
This could be extended further adding only the static resource headers
for the targets that the LLVM build supports.
The changes to the tests for ARM mirror what the RISCV target already
did and rnk identified as a possible issue.
Testing:
```
cmake -G Ninja -D LLVM_TARGETS_TO_BUILD=X86 -D LLVM_ENABLE_PROJECTS="clang;lld" ../clang
ninja check-clang
```
Differential Revision: https://reviews.llvm.org/D112890
Reviewed By: craig.topper
Previously, the Backend_Emit{Nothing,BC,LL} modes did
not run the LLVM verifier since it is usually added via
the TargetMachine::addPassesToEmitFile method according
to the DisableVerify parameter. This is called from
EmitAssemblyHelper::AddEmitPasses, which is only relevant
for BackendAction-s that require CodeGen.
Note:
* In these particular situations the verifier is added
to the optimization pipeline rather than the codegen
pipeline so that it runs prior to the BC/LL emission
pass.
* This change applies to both the old and the new PMs.
* Because the clang tests use -emit-llvm ubiquitously,
this change will enable the verifier for them.
* A small bug is fixed in emitIFuncDefinition so that
the clang/test/CodeGen/ifunc.c test would pass:
the emitIFuncDefinition incorrectly passed the
GlobalDecl of the IFunc itself to the call to
GetOrCreateLLVMFunction for creating the resolver.
Signed-off-by: Itay Bookstein <ibookstein@gmail.com>
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D113352
To be more consistent with other pass struct names.
There are still more passes that don't end with "Pass", but these are the important ones.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D112935
- CUDA cannot associate memory space with pointer types. Even though Clang could add extra attributes to specify the address space explicitly on a pointer type, it breaks the portability between Clang and NVCC.
- This change proposes to infer the address space of a pointer from assumptions built upon target-specific address space predicates, such as `__isGlobal` from CUDA. E.g.,
```
void foo(float *p) {
  __builtin_assume(__isGlobal(p));
  // From there, we could assume p is a global pointer instead of a
  // generic one.
}
```
This makes the code portable without introducing implementation-specific features.
Note that NVCC supports __builtin_assume starting from version 11.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D112041
Turning on the `enable_noundef_analysis` flag allows better codegen by removing freeze instructions.
I modified clang by renaming the `enable_noundef_analysis` flag to `disable-noundef-analysis` and turning it off by default.
Test updates are made as a separate patch: D108453
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D105169
[Clang/Test]: Rename enable_noundef_analysis to disable-noundef-analysis and turn it off by default (2)
This patch updates test files after D105169.
Autogenerated test code is changed by `utils/update_cc_test_checks.py`, and non-autogenerated test code is changed as follows:
(1) I wrote a python script that (partially) updates the tests using regex: {F18594904} The script is not perfect, but I believe it gives hints about which patterns are updated to have `noundef` attached.
(2) The remaining tests are updated manually.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D108453
Resolve lit failures in clang after 8ca4b3e's land
Fix lit test failures in clang-ppc* and clang-x64-windows-msvc
Fix missing failures in clang-ppc64be* and retry fixing clang-x64-windows-msvc
Fix internal_clone(aarch64) inline assembly
Turning on the `enable_noundef_analysis` flag allows better codegen by removing freeze instructions.
I modified clang by renaming the `enable_noundef_analysis` flag to `disable-noundef-analysis` and turning it off by default.
Test updates are made as a separate patch: D108453
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D105169
The fix for PR52382 was already introduced in D112732 which ensures that module
instrumentation always runs after function instrumentation. This adds a test
that ensures the PR is addressed and prevents regressions.
Differential Revision: https://reviews.llvm.org/D113143
This is to revert commit f95bd18b5f (Revert "[Attr] support
btf_type_tag attribute") plus a bug fix.
The previous change failed to handle cases like below:
```
$ cat reduced.c
void a(*);
void a() {}
$ clang -c reduced.c -O2 -g
```
In such cases, during clang IR generation, for function a(),
CGCodeGen has numParams = 1 for FunctionType. But for
FunctionTypeLoc we have FuncTypeLoc.NumParams = 0. By using
FunctionType.numParams as the bound to access FuncTypeLoc
params, a random crash is triggered. The bug fix is to
check against FuncTypeLoc.NumParams before accessing
FuncTypeLoc.getParam(Idx).
Differential Revision: https://reviews.llvm.org/D111199
This reverts commits 737e4216c5 and
ce7ac9e66a.
After those commits, the compiler can crash with a reduced
testcase like this:
```
$ cat reduced.c
void a(*);
void a() {}
$ clang -c reduced.c -O2 -g
```
Constraint `*m` should be used when the address of a variable is passed
as a value. The constraint is missing for MS inline assembly when something
is written to the address of the variable.
The missing constraint would cause the FE to delete the definition of the static
variable, and then result in an "undefined reference to xxx" issue.
Reviewed By: xiangzhangllvm
Differential Revision: https://reviews.llvm.org/D113096
This patch adds clang codegen and llvm support
for the btf_type_tag attribute. Currently, btf_type_tag
attribute info is preserved in DebugInfo IR only for
pointer types associated with typedef, global variable
and function declaration. Eventually, such information
is emitted to dwarf.
The following is an example:
```
$ cat test.c
#define __tag __attribute__((btf_type_tag("tag")))
int __tag *g;
$ clang -O2 -g -c test.c
$ llvm-dwarfdump --debug-info test.o
...
0x0000001e: DW_TAG_variable
              DW_AT_name      ("g")
              DW_AT_type      (0x00000033 "int *")
              DW_AT_external  (true)
              DW_AT_decl_file ("/home/yhs/test.c")
              DW_AT_decl_line (2)
              DW_AT_location  (DW_OP_addr 0x0)
0x00000033: DW_TAG_pointer_type
              DW_AT_type      (0x00000042 "int")
0x00000038:   DW_TAG_LLVM_annotation
                DW_AT_name        ("btf_type_tag")
                DW_AT_const_value ("tag")
0x00000041:   NULL
0x00000042: DW_TAG_base_type
              DW_AT_name      ("int")
              DW_AT_encoding  (DW_ATE_signed)
              DW_AT_byte_size (0x04)
0x00000049: NULL
```
Basically, a DW_TAG_LLVM_annotation tag will be inserted
under DW_TAG_pointer_type tag if that pointer has a btf_type_tag
associated with it.
Differential Revision: https://reviews.llvm.org/D111199
Implement two builtins to pack/unpack the IBM extended long double float,
according to GCC's 'Basic PowerPC Builtin Functions Available on ISA 2.05'.
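A usage sketch (assumes a PowerPC target using the IBM 128-bit long double; signatures as documented by GCC):
```
long double make_ld(double hi, double lo) {
  // Combine two doubles into one IBM extended long double.
  return __builtin_pack_longdouble(hi, lo);
}

double high_part(long double ld) {
  // Index 0 extracts the high-order double, 1 the low-order double.
  return __builtin_unpack_longdouble(ld, 0);
}
```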
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D112055
I recently renamed some tests from vfredsum to vfredusum without
noticing they tested both ordered and unordered reductions. This
patch renames them back.
I've also merged test files for vfwredosum/vfwredusum into a single
file for consistency with the other 3 floating point reduction test
files. The inconsistency is what caused my original confusion.
This patch implements __builtin_reduce_max and __builtin_reduce_min as
specified in D111529.
The order of operations does not matter for min or max reductions and
they can be directly lowered to the corresponding
llvm.vector.reduce.{fmin,fmax,umin,umax,smin,smax} intrinsic calls.
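A minimal usage sketch (the vector type is just for illustration):
```
typedef float float4 __attribute__((ext_vector_type(4)));

float max_lane(float4 v) {
  // Reduces the four lanes; lowers to llvm.vector.reduce.fmax.
  return __builtin_reduce_max(v);
}
```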
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D112001
Basically, an inline builtin definition is shadowed by an externally visible
redefinition. This matches GCC behavior.
The implementation has to work around the fact that:
1. inline builtins are renamed at the call site during codegen, but
2. they may be shadowed by a later external definition
As a consequence, during codegen, we need to walk redecls and eventually rewrite
some call sites, which is totally inelegant.
Differential Revision: https://reviews.llvm.org/D112059
When D105690 changed the mnemonic from vf(w)redsum to vf(w)redusum,
several tests were deleted instead of being renamed.
This commit also consistently renames the other tests that weren't
deleted.
Verify that the resolver exists, that it is a defined
Function, and that its return type matches the ifunc's
type. Add corresponding check to BitcodeReader, change
clang to emit the correct type, and fix tests to comply.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D112349
This change fixes a crash where a null FunctionDecl was used to emit a diagnostic.
Instead of crashing, just avoid printing the declaration name when there's no
associated function declaration.
Differential Revision: https://reviews.llvm.org/D109402
Add i32x4.relaxed_trunc_f32x4_s, i32x4.relaxed_trunc_f32x4_u,
i32x4.relaxed_trunc_f64x2_s_zero, i32x4.relaxed_trunc_f64x2_u_zero.
These are only exposed as builtins, and require user opt-in.
Differential Revision: https://reviews.llvm.org/D112186
Upon further investigation and discussion,
this is actually the opposite direction from what we should be taking,
and this direction wouldn't solve the motivating problem anyway.
Additionally, some more (polly) tests have escaped being updated.
So, let's just take a step back here.
This reverts commit f3190dedee.
This reverts commit 749581d21f.
This reverts commit f3df87d57e.
This reverts commit ab1dbcecd6.
This patch implements __builtin_elementwise_abs as specified in
D111529.
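A minimal usage sketch (the vector type is an arbitrary illustration):
```
typedef short short8 __attribute__((ext_vector_type(8)));

short8 abs_lanes(short8 v) {
  // Lane-wise absolute value; the minimum value wraps to itself
  // (llvm.abs with the int-min-is-poison flag set to false).
  return __builtin_elementwise_abs(v);
}
```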
Reviewed By: aaron.ballman, scanon
Differential Revision: https://reviews.llvm.org/D111986
There's precedent for that in `CreateOr()`/`CreateAnd()`.
The motivation here is to avoid bloating the run-time check's IR
in `SCEVExpander::generateOverflowCheck()`.
Refs. https://reviews.llvm.org/D109368#3089809
While we could emit such a tautological `select`,
it would stick around until the next instsimplify invocation,
which may happen after we count the cost of this redundant `select`.
That is precisely what happens with loop vectorization legality checks,
and it artificially increases the cost of said checks,
which is bad.
There is prior art for this in `IRBuilderBase::CreateAnd()`/`IRBuilderBase::CreateOr()`.
Refs. https://reviews.llvm.org/D109368#3089809
This patch implements __builtin_elementwise_max and
__builtin_elementwise_min, as specified in D111529.
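A minimal usage sketch (the vector type is just for illustration):
```
typedef unsigned uint4 __attribute__((ext_vector_type(4)));

uint4 clamp_hi(uint4 v, uint4 hi) {
  // Lane-wise unsigned minimum; lowers to llvm.umin.
  return __builtin_elementwise_min(v, hi);
}
```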
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D111985
All but 2 of the vector builtins are only used by clang_builtin_alias.
When using clang_builtin_alias, the type string of the builtin is never
checked. Only the types in the function definition used for the alias
are checked.
This patch takes advantage of this to share a single builtin for
many different types. We already used type overloads on the IR intrinsics,
so the codegen for the builtins that are being merged was already
the same. This extends the type overloading to the builtins.
I had to make a few tweaks to make this work.
- Floating point vector-vector vmerge now uses the vmerge intrinsic
instead of the vfmerge intrinsic. New isel patterns and tests are
added to support this.
- The SemaChecking for the immediate of vset_v/vget_v has been removed.
Determining the valid range is harder now. I've added masking to
ManualCodegen to ensure valid IR for invalid input.
This reduces the number of builtins from ~25000 to ~1100.
Reviewed By: HsiangKai
Differential Revision: https://reviews.llvm.org/D112102
This patch splits the existing SveVectorBits LangOpt into VScaleMin and
VScaleMax LangOpts so that we can represent such an option. The cc1
option has also been split into -mvscale-{min,max}=<n> options so that the
cc1 arguments better reflect the vscale_range IR attribute.
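A sketch of the intended mapping (the flag spellings come from this patch; the exact invocation is illustrative):
```
// Compiled with, e.g.:
//   clang -cc1 -triple aarch64 -target-feature +sve \
//     -mvscale-min=2 -mvscale-max=4 -emit-llvm
// the function below should carry the IR attribute vscale_range(2,4).
void f(void) {}
```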
Differential Revision: https://reviews.llvm.org/D111790
Zvamo is not part of the 1.0 V spec. Remove the intrinsics
for now. This helps reduce clang binary size and lit test time.
Reviewed By: HsiangKai
Differential Revision: https://reviews.llvm.org/D111692
Based on post-commit review discussion on
2bd8493847 with Richard Smith.
Other uses of forcing HasEmptyPlaceHolder to false seem OK to me -
they're all around pointer/reference types where the pointer/reference
token will appear at the rightmost side of the left side of the type
name, so they make nested types (eg: the "int" in "int *") behave as
though there is a non-empty placeholder (because the "*" is essentially
the placeholder as far as the "int" is concerned).
This was originally committed in 277623f4d5
Reverted in f9ad1d1c77 due to breakages
outside of clang - lldb seems to have some strange/strong dependence on
"char [N]" versus "char[N]" when printing strings (not due to that name
appearing in DWARF, but probably due to using clang to stringify type
names) that'll need to be addressed, plus a few other odds and ends in
other subprojects (clang-tools-extra, compiler-rt, etc).
This reverts commit 121b2252de.
The following code causes a crash in some circumstances:
```
struct k {
  ~k() __attribute__((annotate(""))) {}
};
void m() { k(); }
```
Add relaxed f32x4.min, f32x4.max, f64x2.min, f64x2.max. These are only
exposed as builtins, and require user opt-in.
Differential Revision: https://reviews.llvm.org/D112146
This change implements new DAG nodes TABLE_GET/TABLE_SET, and lowering
methods for loads and stores of reference types from IR arrays. These
global LLVM IR arrays represent tables at the Wasm level.
Differential Revision: https://reviews.llvm.org/D111154
Add i8x16 relaxed_swizzle instructions. These are only
exposed as builtins, and require user opt-in.
Differential Revision: https://reviews.llvm.org/D112022
Now that the legacy PM is deprecated for the optimization pipeline, we
can start deleting legacy PM tests.
For tests that test both PMs, merge the RUN lines.
Delete tests specific to the legacy PM.
This patch updates test files after D105169.
Autogenerated tests are updated by `utils/update_cc_test_checks.py`, and non-autogenerated tests are updated as follows:
(1) I wrote a Python script that (partially) updates the tests using regexes: {F18594904} The script is not perfect, but I believe it gives hints about which patterns are updated to have `noundef` attached.
(2) The remaining tests are updated manually.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D108453
This patch removes the override in the AIX target,
so that int128 is enabled in 64-bit mode or with ForceEnableInt128.
Reviewed By: lkail
Differential Revision: https://reviews.llvm.org/D111078
This was committed as ec6c847179, but then reverted after a failure
in: https://lab.llvm.org/buildbot/#/builders/84/builds/13983
I was not able to reproduce the problem, but I added an extra check
for a NULL QualType just in case.
Original commit message:
The patch adds missing diagnostics for cases like:
```
float F3 = ((__float128)F1 * (__float128)F2) / 2.0f;
```
Sema::checkDeviceDecl (renamed to checkTypeSupport) is changed to work
with a type without the corresponding ValueDecl. It is also refactored
so that host diagnostics for unsupported types can be added here as
well.
Differential Revision: https://reviews.llvm.org/D109315
Looks like lldb has some issues with this - somehow it causes lldb to
treat a "char[N]" type as an array of chars (printing them out
individually), while a "char [N]" is printed as a string (even though the
DWARF doesn't have this string in it - it's something to do with the
string lldb generates for itself using clang).
This reverts commit 277623f4d5.
Based on post-commit review discussion on
2bd8493847 with Richard Smith.
Other uses of forcing HasEmptyPlaceHolder to false seem OK to me -
they're all around pointer/reference types where the pointer/reference
token will appear at the rightmost side of the left side of the type
name, so they make nested types (eg: the "int" in "int *") behave as
though there is a non-empty placeholder (because the "*" is essentially
the placeholder as far as the "int" is concerned).
The builtin __rlwnm is currently constrained to accept only constants
for the shift parameter, but the instructions emitted for it have no such
constraint. This patch allows the builtin to accept a variable shift.
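A sketch of what this permits (signature per the XL-compatible builtin; treat the details as illustrative):
```
// Before this patch the shift operand had to be a constant; now a
// runtime value is accepted and lowered to the same rlwnm instruction.
unsigned int rotl_masked(unsigned int value, unsigned int shift) {
  return __rlwnm(value, shift, 0xFFFFFFFF);
}
```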
Reviewed By: NeHuang, amyk
Differential Revision: https://reviews.llvm.org/D111229
This reverts commit 97f0c63783.
As discussed in https://reviews.llvm.org/D110684, it increased the
compile time and the binary size of clang by more than 1%. I reverted
this patch first to think about a better way to do it.
GCC 9.1 removed Intel MPX support. Linux kernel removed MPX in 2019.
glibc 2.35 will remove MPX.
Our support is limited: we support assembling of bndmov but not bnd.
Just remove it.
Reviewed By: pengfei, skan
Differential Revision: https://reviews.llvm.org/D111517
Rename vfredsum and vfwredsum to vfredusum and vfwredusum. Add aliases for vfredsum and vfwredsum.
Reviewed By: luismarques, HsiangKai, khchen, frasercrmck, kito-cheng, craig.topper
Differential Revision: https://reviews.llvm.org/D105690
Currently, the btf_tag attribute applies to declarations only.
Per discussion in https://reviews.llvm.org/D111199,
we plan to introduce a btf_type_tag attribute for types.
So rename btf_tag to btf_decl_tag to make it easily
distinguishable from btf_type_tag.
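A usage sketch after the rename (the tag string is illustrative):
```
// btf_decl_tag attaches to declarations (variables, functions, fields);
// the planned btf_type_tag will attach to types instead.
int counter __attribute__((btf_decl_tag("user")));
```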
Differential Revision: https://reviews.llvm.org/D111588
In the original design, we leverage the _mt intrinsics to define macros for
the _m intrinsics. For example:
```
__builtin_rvv_vadd_vv_i8m1_mt((vbool8_t)(op0), (vint8m1_t)(op1), (vint8m1_t)(op2), (vint8m1_t)(op3), (size_t)(op4), (size_t)VE_TAIL_AGNOSTIC)
```
However, we could no longer define a generic interface for mask
intrinsics, because clang_builtin_alias accepts only clang builtins as its
argument.
In the example,
```
__rvv_overloaded
__attribute__((clang_builtin_alias(__builtin_rvv_vadd_vv_i8m1_mt)))
vint8m1_t vadd(vbool8_t op0, vint8m1_t op1, vint8m1_t op2,
               vint8m1_t op3, size_t op4, size_t op5);
```
op5 is the tail-policy argument. In the previous design, users of the
vadd generic interface had to specify the tail policy explicitly for a
masked vector add. In this patch, we define the _m intrinsics as clang
builtins to solve the problem.
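With the _m intrinsics defined as builtins, the generic masked call no longer needs the tail-policy operand; a sketch, assuming the argument order of the prototype quoted above:
```
#include <riscv_vector.h>

// Masked vector add through the overloaded interface: mask, maskedoff,
// the two sources, and the vector length -- no tail-policy argument.
vint8m1_t masked_add(vbool8_t m, vint8m1_t maskedoff,
                     vint8m1_t a, vint8m1_t b, size_t vl) {
  return vadd(m, maskedoff, a, b, vl);
}
```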
Differential Revision: https://reviews.llvm.org/D110684
When AnnotateAttr is on a function, AddGlobalAnnotations is only called
in CodeGenModule::EmitGlobalFunctionDefinition, which means an AnnotateAttr
on a function declaration without a body is ignored.
This patch moves AddGlobalAnnotations to
CodeGenModule::SetFunctionAttributes, so the AnnotateAttr is code-generated
for a function with or without a body.
This helps the case where the AnnotateAttr is on an external function and
is consumed at the IR level.
For example, consider a pass that counts the uses of functions carrying
__attribute__((annotate("count_use"))) after optimizations: as long as the
annotation is present, functions with or without a body should be counted.
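A minimal sketch of the case this enables (names are hypothetical):
```
// With this patch, the annotation on this bodyless declaration is still
// emitted (into llvm.global.annotations), so an IR pass can observe it.
__attribute__((annotate("count_use"))) void external_fn(void);

void caller(void) { external_fn(); }
```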
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D111109
Patch by: python3kgae (Xiang Li)
The SSE4 header (smmintrin.h) should include SSSE3 (tmmintrin.h) instead
of SSE2 (emmintrin.h).
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D111482
fae0dfa implemented the new __ibm128 type; this patch enables its
complex form.
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D109948
There are functions where we do not want function instrumentation, which is why we have `__attribute__((no_instrument_function))`. This extends that functionality so instrumentation can also be disabled for Objective-C methods. Objective-C methods like `+load` run before main, and instrumenting them causes runtime errors depending on the implementation of `__cyg_profile_func_enter` and related functions.
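A hypothetical sketch of opting a premain method out of instrumentation (class and method names are illustrative):
```
#import <Foundation/Foundation.h>

@interface Initializer : NSObject
// +load runs before main; skipping instrumentation here avoids calling
// __cyg_profile_func_enter before the profiling runtime is ready.
+ (void)load __attribute__((no_instrument_function));
@end
```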
Reviewed By: rjmccall, aaron.ballman
Differential Revision: https://reviews.llvm.org/D111286