Update KEYALL to cover KEYCUDA. Introduce KEYMAX and
a generic way to update KEYALL.
Reviewed by: Dan Liew
Differential Revision: https://reviews.llvm.org/D125396
Create dxc_D as an alias to option D, which defines <macro> to <value> (or to 1 if <value> is omitted).
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D125338
D87451 added -mignore-xcoff-visibility for AIX targets and made it the default (which mimicked the behaviour of the XL 16.1 compiler on AIX).
However, ignoring hidden visibility has unwanted side effects and some libraries depend on visibility to hide non-ABI facing entities from user headers and
reserve the right to change these implementation details based on this (https://libcxx.llvm.org/DesignDocs/VisibilityMacros.html). This forces us to use
internal linkage fallbacks for these cases on AIX and creates an unwanted divergence in implementations on the platform.
For these reasons, it's preferable to not add -mignore-xcoff-visibility by default, which is what this patch does.
Reviewed By: DiggerLin
Differential Revision: https://reviews.llvm.org/D125141
In order to do offloading compilation we need to embed files into the
host and create fatbinaries. Clang uses a special binary format to
bundle several files along with their metadata into a single binary
image. This is currently performed using the `-fembed-offload-binary`
option. However, this is not very extensible, since it requires changing
the command flag every time we want to add something and makes optional
arguments difficult. This patch introduces a new tool called
`clang-offload-packager` that behaves similarly to CUDA's `fatbinary`.
This tool takes several input files with metadata and embeds them into a
single image that can then be embedded in the host.
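For illustration, usage might look roughly like the hypothetical invocation below (file names and the exact key spellings are placeholders; see the review for the tool's actual interface):
```
clang-offload-packager -o packaged.bin \
  --image=file=device.o,triple=nvptx64-nvidia-cuda,arch=sm_70
```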
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D125165
Because parseBracedList always stopped on the first closing angle
bracket and was used to parse the angle-bracketed expression inside a concept
definition, nested brackets inside concepts were parsed incorrectly.
The nextToken() call before calling parseBracedList is required because
otherwise the opening angle bracket would be processed a second time
inside parseBracedList, leading to incorrect logic after this fix.
Fixes https://github.com/llvm/llvm-project/issues/54943
Fixes https://github.com/llvm/llvm-project/issues/54837
Reviewed By: HazardyKnusperkeks, curdeius
Differential Revision: https://reviews.llvm.org/D123896
This patch adds the necessary code generation to create the wrapper code
that registers all the globals in CUDA. We create the necessary
functions and iterate through the entries beginning at
`__start_cuda_offloading_entries` to find which globals must be
registered. This is very similar to the code generation done currently
in Clang for non-rdc builds, but here we are registering a fully linked
fatbinary and finding the globals via the above sections.
With this we should be able to fully support basic RDC / LTO building of CUDA
code.
It's also worth noting that this does not include the necessary PTX to JIT the
image, so to use this support the offloading architecture must match the
system's architecture.
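As a rough C sketch of the registration logic (assuming the `__tgt_offload_entry` layout described in the D123471 entry below; the real implementation emits the equivalent LLVM IR, and the actual CUDA runtime calls are elided):
```
#include <stddef.h>
#include <stdint.h>

/* Entry layout as described for D123471 below. */
struct __tgt_offload_entry {
  void *addr;
  char *name;
  size_t size;
  int32_t flags;
  int32_t reserved;
};

/* Bounds provided by the linker for the cuda_offloading_entries section. */
extern struct __tgt_offload_entry __start_cuda_offloading_entries[];
extern struct __tgt_offload_entry __stop_cuda_offloading_entries[];

static void registerGlobals(void **FatbinHandle) {
  (void)FatbinHandle;
  for (struct __tgt_offload_entry *E = __start_cuda_offloading_entries;
       E != __stop_cuda_offloading_entries; ++E) {
    if (E->size == 0) {
      /* A size of 0 marks a kernel: register it as a function, e.g. via
         __cudaRegisterFunction(FatbinHandle, E->addr, E->name, ...). */
    } else {
      /* Otherwise register a variable; E->flags selects between plain,
         managed, surface, and texture registration. */
    }
  }
}
```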
Depends on D123810
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D123812
This patch adds the initial support for wrapping CUDA images. This
requires changing some of the logic for how we bundle images. We now
need to copy the image for all kinds that are active for the
architecture. Then we need to run a separate wrapping job if the Kind is
Cuda. For CUDA wrapping we need to use the `fatbinary` program from the
CUDA SDK to bundle all the binaries together. This is then passed to a
new function to perform the actual module code generation that will be
implemented in a later patch.
Depends on D120273 D123471
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D123810
The changes made in D123460 generalized the code generation for OpenMP's
offloading entries. We can use the same scheme to register globals for
CUDA code. This patch adds the code generation to create these
offloading entries when compiling using the new offloading driver mode.
The offloading entries are simple structs that contain the information
necessary to register the global. The struct used is as follows:
```
struct __tgt_offload_entry {
  void *addr;       // Pointer to the offload entry info
                    // (function or global).
  char *name;       // Name of the function or global.
  size_t size;      // Size of the entry info (0 if it is a function).
  int32_t flags;
  int32_t reserved;
};
```
Currently CUDA handles RDC code generation by deferring the registration
of globals in the current TU to a callback function containing the
module's ID. Later all the module IDs will be used to register all of the
globals at once. Rather than replicate this, offloading entries allow us to
mimic the way OpenMP registers globals. That is, we create a simple
global struct for each device global to be registered. These are placed
at a special section `cuda_offloading_entries`. Because this section name
is a valid C identifier, the linker will provide `__start` and `__stop`
pointers that we can use to iterate over and register all globals at runtime.
The registration requires a flag variable to indicate which registration
function to use. I have assigned the flags somewhat arbitrarily, but they
use the following values (see the sketch after this list):
Kernel: 0
Variable: 0
Managed: 1
Surface: 2
Texture: 3
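As an illustrative sketch (hypothetical names; Clang emits the equivalent LLVM IR), the entry generated for a kernel `foo` would look roughly like this, using the struct shown above:
```
void foo(void); /* host-side kernel stub */

__attribute__((section("cuda_offloading_entries")))
struct __tgt_offload_entry foo_entry = {
    (void *)&foo, /* addr: pointer to the kernel stub */
    "foo",        /* name of the kernel */
    0,            /* size: 0 because this entry is a function */
    0,            /* flags: 0 for a kernel */
    0             /* reserved */
};
```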
Depends on D120272
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D123471
This adds the -Wgnu-line-marker diagnostic flag, grouped under -Wgnu,
to warn about use of the GNU linemarker preprocessor extension.
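For example, the directive form below (a line number and file name without the `line` keyword) is the GNU extension being diagnosed:
```
# 42 "generated.c" /* GNU linemarker: now warns under -Wgnu-line-marker */
int x;             /* attributed to generated.c line 42 in diagnostics */
```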
Fixes #55067
Differential Revision: https://reviews.llvm.org/D124534
This adds support for variable stride with the val, uval, and ref linear
modifiers. Previously only the unmodified form ls<argno> was supported.
The new manglings (illustrated in the example after this list) are:
val -> Ls<argno>
uval -> Us<argno>
ref -> Rs<argno>
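For example, a hypothetical declaration using the val modifier with a variable stride; assuming 0-based argument numbering, the stride argument s is argno 1, so p's linear parameter would mangle as Ls1:
```
/* p is linear with the val modifier and variable stride s. */
#pragma omp declare simd linear(val(p) : s) uniform(s)
int foo(int *p, int s);
```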
Differential Revision: https://reviews.llvm.org/D125330
CUDA/HIP programs use __noinline__ like a keyword, e.g.
`__noinline__ void foo() {}`, since __noinline__ is defined
as a macro for __attribute__((noinline)) in the CUDA/HIP runtime
header files.
However, gcc and clang support __attribute__((__noinline__))
the same as __attribute__((noinline)), and some C++ libraries
use __attribute__((__noinline__)) in their header files.
When CUDA/HIP programs include such header files,
clang will emit an error about invalid attributes.
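For example, before this patch the following two uses could not coexist in a CUDA/HIP compilation:
```
/* Keyword-like use common in CUDA/HIP code (via the runtime's macro): */
__noinline__ void foo(void) {}

/* GNU attribute spelling used by some C++ library headers; the macro
   expansion previously turned this into an invalid attribute: */
__attribute__((__noinline__)) void bar(void) {}
```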
This patch fixes this issue by supporting __noinline__ as
a keyword, so that the CUDA/HIP runtime can remove
the macro definition.
Reviewed by: Aaron Ballman, Artem Belevich
Differential Revision: https://reviews.llvm.org/D124866
Summary:
We use the `--offload-new-driver` option to enable offload code
embedding. The check for when to do this was flawed and was enabling it
too early in the case of OpenMP, causing a segfault when dereferencing
the offloading toolchain.
Specifically for: !tbaa, !tbaa.struct, !annotation, !srcloc, !nosanitize.
The goal is to avoid test brittleness caused by hardcoded values.
Differential Revision: https://reviews.llvm.org/D123273
Add mangling for linear parameters specified with ref, uval, and val
for 'omp declare simd' vector functions.
Add the missing stride for linear `this` parameters.
Differential Revision: https://reviews.llvm.org/D125269
The controlling expression of a _Generic selection expression undergoes
lvalue conversion, array conversion, and function conversion before
picking the association. This means that array types, function types,
and qualified types are all unreachable code if they're used as an
association. I've been caught by this twice in the past few months and
I figure that if a WG14 member can't seem to remember this rule, users
are also likely to struggle with it. So this adds an on-by-default
unreachable code diagnostic for generic selection expression
associations.
Note, we don't have to worry about function types as those are already
a constraint violation which generates an error.
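For example, a minimal sketch of a case the new diagnostic catches:
```
int classify(void) {
  int arr[10];
  /* arr undergoes array-to-pointer conversion before the association is
     picked, so the int[10] association is unreachable and now warns. */
  return _Generic(arr,
                  int[10] : 0,  /* unreachable association */
                  int *   : 1); /* this association is selected */
}
```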
Differential Revision: https://reviews.llvm.org/D125259
In the case of placement new, if we do not know the alignment of the
operand, we can't assume it has the preferred alignment. It might be
e.g. a pointer to a struct member which follows ABI alignment rules.
This makes UBSAN no longer report "constructor call on misaligned
address" when constructing a double into a struct field of type double
on i686. The psABI specifies an alignment of 4 bytes, but the preferred
alignment used by Clang is 8 bytes.
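A concrete sketch of the i686 case described above:
```
/* Under the i686 psABI the double member gets 4-byte alignment, while
   Clang's preferred alignment for double is 8 bytes. */
struct S {
  int i;
  double d; /* offsetof(struct S, d) == 4 on i686 */
};
/* Placement-constructing a double at the address of d was previously
   checked against the preferred 8-byte alignment and flagged by UBSan. */
```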
We now use ABI alignment for allocating new as well, as the preferred
alignment should be used for over-aligning e.g. local variables, which
isn't relevant for ABI code dealing with operator new. AFAICT there
wouldn't be problems either way though.
Fixes#54845.
Differential Revision: https://reviews.llvm.org/D124736
This is generally a better default for tools other than the compiler, which
shouldn't assume a PCH file on disk is something they can consume.
Preserve the old behavior in places associated with libclang/c-index-test
(including ASTUnit) as there are tests relying on it and most important
consumers are out-of-tree. It's unclear whether the tests are specifically
trying to test this functionality, and what the downstream implications of
removing it are. Hopefully someone more familiar can clean this up in future.
Differential Revision: https://reviews.llvm.org/D125149
Currently for SVE2 ACLE builtins, single tests are used to verify both
clang code generation (when the feature is available) and semantic
error/warning messages (when the feature is unavailable). This
patch moves the semantic testing for the target feature flag into
dedicated Sema tests.
Differential Revision: https://reviews.llvm.org/D124850
Currently for SVE ACLE builtins, single tests are used to verify both
clang code generation (when the feature is available) and semantic
error/warning messages (when the feature is unavailable). This
patch moves the semantic testing into dedicated Sema tests.
Differential Revision: https://reviews.llvm.org/D124924
Summary:
By evaluating both child states, we are now capable of discovering
infeasible parent states. In this patch, `assume` is implemented in terms
of `assumeDual`. This might be suboptimal (e.g. where there are adjacent
assume(true) and assume(false) calls; the next patches address that). This patch
fixes a real crash.
Fixes https://github.com/llvm/llvm-project/issues/54272
Differential Revision: https://reviews.llvm.org/D124758
In some cases a parent State is already infeasible, but we recognize
this only if an additional constraint is added. This patch is the first
of a series to address this issue. In this patch `assumeDual` is changed
to clone the parent State but with an `Infeasible` flag set, and this
infeasible-parent is returned both for the true and false case. Then
when we add a new transition in the exploded graph and the destination
is marked as infeasible, the node will be a sink node.
Related bug:
https://github.com/llvm/llvm-project/issues/50883
Actually, this patch does not solve that bug in the solver; rather, with
this patch we can handle the general parent-infeasible case.
The next step would be to change the State API, require all checkers to
use the `assume*Dual` API, and deprecate the simple `assume` calls.
Hopefully, the next patch will introduce `assumeInBoundDual` and will
solve the crash we have here:
https://github.com/llvm/llvm-project/issues/54272
Differential Revision: https://reviews.llvm.org/D124674
It should be useful for clang-fuzzer itself, though my own motivation is
to use this in fuzzing clang-pseudo (clang-tools-extra/pseudo/fuzzer).
Differential Revision: https://reviews.llvm.org/D125166
Ensures a -Wenum-conversion warning happens when one of the enums is
signed and the other is unsigned. Also adds a test file to verify these
warnings.
Previously this warning would not happen, since the -Wsign-conversion check
would emit a diagnostic and then return, never allowing the
-Wenum-conversion checks to run.
For example:
```c
enum PE { P = -1 };
enum NE { N };
enum NE conv(enum PE E) { return E; }
```
Before this change, only a -Wsign-conversion diagnostic would be created,
and never a -Wenum-conversion one. Now it creates a diagnostic for both
-Wsign-conversion and -Wenum-conversion.
I could have changed it to warn only under -Wenum-conversion, as I
initially did. Seeing PR35200 (or GitHub Issue 316268), I let both
checks run so that the sign conversion can also generate a warning.
We know we're going to overwrite it anyway.
It'd be a bit of work to coordinate not generating it at all, but setting this
flag avoids generating ~10k of the 13k string.
Differential Revision: https://reviews.llvm.org/D125180
This patch restores the symmetry between how operator new and operator delete
are handled by also inlining the content of operator delete when possible.
Patch by Fred Tingaud.
Reviewed By: martong
Differential Revision: https://reviews.llvm.org/D124845
Like regular assignment, compound assignment operators can be assumed to
write to their left-hand side operand. So we strengthen the requirements
there. (Previously only the default read access had been required.)
Just like operator->, operator->* can also be assumed to dereference the
left-hand side argument, so we require read access to the pointee. This
will generate new warnings if the left-hand side has a pt_guarded_by
attribute. This overload is rarely used, but it was trivial to add, so
why not. (Supporting the builtin operator requires changes to the TIL.)
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D124966
This includes a fix for the libc++ issue I ran across with friend
declarations not properly being identified as overloads.
This reverts commit 45c07db31c.
D117829 added the generic "__builtin_reduce_mul" which we can use to replace the x86 specific integer mul reduction builtins - internally these were mapping to the same intrinsic already so there are no test changes required.
Differential Revision: https://reviews.llvm.org/D125222
Similar to the existing bitwise reduction builtins, this lowers to a llvm.vector.reduce.mul intrinsic call.
For other reductions, we've tried to share builtins for float/integer vectors, but the fmul reduction intrinsic also takes a starting-value argument and can be either unordered or serialized, but cannot do the reduction trees specified for the builtins. However we end up addressing fmul support, this shouldn't affect the integer case.
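For example, using Clang's vector extension, the integer form is simply:
```
typedef int v4si __attribute__((ext_vector_type(4)));

/* Lowers to an llvm.vector.reduce.mul intrinsic call. */
int product(v4si v) { return __builtin_reduce_mul(v); }
```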
Differential Revision: https://reviews.llvm.org/D117829
Fixes https://github.com/llvm/llvm-project/issues/54522.
This fixes a regression introduced in 5e5efd8a91.
Before the culprit commit, macros in WhitespaceSensitiveMacros were correctly formatted even if their closing parenthesis wasn't followed by a semicolon (or, to be precise, when it was followed by a newline).
That commit changed the macro token's type from TT_UntouchableMacroFunc to TT_FunctionLikeOrFreestandingMacro.
Correct formatting (with `WhitespaceSensitiveMacros = ['FOO']`):
```
FOO(1+2)
FOO(1+2);
```
Regressed formatting:
```
FOO(1 + 2)
FOO(1+2);
```
Reviewed By: HazardyKnusperkeks, owenpan, ksyx
Differential Revision: https://reviews.llvm.org/D123676