CUDA/HIP programs use __noinline__ as if it were a keyword, e.g.
__noinline__ void foo() {}, because __noinline__ is defined
as a macro for __attribute__((noinline)) in the CUDA/HIP runtime
header files.
However, gcc and clang support __attribute__((__noinline__))
the same as __attribute__((noinline)), and some C++ libraries
use __attribute__((__noinline__)) in their header files.
When CUDA/HIP programs include such header files, the macro
expands inside the attribute and clang emits an error about an
invalid attribute.
This patch fixes the issue by supporting __noinline__ as
a keyword, so that the CUDA/HIP runtime can remove
the macro definition.
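For illustration, a minimal sketch (function names are hypothetical) of the two spellings
that can now coexist once the macro is gone:

  __noinline__ __device__ void dev_fn() {}        // __noinline__ accepted as a keyword
  __attribute__((__noinline__)) void lib_fn() {}  // spelling used by some C++ library headers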
Reviewed by: Aaron Ballman, Artem Belevich
Differential Revision: https://reviews.llvm.org/D124866
CUDA/HIP determines whether a function can be called based on
the device/host attributes of the callee and the caller. Clang assumes the
caller is CurContext. This is correct in most cases; however, it is
not correct in an OpenMP parallel region when a CUDA/HIP program
is compiled with -fopenmp. This causes incorrect overloading
resolution and missed diagnostics.
To get the correct caller, clang needs to chase the parent chain
of DeclContext starting from CurContext until a function decl
or a lambda decl is reached. A Sema API is adapted to achieve that
and is used to determine the caller in the hostness check.
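A hypothetical example of the situation being fixed:

  __device__ int get();            // device overload
  __host__ int get();              // host overload

  void work() {                    // host function
  #pragma omp parallel
    {
      int x = get();               // CurContext here is the OpenMP captured region, not work();
                                   // clang now walks the DeclContext parent chain to find the
                                   // enclosing function before doing the hostness check
      (void)x;
    }
  }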
Reviewed by: Artem Belevich, Richard Smith
Differential Revision: https://reviews.llvm.org/D121765
A constexpr variable may be initialized with the address of a non-const
variable. In that case the initializer is not constant in device compilation.
This was handled for const variables but not for constexpr variables.
This patch makes the handling of const and constexpr variables
consistent.
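A small hypothetical example of the case being made consistent:

  int g;                        // non-const variable
  constexpr int *p = &g;        // initializer is not constant in device compilation
  const int *const q = &g;      // the analogous const case was already handled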
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D119615
Fixes: https://github.com/llvm/llvm-project/issues/53780
A significant number of our tests in C accidentally use functions
without prototypes. This patch converts the function signatures to have
a prototype for the situations where the test is not specific to K&R C
declarations. e.g.,
void func();
becomes
void func(void);
This is the third batch of tests being updated (there are a significant
number of other tests left to be updated).
Currently clang treats a host variable's address as constant in device compilation,
which causes const variables initialized with a host variable's address to be
incorrectly promoted to device variables, resulting in undefined symbols.
This patch fixes that.
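A sketch of the failing pattern (names are illustrative):

  int x;                        // host-only variable, not emitted on the device side
  const int *const p = &x;      // was wrongly promoted to a device variable because &x was
                                // treated as constant, leaving x as an undefined symbol there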
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D118153
Fixes: SWDEV-309881
Change-Id: I0a69357063c6f8539ef259c96c250d04615f4473
Consider the case where the `__int128` type is supported by the host target but
not by a device target (e.g. spirv*). Clang emits an error message for the
unsupported type even if the device code does not use it. This patch
fixes this issue by emitting the error message only when the device code
attempts to use the unsupported type.
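An illustrative sketch, assuming a device target without __int128 support:

  __int128 h;                       // host-only use: compiles cleanly
  __device__ void f() {
    // __int128 d;                  // only an actual device-side use is diagnosed
  }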
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D111047
There is no need to check for deferred diagnostics when neither device compilation
nor an offloading target is in use. This results in a considerable build-time
improvement in some cases.
Differential Revision: https://reviews.llvm.org/D109175
HIP currently diagnoses capture of the this pointer by device lambdas in
host member functions. If the this pointer points to managed memory,
it can be used in both device and host functions. In that
situation, capturing the this pointer in a device lambda
inside a host member function is valid usage. Change the diagnostic
about capturing the this pointer to a warning.
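A sketch of the now-accepted pattern (names are illustrative; *this is assumed to live in
managed memory):

  struct S {
    int x;
    void run() {                                    // host member function
      auto l = [this] __device__ () { return x; };  // capturing 'this' is now a warning,
                                                    // not an error
    }
  };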
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D108493
Although clang is able to defer overloading resolution
diagnostics for ordinary functions, it does not defer
diagnostics caused by overloading resolution of overloaded
operators.
This patch extends the existing deferred
diagnostic mechanism to defer diagnostics caused
by overloaded operators.
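A hypothetical example of a diagnostic that can now be deferred:

  struct A {};
  __host__ A operator+(const A &, const A &);   // host-only operator

  __host__ __device__ A f(A a, A b) {
    return a + b;   // resolves to the host operator; the error is deferred and emitted only
                    // if f() is actually emitted on the device side
  }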
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D104505
Recently we added diagnosing of ODR-uses of host variables
in device functions, including ODR-uses of const
host variables, since const host variables are not actually emitted
on the device side. This caused regressions because we used
to allow ODR-use of const host variables in device
functions.
This patch allows ODR-use of const variables in device
functions if the const variables can be statically initialized
and have an empty destructor. Such variables are given an
implicit constant attribute and emitted on the device side. This is
in line with what clang does for constexpr variables.
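A sketch of the now-allowed usage (the helper name is hypothetical):

  const int N = 16;                   // statically initializable, empty dtor
  __device__ int use(const int *p);   // hypothetical device helper

  __device__ int f() {
    return use(&N);                   // ODR-use of N; N gets an implicit constant attribute
                                      // and is emitted on the device side
  }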
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D103108
Currently clang and nvcc use C++14 as the default standard for C++,
and gcc 11 even uses C++17. However,
clang uses C++98 as the default standard for CUDA/HIP.
As C++14 has been well adopted and has become the default for
clang, it seems reasonable to use C++14 as the default standard
for CUDA/HIP as well.
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D103221
Variables emitted on both the host and device side with different addresses
should not, when ODR-used by a host function, cause the device-side counterpart
to be force-emitted.
This fixes the regression caused by https://reviews.llvm.org/D102237
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D102801
It doesn't really make sense to emit language-specific diagnostics
in a discarded statement, and suppressing these diagnostics enables a
programming pattern that many users will find quite useful.
Basically, this makes sure we only emit errors from the 'true' side of a
'constexpr if'.
It does this by making the ExprEvaluatorBase type have an opt-in option
as to whether it should visit discarded cases.
Differential Revision: https://reviews.llvm.org/D102251
Add a builtin function, __builtin_get_device_side_mangled_name,
to get the device-side mangled name of functions and global
variables. It can be used to look up the symbol address of kernels
or variables by mangled name in dynamically loaded
bundled code objects at run time.
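A usage sketch (the runtime module-API calls mentioned in the comment are illustrative):

  __global__ void kern(float *p);
  __device__ float gvar;

  const char *kern_name = __builtin_get_device_side_mangled_name(kern);
  const char *gvar_name = __builtin_get_device_side_mangled_name(gvar);
  // e.g. pass kern_name/gvar_name to hipModuleGetFunction()/hipModuleGetGlobal()
  // after loading a bundled code object at run time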
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D99301
In -fgpu-rdc mode, static device variables in different TUs may have the same name.
To support accessing file-scope static device variables in host code, we need to give them
a distinct name and external linkage. This can be done by postfixing each static device variable with
a distinct CUID (Compilation Unit ID) hash.
Since the static device variables then have distinct names across compilation units, we let
them have external linkage so that they can be looked up by the runtime.
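For illustration, if a.cu and b.cu each contain a file-scope static device variable with
the same name:

  static __device__ int counter;   // in both a.cu and b.cu

with -fgpu-rdc each instance is postfixed with its TU's CUID hash and given external
linkage, so the runtime can look up the instance belonging to the right TU.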
Reviewed by: Artem Belevich and Jon Chesterfield
Differential Revision: https://reviews.llvm.org/D85223
Defer constant checking of dependent initializers to template instantiation,
since it cannot be done for dependent values.
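A hypothetical example of such a dependent initializer:

  template <typename T> struct Wrapper {
    static constexpr T value = T();   // dependent initializer; its constness can only be
                                      // checked once Wrapper<T> is instantiated
  };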
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D95840
This patch implements codegen for the __managed__ variable attribute for HIP.
Diagnostics will be added later.
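A sketch of the attribute whose codegen is implemented here:

  __managed__ int counter = 0;               // visible to both host and device code

  __global__ void bump() { ++counter; }      // device access
  void host_bump() { ++counter; }            // host access to the same managed variable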
Differential Revision: https://reviews.llvm.org/D94814
The defaulted destructor was treated inconsistently compared to other
compiler-generated functions.
When Sema::IdentifyCUDATarget() was called on a just-created dtor which did not yet
have the implicit __host__ __device__ attributes applied, it would treat it as a
host function. That happened to (sometimes) hide the error when the dtor referred
to host-only functions.
Even when we had identified the defaulted dtor as an HD function, we still treated it
inconsistently during selection of usual deallocators, where we did not allow it to
refer to wrong-side functions, while that is allowed for other HD functions.
This change brings handling of defaulted dtors in line with other HD functions.
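A simplified, hypothetical illustration of the first inconsistency:

  struct HostOnly { __host__ ~HostOnly(); };
  struct S {
    HostOnly m;
    ~S() = default;   // implicitly __host__ __device__; its reference to HostOnly's host-only
                      // dtor is now handled the same way as for any other HD function
  };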
Differential Revision: https://reviews.llvm.org/D94732
`isCUDADeviceBuiltinSurfaceType()`/`isCUDADeviceBuiltinTextureType()` do not
work on dependent types as they rely on specific type attributes.
Differential Revision: https://reviews.llvm.org/D92893
This patch implements correct hostness based overloading resolution
in isBetterOverloadCandidate.
Based on hostness, if one candidate is emittable whereas the other
candidate is not emittable, the emittable candidate is better.
If both candidates are emittable, or neither is emittable based on hostness, then
other rules should be used to determine which is better. This is because
hostness-based overloading resolution is mostly for determining
the viability of a function. If two functions are both viable, other factors
should take precedence.
If other rules cannot determine which is better, CUDA preference will be
used again to determine which is better.
However, correct hostness-based overloading resolution
requires overloading resolution diagnostics to be deferred,
which is not on by default. The rationale is that deferring
overloading resolution diagnostics may hide overloading resolution
issues in header files.
An option -fgpu-exclude-wrong-side-overloads is added, which is off by
default.
When -fgpu-exclude-wrong-side-overloads is off, keep the original behavior,
that is, exclude wrong-side overloads only if there are same-side overloads.
This may result in incorrect overloading resolution when there are no
same-side candidates, but is sufficient for most CUDA/HIP applications.
When -fgpu-exclude-wrong-side-overloads is on, enable deferring
overloading resolution diagnostics and enable correct hostness
based overloading resolution, i.e., always exclude wrong side overloads.
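An illustrative sketch of the emittability rule described above, in device compilation:

  __device__ void f(float);        // emittable from a device caller
  __host__ void f(int);            // not emittable from a device caller

  __device__ void g() { f(1); }    // the emittable f(float) is considered better even though
                                   // f(int) would otherwise be the closer match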
Differential Revision: https://reviews.llvm.org/D80450
This patch diagnoses invalid references to global host variables in device,
global, or host device functions.
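A hypothetical example of what is now diagnosed:

  int h;                                       // global host variable
  __device__ void f() { h = 1; }               // invalid reference from a device function
  __global__ void k() { int x = h; (void)x; }  // likewise from a kernel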
Differential Revision: https://reviews.llvm.org/D91281
callee in constant evaluation.
We previously made a deep copy of function parameters of class type when
passing them, resulting in the destructor for the parameter applying to
the original argument value, ignoring any modifications made in the
function body. This also meant that the 'this' pointer of the function
parameter could be observed changing between the caller and the callee.
This change completely reimplements how we model function parameters
during constant evaluation. We now model them roughly as if they were
variables living in the caller, albeit with an artificially reduced
scope that covers only the duration of the function call, instead of
modeling them as temporaries in the caller that we partially "reparent"
into the callee at the point of the call. This brings some minor
diagnostic improvements, as well as significantly reduced stack usage
during constant evaluation.
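A hypothetical C++20 sketch of the first observable difference described above:

  struct Counter {
    int n;
    int *out;
    constexpr ~Counter() { *out = n; }      // the dtor now sees modifications made in the callee
  };

  constexpr void bump(Counter c) { c.n = 42; }

  constexpr int run() {
    int result = 0;
    bump(Counter{0, &result});
    return result;                          // 42 with the new modeling; previously the dtor
                                            // applied to the original argument value (0)
  }
  static_assert(run() == 42);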
In CUDA/HIP a function may become an implicit host device function via
pragma or constexpr. A host device function is checked in both
host and device compilation. However, it may be emitted only
on the host or the device side, therefore the diagnostics should be
deferred until it is known to be emitted.
Currently clang is only able to defer certain diagnostics. This causes
false alarms and limits the usefulness of host device functions.
This patch lets clang defer all overloading resolution diagnostics for host device functions.
An option -fgpu-defer-diag is added to control this behavior. By default
it is off.
It is NFC for other languages.
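A rough, hypothetical sketch of the kind of false alarm this addresses (treating constexpr
functions as implicitly host device; the helper name is made up):

  void report(const char *);            // host-side helper with no device overload

  constexpr int check(int x) {          // implicitly __host__ __device__ via constexpr
    if (x < 0)
      report("negative");               // in device compilation the resolution diagnostic is
                                        // deferred (with -fgpu-defer-diag) until check() is
                                        // known to be emitted on the device side
    return x;
  }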
Differential Revision: https://reviews.llvm.org/D84364
When a device function calls a host function or vice versa, this is a wrong-sided
reference. Currently clang diagnoses it immediately. This differs from nvcc's behavior,
where it is diagnosed only if the function is actually emitted.
The current clang behavior causes false alarms for valid use cases.
This patch lets clang always defer diagnostics for wrong-sided
references.
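A hypothetical example of a wrong-sided reference whose diagnostic is now deferred:

  void host_fn();
  __device__ void dev_fn() { host_fn(); }   // diagnosed only if dev_fn() is actually emitted
                                            // for the device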
Differential Revision: https://reviews.llvm.org/D83893
This patch lets lambdas be host device by default and adds diagnostics for
capturing a host variable by reference in a device lambda.
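An illustrative sketch:

  void host_fn() {
    int x = 0;
    auto bad = [&] __device__ () { return x; };   // capturing a host variable by reference in a
                                                  // device lambda is diagnosed
    auto ok  = [] (int y) { return y + 1; };      // a plain lambda is host device by default
  }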
Differential Revision: https://reviews.llvm.org/D78655
This reverts commit 263390d4f5.
This can still cause bogus errors:
eigen3/Eigen/src/Core/CoreEvaluators.h:94:38: error: call to implicitly-deleted copy constructor of 'unary_evaluator<Eigen::Inverse<Eigen::Matrix<double, 4, 4, 0, 4, 4>>>'
thrust/system/detail/generic/for_each.h:49:3: error: implicit instantiation of undefined template
'thrust::detail::STATIC_ASSERTION_FAILURE<false>'
Recommit e03394c6a6 with a fix.
When an implicit HD function calls a function in device compilation,
if one candidate is an implicit HD function, the current resolution rules are:
D wins over HD and H;
HD and H are equal.
This caused a regression when there is an otherwise worse D candidate.
This patch changes the rules so that
D, HD, and H are all equal.
The rationale is that for host compilation we already know there is
a valid candidate among the HD and H candidates that will not cause an error. Allowing
HD and H gives us a fallback candidate that will not cause an error. If D wins,
that means D has to be a better match by the other rules, therefore D should also
be a valid candidate that will not cause an error. In this way, we can guarantee
no regression.
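A rough, hypothetical sketch of the kind of overload set affected (treating constexpr
functions as implicitly HD):

  __device__ long f(long);      // D candidate, a worse match for an int argument
  constexpr int f(int);         // implicitly HD candidate, the better match

  constexpr int g(int x) {      // implicitly HD caller
    return f(x);                // in device compilation f(int) now wins; previously the D
                                // candidate f(long) was preferred merely for being device
  }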
Differential Revision: https://reviews.llvm.org/D80450
constexpr variables are compile-time constants and implicitly const, therefore
they are safe to emit on both the device and the host side. Besides, in many cases
they are intended for both device and host, therefore it makes sense
to emit them on both the device and host side when necessary.
In most cases constexpr variables are used as rvalues and the variables
themselves do not need to be emitted. However, if their address is taken,
then they need to be emitted.
For C++14, clang is able to handle that, since clang emits them with
available_externally linkage together with the initializer.
However, for C++17, constexpr static data members of a class or class template
implicitly become inline variables. Therefore they become definitions with
linkonce_odr or weak_odr linkage. As such, they cannot have available_externally
linkage.
This patch fixes that by adding an implicit constant attribute to
file-scope constexpr variables and constexpr static data members
in device compilation.
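An illustrative sketch of the C++17 case:

  struct Config {
    static constexpr int size = 64;   // implicitly an inline variable in C++17
  };

  __device__ const int *f() {
    return &Config::size;             // address taken, so the variable itself must be emitted;
                                      // it now gets an implicit constant attribute in device
                                      // compilation
  }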
Differential Revision: https://reviews.llvm.org/D79237