Currently, for explicit template function instantiation in CUDA/HIP device
compilation, clang emits the instantiated kernel with external linkage
and the instantiated device function with internal linkage.
This is fine for -fno-gpu-rdc since there is only one TU.
However, for -fgpu-rdc this causes duplicate symbols for kernels if
the same instantiation happens in multiple TUs, or missing symbols
if a device function calls an explicitly instantiated template function
in a different TU.
To make explicit template function instantiation work for
-fgpu-rdc we need to follow the C++ linkage paradigm, i.e.
use weak_odr linkage.
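For illustration, a minimal sketch of the colliding pattern (names are hypothetical):
```
// a.cu and b.cu both contain this explicit instantiation. With external
// linkage the two device-side copies collide at -fgpu-rdc link time;
// weak_odr linkage lets the linker fold them into one.
template <typename T>
__global__ void scale(T *p, T s) { *p *= s; }

template __global__ void scale<float>(float *, float);
```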
Differential Revision: https://reviews.llvm.org/D90311
D17779: host-side shadow variables of external declarations of device-side
global variables have internal linkage and are referenced by
`__cuda_register_globals`.
nvcc from CUDA 11 does not allow `__device__ inline` or `__device__ constexpr`
(C++17 inline variables) but clang has incorrectly supported them for a while:
```
error: A __device__ variable cannot be marked constexpr
error: An inline __device__/__constant__/__managed__ variable must have internal linkage when the program is compiled in whole program mode (-rdc=false)
```
If such a variable (which has a comdat group) is discarded (a copy from another
translation unit is prevailing and selected), accessing the variable from
outside the section group (`__cuda_register_globals`) is a violation of the ELF
specification and will be rejected by linkers:
> A symbol table entry with STB_LOCAL binding that is defined relative to one of a group's sections, and that is contained in a symbol table section that is not part of the group, must be discarded if the group members are discarded. References to this symbol table entry from outside the group are not allowed.
As a workaround, don't register such inline variables for now.
(If we register the variables in all TUs, we will keep multiple instances of the shadow and break the C++ semantics for inline variables).
We should reject such variables in Sema but our internal users need some time to migrate.
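For illustration, variables of the kind in question (a hypothetical sketch):
```
__device__ constexpr int kSize = 64;   // rejected outright by nvcc
__device__ inline int scratch[kSize];  // C++17 inline variable: placed in a
                                       // comdat and possibly discarded when
                                       // another TU's copy prevails
```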
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D88786
To facilitate faster loading of device binaries and sharing them among processes,
the HIP runtime prefers their alignment to be 4096 bytes. The HIP runtime can load
unaligned device binaries; however, aligning them at 4096 bytes results in
faster loading and less shared memory usage.
This patch adds an option -bundle-align to clang-offload-bundler which allows
bundles to be aligned at a specified alignment. By default it is 1, which is NFC
compared to the existing format.
This patch then aligns the embedded fat binary, and the device binaries inside
the fat binary, at 4096 bytes.
It has been verified that this change does not cause a significant overall file size
increase for typical HIP applications (less than 1%).
Differential Revision: https://reviews.llvm.org/D88734
A static device variable may be accessed in host code through
cudaMemcpyFromSymbol etc. Currently clang does not
emit the static device variable if it is only referenced by
host code, which causes host code to fail at run time.
This patch fixes that.
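A sketch of the previously failing pattern (illustrative names):
```
#include <cuda_runtime.h>

static __device__ int result;  // only referenced by the host code below

int readResult() {
  int host_copy = 0;
  // Before this fix, 'result' was not emitted for the device, so the
  // runtime's symbol lookup behind this call failed.
  cudaMemcpyFromSymbol(&host_copy, result, sizeof(host_copy));
  return host_copy;
}
```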
Differential Revision: https://reviews.llvm.org/D88115
- After loading builtin bitcode for linking, skip adding default
function attributes on LLVM intrinsics as their attributes are
well-defined and retrieved directly from internal definitions. Adding
extra attributes on intrinsics results in inconsistent results when
`-save-temps` is present. Also, it makes a few optimizations
conservative.
Differential Revision: https://reviews.llvm.org/D87761
Temporarily revert commit 04abbb3a78
due to regressions in some HIP apps caused by backend issues revealed by
this change.
Will re-commit it when the backend issues are fixed.
- Skip generating profile data on `__global__` functions in the host
compilation. They are host-side stub functions only and don't have
profile instrumentation generated on the real function body. The extra
profile data results in malformed instrumentation profile data.
- Skip generating region mapping on functions on the wrong side, i.e.,
+ For the device compilation, skip host-only functions; and,
+ For the host compilation, skip device-only functions (including
`__global__` functions.)
- As device-side profiling is not ready yet, only host-side profile
code generation is checked.
Differential Revision: https://reviews.llvm.org/D85276
Add address space to indirect abi info and use it for kernels.
Previously, indirect arguments assumed a stack-passed object
in the alloca address space using byval. A stack pointer is unsuitable
for kernel arguments, which are passed in a separate, constant buffer
with a different address space.
Start using the new byref for aggregate kernel arguments. Previously
these were emitted as raw struct arguments, and turned into loads in
the backend. These will lower identically, although with byref you now
have the option of applying an explicit alignment. In the future, a
reasonable implementation would use byref for all kernel arguments
(this would be a practical problem at the moment due to losing things
like noalias on pointer arguments).
This is mostly to avoid fighting the optimizer's treatment of
aggregate load/store. SROA and instcombine both turn aggregate loads
and stores into a long sequence of element loads and stores, rather
than the optimizable memcpy I would expect in this situation. Now an
explicit memcpy will be introduced up-front which is better understood
and helps eliminate the alloca in more situations.
This skips using byref in the case where HIP kernel pointer arguments
in structs are promoted to global pointers. At minimum an additional
patch is needed to allow coercion with indirect arguments. This also
skips using it for OpenCL due to the current workaround used to
support kernels calling kernels. Distinct function bodies would need
to be generated up front instead of emitting an illegal call.
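For reference, a sketch of an aggregate kernel argument of the kind affected (names are illustrative):
```
struct Params {
  float scale;
  float bias[8];
};

// Previously passed as a raw struct argument and split into element loads by
// the backend; with byref the aggregate stays in the constant kernarg buffer
// and is copied out with an explicit, optimizer-friendly memcpy.
__global__ void apply(Params p, float *out) {
  out[threadIdx.x] = p.scale * out[threadIdx.x] + p.bias[threadIdx.x % 8];
}
```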
nvcc supports accessing file-scope static device variables in host code by host APIs
like cudaMemcpyToSymbol etc.
CUDA/HIP lets users access device variables in host code via shadow variables. In host compilation,
clang emits a shadow variable for each device variable, and calls __*RegisterVariable to
register it in the init function. The address of the shadow variable and the device-side mangled
name of the device variable are passed to __*RegisterVariable. The runtime looks up the symbol
by name in the device binary to find the address of the device variable.
The problem with static device variables is that they have internal linkage, therefore their
names may be changed by the linker if there are multiple symbols with the same name. Also
they end up as local symbols in the ELF file, whereas the runtime only looks up global symbols.
Another reason for giving static device variables external linkage is that they may be
initialized externally by host code and their final value may be accessed by host code
after kernel execution, therefore they effectively have external linkage. Giving them internal
linkage would cause incorrect optimizations on them.
To support accessing static device variables in host code for -fno-gpu-rdc mode, change the internal
linkage to external linkage. The name does not need to change since there is only one TU in
-fno-gpu-rdc mode. Also, the externalization is done only if the device static variable is referenced
by host code.
Differential Revision: https://reviews.llvm.org/D80858
This patch lets lambdas be host device by default and adds diagnostics for
capturing a host variable by reference in a device lambda.
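A sketch of both halves of the change (the exact diagnostic wording is not shown):
```
void host_fn() {
  int x = 42;
  auto ok  = [=] { return x + 1; };           // now implicitly host device
  auto bad = [&] __device__ () { return x; }; // diagnosed: device lambda
                                              // captures a host variable by
                                              // reference
}
```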
Differential Revision: https://reviews.llvm.org/D78655
Summary:
- `ptrtoint` and `inttoptr` are defined as no-op casts if the integer
value has the same size as the pointer value. The pair of
`ptrtoint`/`inttoptr` is in fact a no-op cast sequence between
different address spaces. Teach `infer-address-spaces` to handle them
like a `bitcast`.
Reviewers: arsenm, chandlerc
Subscribers: jvesely, wdng, nhaehnle, hiraditya, kerbowa, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D81938
constexpr variables are compile time constants and implicitly const, therefore
they are safe to emit on both device and host side. Besides, in many cases
they are intended for both device and host, therefore it makes sense
to emit them on both device and host sides if necessary.
In most cases constexpr variables are used as rvalues and the variables
themselves do not need to be emitted. However, if their address is taken,
then they need to be emitted.
For C++14, clang is able to handle that since clang emits them with
available_externally linkage together with the initializer.
However, for C++17, constexpr static data members of a class or template class
implicitly become inline variables. Therefore they become definitions with
linkonce_odr or weak_odr linkage. As such, they cannot have available_externally
linkage.
This patch fixes that by adding an implicit constant attribute to
file-scope constexpr variables and constexpr static data members
in device compilation.
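A sketch of the C++17 case (illustrative names):
```
struct Limits {
  static constexpr float kEps = 1e-6f;  // implicitly an inline variable in
                                        // C++17, i.e. a definition with
                                        // linkonce_odr/weak_odr linkage
};

__global__ void kern(float const **out) {
  *out = &Limits::kEps;  // address taken: must be emitted on the device side
}
```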
Differential Revision: https://reviews.llvm.org/D79237
Canonicalize on storing FP options in LangOptions instead of
redundantly in CodeGenOptions. Incorporate -ffast-math directly
into the values of those LangOptions rather than considering it
separately when building FPOptions. Build IR attributes from
those options rather than a mix of sources.
We should really simplify the driver/cc1 interaction here and have
the driver pass down options that cc1 directly honors. That can
happen in a follow-up, though.
Patch by Michele Scandale!
https://reviews.llvm.org/D80315
NoDebug attr does not totally eliminate debug info about a function when
inlining is enabled. This is inconsistent with when inlining is disabled.
This patch fixes that.
Differential Revision: https://reviews.llvm.org/D79967
If we're going to assume references are dereferenceable, we should also
assume they're aligned: otherwise, we can't actually dereference them.
See also D80072.
Differential Revision: https://reviews.llvm.org/D80166
The stub function is generated by the compiler and its instructions have nothing
to do with the kernel source code.
Currently clang generates debug info for the stub function, which causes
confusion for the HIP debugger. For example, when users set a breakpoint
on a line of a kernel, the debugger should break on that line when the kernel is
executed and reaches it, but instead the debugger breaks in the stub function.
This patch disables debug info for the stub function for HIP.
Differential Revision: https://reviews.llvm.org/D79866
Summary:
- If the coerced type is still a pointer, it should be set with proper
parameter attributes, such as `noalias`, `nonnull`, etc. Hoist
the (pointer) parameter attribute setting so that the coerced pointer
parameter can be marked properly.
Depends on D79394
Reviewers: rjmccall, kerbowa, yaxunl
Subscribers: jvesely, nhaehnle, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D79395
Move function emitDeferredDiags from Sema to DeferredDiagsEmitter since it
is only used by DeferredDiagsEmitter.
Also skip visited functions to avoid exponential compile time.
Differential Revision: https://reviews.llvm.org/D77028
Summary:
- `RegisterVar` has a `void` return type and takes `size_t` as its variable-size
parameter in HIP or CUDA 9.0+.
Reviewers: tra, yaxunl
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D77398
The main purpose of introducing these builtins is to add range
metadata [1, 1025) on the work group size loaded from the dispatch
ptr, which cannot be done from source code.
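Assuming these are the AMDGPU workgroup-size builtins added here, a sketch of their use in a device function:
```
// Each builtin's result carries range metadata [1, 1025), which plain source
// code reading the dispatch ptr could not express.
__device__ unsigned flat_workgroup_size() {
  return __builtin_amdgcn_workgroup_size_x() *
         __builtin_amdgcn_workgroup_size_y() *
         __builtin_amdgcn_workgroup_size_z();
}
```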
Differential Revision: https://reviews.llvm.org/D76772
Summary:
- Even though the bindless surface/texture interfaces are promoted,
there is still code using surface/texture references. For example,
[PR#26400](https://bugs.llvm.org/show_bug.cgi?id=26400) reports the
compilation issue for code using `tex2D` with texture references. For
better compatibility, this patch proposes the support of
surface/texture references.
- Due to the absent documentation and magic headers, it's believed that
`nvcc` does use builtins for texture support. From the limited NVVM
documentation[^nvvm] and NVPTX backend texture/surface related
tests[^test], it's believed that surface/texture references are
supported by replacing their reference types, which are annotated with
`device_builtin_surface_type`/`device_builtin_texture_type`, with the
corresponding handle-like object types, `cudaSurfaceObject_t` or
`cudaTextureObject_t`, in the device-side compilation. On the host
side, the global handle variables are registered and will be
established and updated later when the corresponding binding/unbinding
APIs are called[^bind]. Surface/texture references are much like
device global variables but are represented with different types on the host
and device sides.
- In this patch, the following changes are proposed to support that
behavior:
+ Refine `device_builtin_surface_type` and
`device_builtin_texture_type` attributes to be applied on `Type`
decl only to check whether a variable is of the surface/texture
reference type.
+ Add hooks in code generation to replace the reference types with
the corresponding object types as well as all accesses to them. In
particular, `nvvm.texsurf.handle.internal` should be used to load
object handles from global reference variables[^texsurf] as well as
metadata annotations.
+ Generate host-side registration with proper template argument
parsing.
---
[^nvvm]: https://docs.nvidia.com/cuda/pdf/NVVM_IR_Specification.pdf
[^test]: https://raw.githubusercontent.com/llvm/llvm-project/master/llvm/test/CodeGen/NVPTX/tex-read-cuda.ll
[^bind]: See section 3.2.11.1.2 "Texture reference API" in [CUDA C Programming Guide](https://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf).
[^texsurf]: According to NVVM IR, `nvvm.texsurf.handle` should be used. But, the current backend doesn't have that supported. We may revise that later.
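A sketch of the reference-based API this enables (classic CUDA texture reference usage):
```
texture<float, 2, cudaReadModeElementType> tex;  // reference type annotated
                                                 // with device_builtin_texture_type

__global__ void sample(float *out, float x, float y) {
  // On the device side the reference is rewritten to a cudaTextureObject_t
  // handle loaded via nvvm.texsurf.handle.internal.
  *out = tex2D(tex, x, y);
}
```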
Reviewers: tra, rjmccall, yaxunl, a.sidorin
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76365
HIP emits a device stub function for each kernel in host code.
The HIP debugger requires the device stub function to have a different unmangled name than the kernel.
Currently the name of the device stub function is the mangled name with a .stub postfix. However,
this does not work with the HIP debugger since the unmangled name is the same as the kernel's.
This patch adds the prefix __device_stub__ to the unmangled name of the device stub before mangling,
so the device stub function has a valid mangled name which is different from the device kernel
name. The device-side kernel name is kept unchanged. Kernels with extern "C" also get the prefix added
to the corresponding device stub function.
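For illustration (the mangled spellings below are indicative, not verified compiler output):
```
__global__ void kern(int *p);
// device-side kernel symbol:               _Z4kernPi
// host-side stub, mangled with the prefix: _Z19__device_stub__kernPi
```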
Differential Revision: https://reviews.llvm.org/D68578
This reverts commit 737394c490.
The fp-model test was failing on platforms that enable denormal flushing
based on -ffast-math. This needs to reset to IEEE, not the default in
these cases.
The IR hasn't switched the default yet, so explicitly add the ieee
attributes.
I'm still not really sure how the target default denormal mode should
interact with -fno-unsafe-math-optimizations. The target may have
selected the default mode to be non-IEEE based on the flags or based
on its true behavior, but we don't know which is the case. Since the
only users of a non-IEEE mode without a flag still support IEEE mode,
just reset to IEEE.
Summary:
The hip-pinned-shadow global var should remain in the final code object irrespective
of whether it is used or not within the code. Add it to the used list so that it
will not get eliminated when it is unused.
Reviewers: yaxunl, tra, hliao
Reviewed By: yaxunl
Subscribers: hliao, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D75402
The norecurse function attr indicates the function is not called recursively,
directly or indirectly.
Add norecurse to OpenCL functions and SYCL functions in device compilation,
and to CUDA/HIP kernels.
Although there is an LLVM pass adding norecurse to functions, it only works
for whole-program compilation. Also, the FE adding norecurse can make that
pass run faster since functions with norecurse do not need to be checked
again.
Differential Revision: https://reviews.llvm.org/D73651
AMDGPU and x86 at least both have separate controls for whether
denormal results are flushed on output, and for whether denormals are
implicitly treated as 0 as an input. The current DAGCombiner use only
really cares about the input treatment of denormals.
Currently there are 4 different mechanisms for controlling denormal
flushing behavior, and about as many equivalent frontend controls.
- AMDGPU uses the fp32-denormals and fp64-f16-denormals subtarget features
- NVPTX uses the nvptx-f32ftz attribute
- ARM directly uses the denormal-fp-math attribute
- Other targets indirectly use denormal-fp-math in one DAGCombine
- cl-denorms-are-zero has a corresponding denorms-are-zero attribute
AMDGPU wants a distinct control for f32 flushing from f16/f64, and as
far as I can tell the same is true for NVPTX (based on the attribute
name).
Work on consolidating these into the denormal-fp-math attribute, and a
new type specific denormal-fp-math-f32 variant. Only ARM seems to
support the two different flush modes, so this is overkill for the
other use cases. Ideally we would error on the unsupported
positive-zero mode on other targets from somewhere.
Move the logic for selecting the flush mode into the compiler driver,
instead of handling it in cc1. denormal-fp-math/denormal-fp-math-f32
are now both cc1 flags, but denormal-fp-math-f32 is not yet exposed as
a user flag.
-cl-denorms-are-zero, -fcuda-flush-denormals-to-zero and
-fno-cuda-flush-denormals-to-zero will be mapped to
-fp-denormal-math-f32=ieee or preserve-sign rather than the old
attributes.
Stop emitting the denorms-are-zero attribute for the OpenCL flag. It
has no in-tree users. The meaning would also be target dependent, such
as the AMDGPU choice to treat this as only meaning allow flushing of
f32 and not f16 or f64. The naming is also potentially confusing,
since DAZ in other contexts refers to instructions implicitly treating
input denormals as zero, not necessarily flushing output denormals to
zero.
This also does not attempt to change the behavior for the current
attribute. The LangRef now states that the default is ieee behavior,
but this is inaccurate for the current implementation. The clang
handling is slightly hacky to avoid touching the existing
denormal-fp-math uses. Fixing this will be left for a future patch.
AMDGPU is still using the subtarget feature to control the denormal
mode, but the new attributes are now emitted. A future change will
switch this and remove the subtarget features.
The CUDA builtin library is apparently compiled in C++ mode, so the
assumption of convergent needs to be made in a typically non-SPMD
language. The functions in the library should still be assumed
convergent. Currently they are not, which is potentially incorrect and
this happens to work after the library is linked.
Summary:
- Fix a bug where the change to set a variable with target-specific
attributes was missed.
Reviewers: yaxunl
Subscribers: jvesely, nhaehnle, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D63020
The linker options (e.g. pragma detect_mismatch) are intended for host
compilation only, therefore disable them for device compilation.
Differential Revision: https://reviews.llvm.org/D57829
Add this option to allow device-side class type global variables
with non-trivial ctors/dtors. Device-side init/fini functions will
be emitted, which will be executed by the HIP runtime when
the fat binary is loaded/unloaded.
This feature is to facilitate implementation of a device-side
sanitizer, which requires global vars with non-trivial ctors.
By default this option is disabled.
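A sketch of the kind of global the option admits:
```
struct Tracker {
  __device__ Tracker();   // non-trivial constructor
  __device__ ~Tracker();  // non-trivial destructor
};

// With the option enabled, device-side init/fini functions run these when
// the fat binary is loaded/unloaded.
__device__ Tracker tracker;
```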
Differential Revision: https://reviews.llvm.org/D69268
Summary:
- HIP/CUDA host side needs to use device kernel symbol name to match the
device side binaries. Without a consistent naming between host- and
device-side compilations, it's risky that wrong device binaries are
executed. Consistent naming is usually not an issue until unnamed
types are used, especially the lambda. In this patch, the consistent
name mangling is addressed for the extended lambdas, i.e. the lambdas
annotated with `__device__`.
- In the [Itanium C++ ABI][1], the mangling of a lambda is generally
unspecified unless, in certain cases, the ODR rule is required to ensure
consistent naming across TUs. The extended lambda is such a case as its
name may be part of a device kernel function, e.g., when the extended
lambda is used as a template argument. Thus, we need to force
ODR for extended lambdas as they are referenced in both device- and
host-side TUs. Furthermore, if an extended lambda is nested in other
(extended or not) lambdas, those lambdas are required to follow ODR
naming as well. This patch revises the current lambda mangle numbering
to force ODR from an extended lambda to all its parent lambdas.
- On the other hand, the aforementioned ODR naming should not change
those lambdas' original linkage, i.e., we cannot replace the original
`internal` with `linkonce_odr`; otherwise, we may violate the ODR in
general. This patch introduces a new field `HasKnownInternalLinkage`
in the lambda data to decouple the linkage calculation from
the mangling number assigned.
[1]: https://itanium-cxx-abi.github.io/cxx-abi/abi.html
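A sketch of why consistent naming matters:
```
template <typename F>
__global__ void run(F f) { f(); }

void host() {
  // Extended lambda: its mangled name becomes part of the kernel
  // instantiation's symbol, so the host and device compilations must
  // assign it the same mangling number.
  auto l = [] __device__ () {};
  run<<<1, 1>>>(l);
}
```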
Reviewers: tra, rsmith, yaxunl, martong, shafik
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D68818
llvm-svn: 375309
A CUDA/HIP program may be compiled with -fopenmp. In this case, -fopenmp is only passed to the host compilation
to take advantage of multi-threaded computation.
CUDA/HIP and OpenMP both use Sema::DeviceCallGraph to store functions to be analyzed, removing them
once they decide the function is sure to be emitted. CUDA/HIP and OpenMP have different functions for determining
whether a function is sure to be emitted.
To check host/device correctness for CUDA/HIP when -fopenmp is enabled, there needs to be unified logic for determining
whether a function is to be emitted. The logic needs to be aware of both the CUDA and OpenMP logic.
Differential Revision: https://reviews.llvm.org/D67837
llvm-svn: 374263
For consistency with normal instructions and clarity when reading IR,
it's best to print the %0, %1, ... names of function arguments in
definitions.
Also modifies the parser to accept IR in that form for obvious reasons.
llvm-svn: 367755
To enable a new implicit kernel argument,
increased the number of argument bytes from 48 to 56.
Reviewed By: yaxunl
Differential Revision: https://reviews.llvm.org/D63756
llvm-svn: 365643
This patch introduces support of the hip_pinned_shadow variable for HIP.
A hip_pinned_shadow variable is a global variable with the attribute hip_pinned_shadow.
It has external linkage on the device side and has no initializer. It has internal
linkage on the host side and has an initializer or static constructor. It can be accessed
in both device code and host code.
This allows HIP runtime to implement support of HIP texture reference.
Differential Revision: https://reviews.llvm.org/D62738
llvm-svn: 364381
Enable 48 bytes of implicit arguments for HIP as well. Earlier it was enabled for OpenCL. This code is specific to the AMDGPU target.
Differential Revision: https://reviews.llvm.org/D62244
llvm-svn: 363414
LLVM IR recently added a Type parameter to the byval Attribute, so that
when pointers become opaque and no longer have an element type the
information will still be present in IR.
For now the Type parameter is optional (which is why Clang didn't need
this change at the time), but it will become mandatory soon.
llvm-svn: 362652
Summary:
- By declaring device variables as `static`, we assume they won't be
addressable from the host side. Thus, no `externally_initialized` is
required.
Reviewers: yaxunl
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D62603
llvm-svn: 361994
Recently D60274 was introduced to allow lld to handle dependent libs. However, the current
usage of dependent libs (e.g. pragma comment(lib, *) in Windows header files) is intended
for host only. Emitting the metadata in device IR causes link errors in the device path.
Until there is a way to differentiate dependent libs for device or host, metadata for dependent
libs should be emitted for host only. This patch enforces that.
Differential Revision: https://reviews.llvm.org/D62483
llvm-svn: 361880
Summary:
- `__constant__` variables should not be `hidden` as the linker may turn
them into `LOCAL` symbols.
Reviewers: yaxunl
Subscribers: jvesely, nhaehnle, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D61194
llvm-svn: 359344
Also for CUDA, we need to disable producing these fat binary functions when there is no GPU code.
Reviewers: yaxunl, tra
Differential Revision: https://reviews.llvm.org/D60141
llvm-svn: 357526
Skip producing the fat binary functions for HIP when no device code is present.
Reviewers: yaxunl
Differential Revision: https://reviews.llvm.org/D60141
llvm-svn: 357520
Summary:
- A device function could be used as a non-type template parameter in a
global/host function template. However, we should not try to retrieve that
device function and reference it in the host-side debug info as it's
only valid on the device side.
Subscribers: aprantl, jdoerfert, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D58992
llvm-svn: 355551
Add .stub to kernel stub function name so that it is different from kernel
name in device code. This is necessary to let the debugger find the correct symbol
for the kernel.
Differential Revision: https://reviews.llvm.org/D58518
llvm-svn: 354948
Add .stub to kernel stub function name so that it is different from kernel
name in device code. This is necessary to let the debugger find the correct symbol
for the kernel.
Differential Revision: https://reviews.llvm.org/D58518
llvm-svn: 354615
__hipRegisterFunction and __hipRegisterVar need to accept device-side kernel and variable names
so that the HIP runtime can associate kernel stub functions in host code with kernel symbols in fat binaries,
and associate shadow variables in host code with device variables in fat binaries.
Currently, clang assumes kernel functions and device variables have the same names as the kernel
stub functions and shadow variables. However, when the host is compiled on Windows with the MSVC C++
ABI and the device is compiled with the Itanium C++ ABI (e.g. AMDGPU), kernels and device symbols in the fat
binary are mangled differently than on the host.
This patch gets the device-side kernel and variable names by mangling them in the mangle context
of the aux target.
Differential Revision: https://reviews.llvm.org/D58163
llvm-svn: 354004
Summary:
Added the ability to generate correct debug info data about the variable
address class. Currently, for all the locals and globals the default
values are used: ADDR_local_space(6) for locals and ADDR_global_space(5)
for globals. The values are taken from the table in
https://docs.nvidia.com/cuda/archive/10.0/ptx-writers-guide-to-interoperability/index.html#cuda-specific-dwarf.
We need to emit correct data for address classes of, at least, shared
and constant globals. Currently, all these variables are treated by
the cuda-gdb debugger as variables in the global address space
and, thus, require manual data type casting.
Reviewers: echristo, probinson
Subscribers: jholewinski, aprantl, cfe-commits
Differential Revision: https://reviews.llvm.org/D57162
llvm-svn: 353204
rC352620 caused regressions because it copied the floating point format from
the aux target.
The floating point format decides whether extended long double is supported.
It is x86_fp80 on x86 but IEEE double on amdgcn.
Document usage of the long double type in the HIP programming guide:
https://github.com/ROCm-Developer-Tools/HIP/pull/890
Differential Revision: https://reviews.llvm.org/D57527
llvm-svn: 352801
Instead of calling the CUDA runtime to arrange function arguments,
the new API constructs the arguments in a local array and the kernels
are launched with cudaLaunchKernel().
The old API has been deprecated and is expected to go away
in the next CUDA release.
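Roughly what the generated stub does under the new API (a hand-written sketch, not the literal emission; the pop-configuration entry point is assumed):
```
void kern_stub(int n, float *p) {
  void *args[] = {&n, &p};
  dim3 grid, block;
  size_t shmem;
  cudaStream_t stream;
  // Retrieve the launch configuration pushed by the <<<...>>> expansion.
  __cudaPopCallConfiguration(&grid, &block, &shmem, &stream);
  cudaLaunchKernel((const void *)kern_stub, grid, block, args, shmem, stream);
}
```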
Differential Revision: https://reviews.llvm.org/D57488
llvm-svn: 352799
This fixes a compiler crash when we attempted to compile this code:

  extern __device__ int data;
  __device__ int data = 1;
Differential Revision: https://reviews.llvm.org/D56033
llvm-svn: 349981
The host-side code can't (and should not) access the values that may
only exist on the device side. E.g. the address of a __device__ function
does not exist on the host side as we don't generate the code for it there.
Differential Revision: https://reviews.llvm.org/D55663
llvm-svn: 349087
This reverts commit https://reviews.llvm.org/rL344150 which causes
MachineOutliner related failures on the ppc64le multistage buildbot.
llvm-svn: 344526
This is currently a clang extension and a resolution
of the defect report in the C++ Standard.
Differential Revision: https://reviews.llvm.org/D46441
llvm-svn: 344150
This patch renames -f{no-}cuda-rdc to -f{no-}gpu-rdc and keeps the original
options as aliases. When -fgpu-rdc is off,
clang will assume the device code in each translation unit does not call
external functions except those in the device library, therefore it is possible
to compile the device code in each translation unit into self-contained kernels
and embed them in the host object, so that the host object behaves like a
usual host object which can be linked by lld.
The benefits of this feature are: 1. it allows users to create static libraries which
can be linked by the host linker; 2. amortized device code linking time.
This patch modifies the HIP action builder to insert actions for linking device
code and generating the HIP fatbin, and passes the HIP fatbin to the host backend action.
It extracts the code for constructing the command for generating the HIP fatbin into
a function so that it can be reused by early finalization. It also modifies
codegen of the HIP host constructor functions to embed the device fatbin
when it is available.
Differential Revision: https://reviews.llvm.org/D52377
llvm-svn: 343611
Previously clang considered function variants from both sides of
compilation, and that resulted in picking up the wrong deallocation function.
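A sketch of the ambiguity (target-overloaded deallocation functions):
```
struct S {
  __host__ void operator delete(void *);    // host variant
  __device__ void operator delete(void *);  // device variant
};
// Deleting an S in device code must resolve to the __device__ variant;
// previously variants from both sides were considered.
```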
Differential Revision: https://reviews.llvm.org/D51808
llvm-svn: 342749
Different shared libraries contain different fat binaries, which are stored in a global variable
__hip_gpubin_handle. Since different compilation units share the same fat binary, this
variable has linkonce linkage. However, it should not be merged across different shared
libraries.
This patch sets the visibility of the global variable to hidden, which makes it invisible
in the shared library, therefore preventing it from being merged.
Differential Revision: https://reviews.llvm.org/D50596
llvm-svn: 340056
The way address space declarations for builtins currently work
is nearly useless. The code assumes the address spaces used for
builtins is a confusingly named "target address space" from user
code using __attribute__((address_space(N))) that matches
the builtin declaration. There's no way to use this to declare
a builtin that returns a language specific address space.
The terminology used is highly confusing since it has nothing
to do with the address space selected by the target to use
for a language address space.
This feature is essentially unused as-is. AMDGPU and NVPTX
are the only in-tree targets attempting to use this. The AMDGPU
builtins certainly do not behave as intended (i.e. all of the
builtins returning pointers can never compile because the numbered
address space never matches the expected named address space).
The NVPTX builtins are missing tests for some, and the others
seem to rely on an implicit addrspacecast.
Change the used address space for builtins based on a target
hook to allow using a language address space for a builtin.
This allows the same builtin declaration to be used for multiple
languages with similarly purposed address spaces (e.g. the same
AMDGPU builtin can be used in OpenCL and CUDA even though the
constant address spaces are arbitrarily different).
This breaks the possibility of using arbitrary numbered
address spaces alongside the named address spaces for builtins.
If this is an issue we probably need to introduce another builtin
declaration character to distinguish language address spaces from
so-called "target address spaces".
llvm-svn: 338707
CUDA 8.0 E.3.9.4 says: Within the body of a __device__ or __global__
function, only __shared__ variables or variables without any device
memory qualifiers may be declared with static storage class.
It is unclear how a function-scope non-const static variable
without a device memory qualifier is implemented, therefore only a static
const variable without a device memory qualifier is allowed, which
can be emitted as a global variable in the constant address space.
Currently clang only allows function-scope static variables with the
__shared__ qualifier.
This patch also allows function-scope static const variables without
device memory qualifiers and emits them as global variables in the constant
address space.
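A sketch of the two accepted forms:
```
__device__ int lookup(int i) {
  static __shared__ int cache[4];            // previously the only allowed form
  static const int table[4] = {1, 2, 3, 5};  // now allowed: emitted as a global
                                             // in the constant address space
  return table[i];
}
```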
Differential Revision: https://reviews.llvm.org/D49931
llvm-svn: 338188
HIP generates one fat binary for all devices after linking. However, for each compilation
unit a ctor function is emitted which registers the same fat binary. Measures need to be
taken to make sure the fat binary is only registered once.
Currently each ctor function calls __hipRegisterFatBinary and stores the returned value
in __hip_gpubin_handle. This patch changes the linkage of __hip_gpubin_handle to linkonce
so that it is shared between LLVM modules. Then this patch adds a check of the value of
__hip_gpubin_handle to make sure __hipRegisterFatBinary is only called once. The code
is equivalent to

  void *__hip_gpubin_handle;
  void ctor() {
    if (__hip_gpubin_handle == 0) {
      __hip_gpubin_handle = __hipRegisterFatBinary(...);
    }
    // register kernels and variables.
  }

The patch also makes a similar change to dtors so that __hipUnregisterFatBinary
is called once.
Differential Revision: https://reviews.llvm.org/D49083
llvm-svn: 337631
This matches the way NVCC does it. Doing module cleanup at the global
destructor phase used to work, but is, apparently, too late for
the CUDA runtime in CUDA-9.2, which ends up crashing with a double-free.
Differential Revision: https://reviews.llvm.org/D48613
llvm-svn: 335763
There are HIP applications, e.g. Tensorflow 1.3, using amdgpu kernel attributes; however,
currently they are only allowed on OpenCL kernel functions.
This patch allows amdgpu kernel attributes to be applied to CUDA/HIP __global__
functions.
Differential Revision: https://reviews.llvm.org/D47958
llvm-svn: 334561
Currently clang sets the kernel calling convention for CUDA/HIP after
arranging the function, which causes an incorrect kernel function type since
it depends on the calling convention.
This patch moves setting the kernel calling convention before arranging the
function.
Differential Revision: https://reviews.llvm.org/D47733
llvm-svn: 334457
CGM.GetAddrOfConstantCString() sets the address of the created GlobalValue
to unnamed. When emitting the object file LLVM will mark the surrounding
section as SHF_MERGE iff the string is nul-terminated and contains no
other nuls (see IsNullTerminatedString). This results in problems when
saving temporaries because LLVM doesn't set an EntrySize, so reading in
the serialized assembly file fails.
This never happened for the GPU binaries because they usually contain
a nul-character somewhere. Instead this only affected the module ID
when compiling relocatable device code.
However, this points to a potentially larger problem: If we put a
constant string into a named section, we really want the data to end
up in that section in the object file. To avoid LLVM merging sections
this patch unmarks the GlobalVariable's address as unnamed which also
fixes the problem of invalid serialized assembly files when saving
temporaries.
Differential Revision: https://reviews.llvm.org/D47902
llvm-svn: 334281
CUDA/HIP does not support RTTI on the device side, therefore there
is no point in emitting type info when compiling for the device.
Emitting type info for the device not only clutters the IR with useless
global variables, but also causes an undefined symbol at linking,
since the vtable for cxxabiv1::class_type_info has external linkage.
Differential Revision: https://reviews.llvm.org/D47694
llvm-svn: 334021
To support linking device code in different source files, it is necessary to
embed the fat binary at the host linking stage.
This patch emits an external symbol for the fat binary in host codegen, then
embeds the fat binary via lld through a linker script.
Differential Revision: https://reviews.llvm.org/D46472
llvm-svn: 332724
HIP is a language similar to CUDA (https://github.com/ROCm-Developer-Tools/HIP/blob/master/docs/markdown/hip_kernel_language.md).
The language syntax is very similar, which allows a HIP program to be compiled as a CUDA program by Clang. The main difference
is the host API. HIP has a set of vendor-neutral host APIs which can be implemented on different platforms. Currently there is an open source
implementation of the HIP runtime on the amdgpu target (https://github.com/ROCm-Developer-Tools/HIP).
This patch adds support for the input kind and language standard hip.
When a HIP file is compiled, both LangOpts.CUDA and LangOpts.HIP are turned on. This allows compilation of HIP programs as CUDA
in most cases; only where special handling of HIP programs is needed is LangOpts.HIP checked.
This patch also adds support for kernel launching in HIP programs using the HIP host API.
When -x hip is not specified, there is no behaviour change for CUDA.
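For illustration, a HIP-style launch using the host API macro from the HIP headers (a minimal sketch):
```
#include <hip/hip_runtime.h>

__global__ void axpy(float a, const float *x, float *y) {
  y[threadIdx.x] += a * x[threadIdx.x];
}

void launch_axpy(float a, const float *x, float *y) {
  hipLaunchKernelGGL(axpy, dim3(1), dim3(64), 0, 0, a, x, y);
}
```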
Patch by Greg Rodgers.
Revised and lit test added by Yaxun Liu.
Differential Revision: https://reviews.llvm.org/D44984
llvm-svn: 330790
Some targets need a special LLVM calling convention for CUDA kernels.
This patch does that through a TargetCodeGenInfo hook.
It only affects the amdgcn target.
Patch by Greg Rodgers.
Revised and lit tests added by Yaxun Liu.
Differential Revision: https://reviews.llvm.org/D45223
llvm-svn: 330447
nvcc generates a unique registration function for each object file
that contains relocatable device code. Unique names are achieved
with a module id that is also reflected in the function's name.
Differential Revision: https://reviews.llvm.org/D42922
llvm-svn: 330425
It means the same thing as -mllvm; there isn't any reason to have two
options which do the same thing.
Differential Revision: https://reviews.llvm.org/D45109
llvm-svn: 329965
Found via codespell -q 3 -I ../clang-whitelist.txt
Where whitelist consists of:
archtype
cas
classs
checkk
compres
definit
frome
iff
inteval
ith
lod
methode
nd
optin
ot
pres
statics
te
thru
Patch by luzpaz! (This is a subset of D44188 that applies cleanly with a few
files that have dubious fixes reverted.)
Differential revision: https://reviews.llvm.org/D44188
llvm-svn: 329399
This reverts r328795 which introduced an issue with referencing __global__
function templates. More details in the original review D44747.
llvm-svn: 329099
CUDA shared variables should be initialized with undef.
Patch by Greg Rodgers.
Revised and lit test added by Yaxun Liu.
Differential Revision: https://reviews.llvm.org/D44985
llvm-svn: 328994
This patch sets target specific calling convention for CUDA kernels in IR.
Patch by Greg Rodgers.
Revised and lit test added by Yaxun Liu.
Differential Revision: https://reviews.llvm.org/D44747
llvm-svn: 328795
We should (almost) never consider a device-side declaration to match a
library builtin function. Otherwise clang may ignore the implementation
provided by the CUDA headers and emit clang's idea of the builtin.
Differential Revision: https://reviews.llvm.org/D42319
llvm-svn: 323239
Re-commit r303463 now that LLVM is fixed and adjust some lit tests.
llvm::TargetLibraryInfo needs to know the size of wchar_t to work on
functions like `wcslen`. This patch changes clang to always emit the
wchar_size module flag (it would only do so for ARM previously).
This also adds an `assert()` to ensure the LLVM defaults based on the
target triple are in sync with clang.
Differential Revision: https://reviews.llvm.org/D32982
llvm-svn: 303478
FPContractModeKind is the codegen option flag which is already ternary (off,
on, fast). This makes it universally the type for the contractable info
across the front-end:
* In FPOptions (i.e. in the Sema + in the expression nodes).
* In LangOpts::DefaultFPContractMode which is the option that initializes
FPOptions in the Sema.
Another way to look at this change is that previously fp-contractable on/off were
the only states handled by the front-end:
* For "on", FMA folding was performed by the front-end
* For "fast", we simply forwarded the flag to TargetOptions to handle it in
LLVM
Now off/on/fast are all exposed because for fast we will generate
fast-math-flags during CodeGen.
This is toward moving fp-contraction=fast from an LLVM TargetOption to a
FastMathFlag in order to fix PR25721.
---
This is a recommit of r299027 with an adjustment to the test
CodeGenCUDA/fp-contract.cu. The test assumed that even
though -ffp-contract=on is passed, FE-based folding of FMA won't happen.
This is obviously wrong since the user is asking for this explicitly with the
option. CUDA is different in that -ffp-contract=fast is on by default.
The test used to "work" because contract=fast and contract=on were maintained
separately and we didn't fold in the FE because contract=fast was on due to
the target-default. This patch consolidates the contract=on/fast/off state
into a ternary state hence the change in behavior.
---
Differential Revision: https://reviews.llvm.org/D31167
llvm-svn: 299033
Summary:
Now when you ask clang to link in a bitcode module, you can tell it to
set attributes on that module's functions to match what we would have
set if we'd emitted those functions ourselves.
This is particularly important for fast-math attributes in CUDA
compilations.
Each CUDA compilation links in libdevice, a bitcode library provided by
nvidia as part of the CUDA distribution. Without this patch, if we have
a user-function F that is compiled with -ffast-math that calls a
function G from libdevice, F will have the unsafe-fp-math=true (etc.)
attributes, but G will have no attributes.
Since F calls G, the inliner will merge G's attributes into F's. It
considers the lack of an unsafe-fp-math=true attribute on G to be
tantamount to unsafe-fp-math=false, so it "merges" these by setting
unsafe-fp-math=false on F.
This then continues up the call graph, until every function that
(transitively) calls something in libdevice gets unsafe-fp-math=false
set, thus disabling fastmath in almost all CUDA code.
Reviewers: echristo
Subscribers: hfinkel, llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D28538
llvm-svn: 293097
* __host__ __device__ functions are no longer considered to be
redeclarations of __host__ or __device__ functions. This prevents
unintentional merging of target attributes across them (see the sketch below).
* Function target attributes are not considered (and must match) during
explicit instantiation and specialization of function templates.
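A minimal sketch of the first point:
```
__host__ __device__ int f() { return 0; }
__device__ int f() { return 1; }  // now a distinct target overload rather than
                                  // a redeclaration that merges attributes
```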
Differential Revision: https://reviews.llvm.org/D25809
llvm-svn: 288962
Summary: This matches the idiom we use for our other CUDA wrapper headers.
Reviewers: tra
Subscribers: beanz, mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D24978
llvm-svn: 283679
Summary:
This prevents clang from emitting 'invoke's and catch statements.
Things previously mostly worked thanks to TryToMarkNoThrow() in
CodeGenFunction. But this is not a proper IPO, and it doesn't properly
handle cases like mutual recursion.
Fixes bug 30593.
Reviewers: tra
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D25166
llvm-svn: 283272
Summary:
We'd attempted to allow this, but turns out we were doing a very bad
job. :)
Making this work properly would be a giant change in clang. For
example, we'd need to make CXXRecordDecl::getDestructor()
context-sensitive, because the destructor you end up with depends on
where you're calling it from.
For now (and hopefully for ever), just disallow overloading of
destructors in CUDA.
Reviewers: rsmith
Subscribers: cfe-commits, tra
Differential Revision: https://reviews.llvm.org/D24571
llvm-svn: 283120
Summary:
Some function calls in CUDA are allowed to appear in
semantically-correct programs but are an error if they're ever
codegen'ed. Specifically, a host+device function may call a host
function, but it's an error if such a function is ever codegen'ed in
device mode (and vice versa).
Previously, clang made no attempt to catch these errors. For the most
part, they would be caught by ptxas, and reported as "call to unknown
function 'foo'".
Now we catch these errors and report them the same as we report other
illegal calls (e.g. a call from a host function to a device function).
This has a small change in error-message behavior for calls that were
previously disallowed (e.g. calls from a host to a device function).
Previously, we'd catch disallowed calls fairly early, before doing
additional semantic checking, e.g. of the call's arguments. Now we catch
these illegal calls at the very end of our semantic checks, so we'll
only emit an "illegal CUDA call" error if the call is otherwise
well-formed.
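A sketch of a call that is now deferred until codegen (illustrative names):
```
__host__ int host_only();

__host__ __device__ int wrapper() {
  return host_only();  // semantically fine; an error only if wrapper() is
                       // codegen'ed in device mode
}

__global__ void kern(int *out) { *out = wrapper(); }  // forces device codegen
                                                      // of wrapper(): error
```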
Reviewers: tra, rnk
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D23242
llvm-svn: 278759
This matches the way nvcc encapsulates GPU binaries into the host object file.
Now cuobjdump can deal with clang-compiled object files.
Differential Revision: https://reviews.llvm.org/D23429
llvm-svn: 278549
Summary:
Before this patch, we computed the offsets in memory of args passed to
GPU kernel functions by throwing all of the args into an LLVM struct.
clang emits packed llvm structs basically whenever it feels like it, and
packed structs have alignment 1. So we cannot rely on the llvm type's
alignment matching the C++ type's alignment.
This patch fixes our codegen so we always respect the clang types'
alignments.
Reviewers: rnk
Subscribers: cfe-commits, tra
Differential Revision: https://reviews.llvm.org/D22879
llvm-svn: 276927
Summary:
This lets LLVM perform IPO over these functions. In particular, it
allows LLVM to emit ld.global.nc for loads to __restrict pointers in
kernels that are never written to.
Reviewers: rsmith
Subscribers: cfe-commits, tra
Differential Revision: http://reviews.llvm.org/D21337
llvm-svn: 274261
Summary:
This is particularly important because some convergent CUDA intrinsics
(e.g. __shfl_down) are implemented in terms of inline asm.
Reviewers: tra
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D20836
llvm-svn: 271336
Some people have weird CI systems that run each test subdirectory
independently without access to other parallel trees.
Unfortunately, this means we have to suffer some duplication until Art
can sort out how to share these types.
llvm-svn: 270164
According to the CUDA Programming Guide (v7.5, E2.3.1):
> __device__, __constant__ and __shared__ variables defined in namespace
> scope, that are of class type, cannot have a non-empty constructor or a
> non-empty destructor.
Clang already deals with device-side constructors (see D15305).
This patch enforces similar rules for destructors.
Differential Revision: http://reviews.llvm.org/D20140
llvm-svn: 270108
Codegen tests for device-side variable initialization are a subset of the test
cases used to verify Sema's part of the job.
Including CodeGenCUDA/device-var-init.cu from SemaCUDA makes it easier to
keep both sides in sync.
Differential Revision: http://reviews.llvm.org/D20139
llvm-svn: 270107
This matches default nvcc behavior and gives a substantial
performance boost on GPUs where fmad is much cheaper compared to add+mul.
Differential Revision: http://reviews.llvm.org/D20341
llvm-svn: 270094
Allow only empty constructors for local __shared__ variables in a way
identical to restrictions imposed on dynamic initializers for global
variables on device.
Differential Revision: http://reviews.llvm.org/D20039
llvm-svn: 268982
According to the CUDA programming guide (v7.5):
> E.2.9.4: Within the body of a __device__ or __global__
> function, only __shared__ variables may be declared with static storage class.
Differential Revision: http://reviews.llvm.org/D20034
llvm-svn: 268962
__global__ functions are a special case in CUDA.
Even when the symbol would normally not be externally
visible according to C++ rules, it still must be visible
in the CUDA GPU object so the host-side stub can launch it.
Differential Revision: http://reviews.llvm.org/D19748
llvm-svn: 268299
Summary:
Setting this flag causes all functions to be annotated with the
"nvvm-f32ftz" = "true" attribute.
In addition, we annotate the module with "nvvm-reflect-ftz" set
to 0 or 1, depending on whether -cuda-flush-denormals-to-zero is set.
This is read by the NVVMReflect pass.
Reviewers: tra, rnk
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D18671
llvm-svn: 265435
We already have this flag in most of the file, but we need it everywhere
else, to disable the NVVMReflect pass, which we're explicitly checking
doesn't run here. (Upcoming changes to llvm will cause it to be run.)
llvm-svn: 264969
Summary:
* -fcuda-target-overloads
Previously unconditionally set to true by the driver. Necessary for
correct functioning of the compiler -- our CUDA headers wrapper won't
compile without this.
* -fcuda-disable-target-call-checks
Previously unconditionally set to true by the driver. Necessary to
compile almost any external CUDA code -- almost all libraries assume
that host+device code can call host or device functions.
* -fcuda-allow-host-calls-from-host-device
No effect when target overloading is enabled.
Reviewers: tra
Subscribers: rsmith, cfe-commits
Differential Revision: http://reviews.llvm.org/D18416
llvm-svn: 264739
Summary:
Previously we were using the codegen test to ensure that we choose the
right overload. But we can do this within sema, with a bit of
cleverness.
I left the constructor/destructor checks in CodeGen, because these
overloads (particularly on the destructors) are hard to check in Sema.
Reviewers: tra
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D18386
llvm-svn: 264207
* Run cc with -triple x86_64-linux-gnu to make symbol mangling predictable.
* Use temporary file as a fake GPU input so its content
does not interfere with pattern matching.
llvm-svn: 262516
... and register them with CUDA runtime.
This is needed for commonly used cudaMemcpy*() APIs that use the address of
the host-side shadow to access their counterparts on the device side.
Fixes PR26340
Differential Revision: http://reviews.llvm.org/D17779
llvm-svn: 262498
Summary:
This is important for e.g. the following case:
  void sync() { __syncthreads(); }
  void foo() {
    do_something();
    sync();
    do_something_else();
  }
Without this change, if the optimizer does not inline sync() (which it
won't because __syncthreads is also marked as noduplicate, for now
anyway), it is free to perform optimizations on sync() that it would not
be able to perform on __syncthreads(), because sync() is not marked as
convergent.
Similarly, we need a notion of convergent calls, since in the case when
we can't statically determine a call's target(s), we need to know
whether it's safe to perform optimizations around the call.
This change is conservative; the optimizer will remove these attrs where
it can, see r260318, r260319.
Reviewers: majnemer
Subscribers: cfe-commits, jhen, echristo, tra
Differential Revision: http://reviews.llvm.org/D17056
llvm-svn: 261779
This is an artefact of split-mode CUDA compilation that we need to
mimic. HD functions are sometimes allowed to call H or D functions. Due
to the split compilation mode, device-side compilation will not see host-only
functions and thus they will not be considered at all. For clang, both H
and D variants become function overloads visible to the
compiler. Normally the target attribute is considered only if C++ rules can
not determine which function is better. However, in this case we need to
ignore functions that would not be present during the current compilation
phase before we apply normal overload resolution rules.
Changes:
* introduced another level of call preference to better describe
possible call combinations.
* removed WrongSide functions from consideration if the set contains
SameSide function.
* disabled H->D, D->H and G->H calls. These combinations are
not allowed by CUDA and we were reluctantly allowing them to work
around device-side calls to math functions in std namespace.
We no longer need it after r258880.
Differential Revision: http://reviews.llvm.org/D16870
llvm-svn: 260697
Summary:
We can't do the right thing, since there's no right thing to do, but at
least we can not crash the compiler.
Reviewers: majnemer, rnk
Subscribers: cfe-commits, jhen, tra
Differential Revision: http://reviews.llvm.org/D17103
llvm-svn: 260479
In general CUDA does not allow dynamic initialization of
global device-side variables. One exception is that CUDA allows
records with empty constructors, as described in section E2.2.1 of
the CUDA 7.5 Programming Guide.
This patch applies initializer checks to all device-side variables.
Empty constructors are accepted, but no code is generated for them.
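A sketch of what the check accepts and rejects:
```
struct Empty {
  __device__ Empty() {}  // empty constructor: accepted, no code emitted
};
__device__ Empty ok;

struct Dynamic {
  int v;
  __device__ Dynamic() : v(42) {}  // non-empty constructor: dynamic
                                   // initialization, rejected
};
__device__ Dynamic bad;
```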
Differential Revision: http://reviews.llvm.org/D15305
llvm-svn: 259592
Summary:
This is necessary to prevent llvm from generating stacksave intrinsics
around this alloca. NVVM doesn't have a stack, and we don't handle said
intrinsics.
Reviewers: rnk, echristo
Subscribers: cfe-commits, jhen, tra
Differential Revision: http://reviews.llvm.org/D16664
llvm-svn: 259122
Summary:
These aliases are done to support inline asm, but there's nothing we can
do: NVPTX doesn't support aliases.
Reviewers: tra
Subscribers: cfe-commits, jhen, echristo
Differential Revision: http://reviews.llvm.org/D16501
llvm-svn: 258734
C++ emits vtables for classes that have a key function present in the
current TU. While we compile CUDA, the fact that a key function was found
in this TU does not mean that we are going to generate code for it. E.g. the
vtable for a class with host-only methods should not (and can not) be
generated on the device side, because we'll never generate code for them
during device-side compilation.
This patch adds an extra CUDA-specific check during key method computation
and filters out potential key methods that are not suitable for this side
of CUDA compilation.
When we codegen the vtable, entries for unsuitable methods are set to null.
Differential Revision: http://reviews.llvm.org/D15309
llvm-svn: 255911