Commit Graph

160 Commits

Author SHA1 Message Date
Yaxun (Sam) Liu 2c31aa2de1 Speed up deferred diagnostic emitter
Move function emitDeferredDiags from Sema to DeferredDiagsEmitter since it
is only used by DeferredDiagsEmitter.

Also skip visited functions to avoid exponential compile time.

Differential Revision: https://reviews.llvm.org/D77028
2020-04-06 13:07:43 -04:00
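
For illustration, a minimal sketch of the visited-set idea behind this change (hypothetical names, not the actual Sema code):

    #include <unordered_set>
    #include <vector>

    struct Fn { std::vector<Fn *> Callees; /* plus its deferred diags */ };

    // Walk the call graph from a function known to be emitted. Without
    // Visited, a diamond-shaped call graph is re-walked once per path,
    // which is exponential in depth; with it, each function is visited
    // once and compile time stays linear in the graph size.
    void visit(Fn *F, std::unordered_set<Fn *> &Visited) {
      if (!Visited.insert(F).second)
        return; // already handled via another call path
      // ... emit F's deferred diagnostics here ...
      for (Fn *Callee : F->Callees)
        visit(Callee, Visited);
    }
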
Michael Liao b952d799ca [cuda][hip] Fix `RegisterVar` function prototype.
Summary:
- `RegisterVar` has a `void` return type and takes `size_t` as its
  variable-size parameter in HIP and in CUDA 9.0+.

Reviewers: tra, yaxunl

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D77398
2020-04-03 12:57:09 -04:00
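
For reference, a sketch of the corrected hook shape per the summary above (parameter names are illustrative; the exact prototype is owned by the CUDA/HIP runtime):

    #include <cstddef>

    // void return type, and size_t (not int) for the variable size:
    extern "C" void __cudaRegisterVar(
        void **FatbinHandle, char *HostVar, char *DeviceAddress,
        const char *DeviceName, int IsExtern, size_t Size,
        int IsConstant, int IsGlobal);
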
Matt Arsenault ce2258c1cd clang/AMDGPU: Stop setting old denormal subtarget features 2020-04-02 17:17:12 -04:00
Yaxun (Sam) Liu 369e26ca9e [AMDGPU] Add __builtin_amdgcn_workgroup_size_x/y/z
The main purpose of introducing these builtins is to attach range
metadata [1, 1025) to the work-group size loaded from the dispatch
ptr, which cannot be done from source code.

Differential Revision: https://reviews.llvm.org/D76772
2020-03-28 01:03:20 -04:00
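
A usage sketch (the id builtins alongside it are pre-existing AMDGPU builtins):

    // The builtin loads the work-group size from the dispatch packet,
    // and the attached range metadata [1, 1025) lets the optimizer
    // assume 1 <= size <= 1024 without any source-level annotation.
    __device__ unsigned flat_global_id_x() {
      unsigned Size = __builtin_amdgcn_workgroup_size_x();
      return Size * __builtin_amdgcn_workgroup_id_x() +
             __builtin_amdgcn_workitem_id_x();
    }
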
Michael Liao 5be9b8cbe2 [cuda][hip] Add CUDA builtin surface/texture reference support.
Summary: - Re-commit after fixing Sema checks on partial template specialization.

Reviewers: tra, rjmccall, yaxunl, a.sidorin

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76365
2020-03-27 17:18:49 -04:00
Artem Belevich fe8063e1a0 Revert "[cuda][hip] Add CUDA builtin surface/texture reference support."
This reverts commit 6a9ad5f3f4.
The patch breaks CUDA compilation.

Differential Revision: https://reviews.llvm.org/D76365
2020-03-27 10:01:38 -07:00
Michael Liao 6a9ad5f3f4 [cuda][hip] Add CUDA builtin surface/texture reference support.
Summary:
- Even though the bindless surface/texture interfaces are promoted,
  there is still code using surface/texture references. For example,
  [PR#26400](https://bugs.llvm.org/show_bug.cgi?id=26400) reports a
  compilation issue for code using `tex2D` with texture references. For
  better compatibility, this patch proposes support for
  surface/texture references.
- Given the absence of documentation and the magic headers, it's
  believed that `nvcc` uses builtins for texture support. From the
  limited NVVM documentation[^nvvm] and the NVPTX backend's
  texture/surface related tests[^test], it's believed that
  surface/texture references are supported by replacing their
  reference types, which are annotated with
  `device_builtin_surface_type`/`device_builtin_texture_type`, with the
  corresponding handle-like object types, `cudaSurfaceObject_t` or
  `cudaTextureObject_t`, in the device-side compilation. On the host
  side, those global handle variables are registered and will be
  established and updated later when the corresponding
  binding/unbinding APIs are called[^bind]. Surface/texture references
  behave much like device global variables but are represented with
  different types on the host and device sides.
- In this patch, the following changes are proposed to support that
  behavior:
  + Refine `device_builtin_surface_type` and
    `device_builtin_texture_type` attributes to be applied on `Type`
    decl only to check whether a variable is of the surface/texture
    reference type.
  + Add hooks in code generation to replace those reference types with
    the corresponding object types, as well as all accesses to them. In
    particular, `nvvm.texsurf.handle.internal` should be used to load
    object handles from global reference variables[^texsurf], together
    with the corresponding metadata annotations.
  + Generate host-side registration with proper template argument
    parsing.

---
[^nvvm]: https://docs.nvidia.com/cuda/pdf/NVVM_IR_Specification.pdf
[^test]: https://raw.githubusercontent.com/llvm/llvm-project/master/llvm/test/CodeGen/NVPTX/tex-read-cuda.ll
[^bind]: See section 3.2.11.1.2 `Texture Reference API` in [CUDA C Programming Guide](https://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf).
[^texsurf]: According to NVVM IR, `nvvm.texsurf.handle` should be used. But the current backend doesn't support it yet; we may revise that later.

Reviewers: tra, rjmccall, yaxunl, a.sidorin

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76365
2020-03-26 14:44:52 -04:00
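
A sketch of the source pattern this enables, per the description above (texture references, as opposed to the newer texture-object API):

    // Annotated with device_builtin_texture_type in the CUDA headers:
    texture<float, 2, cudaReadModeElementType> tex;

    __global__ void sample(float *out, float x, float y) {
      // Device side: the reference is rewritten to a
      // cudaTextureObject_t handle loaded via
      // nvvm.texsurf.handle.internal.
      *out = tex2D(tex, x, y);
    }

    // Host side: `tex` is registered at startup and its handle is
    // established later by the binding API, e.g. cudaBindTexture2D().
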
Yaxun (Sam) Liu 60963fa630 [HIP] Let clang recognize .hip extension
Differential Revision: https://reviews.llvm.org/D76039
2020-03-17 11:22:55 -04:00
Yaxun (Sam) Liu 0ffb12ca67 [HIP] Mark kernels with uniform-work-group-size=true
Differential Revision: https://reviews.llvm.org/D76076
2020-03-13 06:56:56 -04:00
Yaxun (Sam) Liu 22c457a869 [HIP] Fix device stub name
HIP emits a device stub function for each kernel in host code.

The HIP debugger requires the device stub function to have an unmangled name different from the kernel's.

Currently the name of the device stub function is the mangled name with a postfix .stub. However,
this does not work with the HIP debugger since the unmangled name is the same as the kernel's.

This patch adds the prefix __device_stub__ to the unmangled name of the device stub before mangling,
so the device stub function has a valid mangled name that is different from the device kernel
name. The device-side kernel name is kept unchanged. Kernels declared with extern "C" also get the
prefix added to the corresponding device stub function.

Differential Revision: https://reviews.llvm.org/D68578
2020-03-09 16:40:05 -04:00
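
An illustration of the resulting naming (sketch):

    __global__ void axpy(float a, float *x, float *y);

    // Device-side kernel symbol (unchanged):  _Z4axpyfPfS_
    // Host-side stub: mangled as if it were named __device_stub__axpy,
    // so its unmangled name no longer matches the kernel's and the HIP
    // debugger can tell the two apart.
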
Matt Arsenault a4e71f01c0 Assume ieee behavior without denormal-fp-math attribute 2020-03-07 12:10:56 -05:00
Matt Arsenault 00b2a9df45 Reapply "clang: Treat ieee mode as the default for denormal-fp-math"
This reverts commit 737394c490.

The fp-model test was failing on platforms that enable denormal flushing
based on -ffast-math. This needs to reset to IEEE, not the target
default, in these cases.

Change-Id: Ibbad32f66d0d0b89b9c1173a3a96fb1a570ddd89
2020-03-06 11:46:55 -08:00
Jeremy Morse 737394c490 Revert "clang: Treat ieee mode as the default for denormal-fp-math"
This reverts commit c64ca93053.

This patch tripped a few build bots:

  http://lab.llvm.org:8011/builders/clang-x86_64-debian-fast/builds/24703/
  http://lab.llvm.org:8011/builders/clang-cmake-x86_64-avx2-linux/builds/13465/
  http://lab.llvm.org:8011/builders/clang-with-lto-ubuntu/builds/15994/

Reverting to clear the bots.
2020-03-05 10:55:24 +00:00
Matt Arsenault c64ca93053 clang: Treat ieee mode as the default for denormal-fp-math
The IR hasn't switched the default yet, so explicitly add the ieee
attributes.

I'm still not really sure how the target default denormal mode should
interact with -fno-unsafe-math-optimizations. The target may have
selected the default mode to be non-IEEE based on the flags or based
on its true behavior, but we don't know which is the case. Since the
only users of a non-IEEE mode without a flag still support IEEE mode,
just reset to IEEE.
2020-03-04 23:34:02 -05:00
hsmahesha cac068600e [HIP] Make sure unused hip-pinned-shadow global var is kept within device code
Summary:
A hip-pinned-shadow global var should remain in the final code object
irrespective of whether it is used within the code. Add it to the used
list so that it does not get eliminated when unused.

Reviewers: yaxunl, tra, hliao

Reviewed By: yaxunl

Subscribers: hliao, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D75402
2020-03-04 10:54:26 +05:30
Yaxun (Sam) Liu a57d9652a0 Make __builtin_amdgcn_dispatch_ptr dereferenceable and align at 4
Differential Revision: https://reviews.llvm.org/D75028
2020-02-25 13:58:20 -05:00
Yaxun (Sam) Liu fb44b9db95 [OpenCL][CUDA][HIP][SYCL] Add norecurse
norecurse function attr indicates the function is not called recursively
directly or indirectly.

Add norecurse to OpenCL functions, SYCL functions in device compilation
and CUDA/HIP kernels.

Although there is an LLVM pass that adds norecurse to functions, it only
works for whole-program compilation. Having the FE add norecurse also
makes that pass faster, since functions already marked norecurse do not
need to be checked again.

Differential Revision: https://reviews.llvm.org/D73651
2020-02-16 20:41:00 -05:00
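
In IR terms, the effect is roughly the following (sketch):

    __global__ void k(float *p) { *p = 0.f; }
    // Device IR, approximately:
    //   define ... void @k(...) #0 { ... }
    //   attributes #0 = { convergent norecurse ... }
    // A kernel cannot be called directly from device code, so it can
    // never be part of a device-side recursion cycle.
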
Matt Arsenault a3c814d234 Separately track input and output denormal mode
AMDGPU and x86 at least both have separate controls for whether
denormal results are flushed on output, and for whether denormals are
implicitly treated as 0 as an input. The current DAGCombiner use only
really cares about the input treatment of denormals.
2020-02-04 12:59:21 -05:00
Matt Arsenault a4451d88ee Consolidate internal denormal flushing controls
Currently there are 4 different mechanisms for controlling denormal
flushing behavior, and about as many equivalent frontend controls.

- AMDGPU uses the fp32-denormals and fp64-f16-denormals subtarget features
- NVPTX uses the nvptx-f32ftz attribute
- ARM directly uses the denormal-fp-math attribute
- Other targets indirectly use denormal-fp-math in one DAGCombine
- cl-denorms-are-zero has a corresponding denorms-are-zero attribute

AMDGPU wants a distinct control for f32 flushing from f16/f64, and as
far as I can tell the same is true for NVPTX (based on the attribute
name).

Work on consolidating these into the denormal-fp-math attribute, plus a
new type-specific denormal-fp-math-f32 variant. Only ARM seems to
support the two different flush modes, so this is overkill for the
other use cases. Ideally we would emit an error somewhere for the
unsupported positive-zero mode on other targets.

Move the logic for selecting the flush mode into the compiler driver,
instead of handling it in cc1. denormal-fp-math/denormal-fp-math-f32
are now both cc1 flags, but denormal-fp-math-f32 is not yet exposed as
a user flag.

-cl-denorms-are-zero, -fcuda-flush-denormals-to-zero and
-fno-cuda-flush-denormals-to-zero will be mapped to
-fdenormal-fp-math-f32=ieee or preserve-sign rather than the old
attributes.

Stop emitting the denorms-are-zero attribute for the OpenCL flag. It
has no in-tree users. The meaning would also be target dependent, such
as the AMDGPU choice to treat this as only meaning allow flushing of
f32 and not f16 or f64. The naming is also potentially confusing,
since DAZ in other contexts refers to instructions implicitly treating
input denormals as zero, not necessarily flushing output denormals to
zero.

This also does not attempt to change the behavior for the current
attribute. The LangRef now states that the default is ieee behavior,
but this is inaccurate for the current implementation. The clang
handling is slightly hacky to avoid touching the existing
denormal-fp-math uses. Fixing this will be left for a future patch.

AMDGPU is still using the subtarget feature to control the denormal
mode, but the new attributes are now emitted. A future change will
switch this and remove the subtarget features.
2020-01-17 20:09:53 -05:00
Yaxun (Sam) Liu 9f2d8b5c0c [HIP] Add option --gpu-max-threads-per-block=n
Add this option to change the default launch bounds.

Differential Revision: https://reviews.llvm.org/D71221
2020-01-07 11:18:00 -05:00
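
A sketch of the effect, assuming an invocation like clang -x hip --gpu-max-threads-per-block=512:

    // Without an explicit bound, the kernel now gets the flag's value
    // as its default launch bound:
    __global__ void no_bounds(float *p) {}
    // An explicit __launch_bounds__ still takes precedence:
    __global__ void __launch_bounds__(256) with_bounds(float *p) {}
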
Matt Arsenault e531750c6c clang: Add -fconvergent-functions flag
The CUDA builtin library is apparently compiled in C++ mode, so the
assumption of convergent needs to be made in a typically non-SPMD
language. The functions in the library should still be assumed
convergent. Currently they are not, which is potentially incorrect, and
it only happens to work after the library is linked.
2019-11-19 23:20:15 +05:30
Michael Liao 0a220de9e9 [HIP] Fix visibility for 'extern' device variables.
Summary:
- Fix a bug that missed applying target-specific attributes to a
  variable after the variable is changed.

Reviewers: yaxunl

Subscribers: jvesely, nhaehnle, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D63020
2019-11-05 14:19:32 -05:00
Michael Liao 15140e4bac [hip] Enable pointer argument lowering through coercing type.
Reviewers: tra, rjmccall, yaxunl

Subscribers: jvesely, nhaehnle, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D69826
2019-11-05 13:05:05 -05:00
Yaxun (Sam) Liu 4264e7bbfd [CUDA][HIP] Disable emitting llvm.linker.options in device compilation
The linker options (e.g. pragma detect_mismatch) are intended for host
compilation only; therefore, disable them for device compilation.

Differential Revision: https://reviews.llvm.org/D57829
2019-11-04 23:21:39 -05:00
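
The kind of source that produces these options (sketch; such pragmas typically come from Windows headers):

    // In a header pulled into a CUDA/HIP TU:
    #pragma comment(lib, "some_host_lib")        // -> llvm.linker.options
    #pragma detect_mismatch("lib_version", "9")  // -> llvm.linker.options
    // Both now emit metadata in the host module only; emitting them in
    // the device module broke the device-side link.
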
Yaxun (Sam) Liu 68f5ca4e19 [HIP] Add option -fgpu-allow-device-init
Add this option to allow device-side class-type global variables
with non-trivial ctor/dtor. Device-side init/fini functions will
be emitted, to be executed by the HIP runtime when
the fat binary is loaded/unloaded.

This feature facilitates implementation of a device-side
sanitizer, which requires global vars with non-trivial ctors.

By default this option is disabled.

Differential Revision: https://reviews.llvm.org/D69268
2019-10-22 16:06:20 -04:00
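
A sketch of what the option permits:

    // Rejected by default; accepted under -fgpu-allow-device-init:
    struct Tracker {
      __device__ Tracker() { /* e.g. register with a device sanitizer */ }
      __device__ ~Tracker() { /* unregister */ }
    };
    __device__ Tracker global_tracker; // non-trivial ctor/dtor
    // Clang emits device-side init/fini functions for such variables,
    // run by the HIP runtime when the fat binary is loaded/unloaded.
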
Michael Liao 243ebfba17 [hip][cuda] Fix the extended lambda name mangling issue.
Summary:
- The HIP/CUDA host side needs to use the device kernel symbol name to
  match the device-side binaries. Without consistent naming between
  host- and device-side compilations, there is a risk that the wrong
  device binaries are executed. Consistent naming is usually not an
  issue until unnamed types are used, especially lambdas. In this
  patch, consistent name mangling is addressed for extended lambdas,
  i.e. lambdas annotated with `__device__`.
- In the [Itanium C++ ABI][1], the mangling of a lambda is generally
  unspecified unless, in certain cases, the ODR rule is required to
  ensure consistent naming across TUs. The extended lambda is such a
  case, as its name may be part of a device kernel function, e.g. when
  the extended lambda is used as a template argument. Thus, we need to
  force ODR naming for extended lambdas, as they are referenced in both
  device- and host-side TUs. Furthermore, if an extended lambda is
  nested in other (extended or not) lambdas, those lambdas are required
  to follow ODR naming as well. This patch revises the current lambda
  mangle numbering to force ODR naming from an extended lambda up
  through all its enclosing lambdas.
- On the other hand, the aforementioned ODR naming should not change
  those lambdas' original linkage, i.e., we cannot replace the original
  `internal` linkage with `linkonce_odr`; otherwise, we may violate the
  ODR in general. This patch introduces a new field
  `HasKnownInternalLinkage` in the lambda data to decouple the linkage
  calculation from the mangling number assigned.

[1]: https://itanium-cxx-abi.github.io/cxx-abi/abi.html

Reviewers: tra, rsmith, yaxunl, martong, shafik

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D68818

llvm-svn: 375309
2019-10-19 00:15:19 +00:00
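
The kind of code that requires the consistent numbering (sketch):

    template <typename F>
    __global__ void kern(F f) { f(); }

    void launch() {
      // The extended lambda's type is part of kern's instantiated
      // kernel name, so host and device compilations must assign the
      // lambda the same mangling number (ODR numbering).
      auto f = [] __device__ () { /* ... */ };
      kern<<<1, 1>>>(f);
    }
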
Yaxun Liu 229c78d3a5 [CUDA][HIP] Fix host/device check with -fopenmp
A CUDA/HIP program may be compiled with -fopenmp. In this case, -fopenmp is only passed to the host
compilation to take advantage of multi-threaded computation.

CUDA/HIP and OpenMP both use Sema::DeviceCallGraph to store functions to be analyzed, removing them
once they decide a function is sure to be emitted. CUDA/HIP and OpenMP use different functions to
determine whether a function is sure to be emitted.

To check host/device correctly for CUDA/HIP when -fopenmp is enabled, a unified logic is needed to
determine whether a function is to be emitted, and it needs to be aware of both the CUDA and the
OpenMP rules.

Differential Revision: https://reviews.llvm.org/D67837

llvm-svn: 374263
2019-10-09 23:54:10 +00:00
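
The mixed-mode situation being handled (sketch):

    __global__ void kern(float *p);

    void host_work(float *p, int n) {
    #pragma omp parallel for        // -fopenmp applies to the host only
      for (int i = 0; i < n; ++i)
        kern<<<1, 256>>>(p + i);    // whether kern is "sure to be
                                    // emitted" must follow CUDA/HIP
    }                               // rules here, not OpenMP's
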
Yaxun Liu 1282889347 [HIP] Support new kernel launching API
Differential Revision: https://reviews.llvm.org/D67947

llvm-svn: 372773
2019-09-24 19:16:40 +00:00
Yaxun Liu 1bea97c971 [AMDGPU] Set default flat work group size to (1,256) for HIP
Differential Revision: https://reviews.llvm.org/D67048

llvm-svn: 370808
2019-09-03 18:50:24 +00:00
Tim Northover a009a60a91 IR: print value numbers for unnamed function arguments
For consistency with normal instructions and clarity when reading IR,
it's best to print the %0, %1, ... names of function arguments in
definitions.

Also modifies the parser to accept IR in that form for obvious reasons.

llvm-svn: 367755
2019-08-03 14:28:34 +00:00
Vyacheslav Zakharin de811d1f51 [clang] Preserve names of addrspacecast'ed values.
Differential Revision: https://reviews.llvm.org/D63846

llvm-svn: 365666
2019-07-10 17:10:05 +00:00
Christudasan Devadasan 18ba9d6077 [AMDGPU] Increased the number of implicit argument bytes for both OpenCL and HIP (CLANG).
To enable a new implicit kernel argument,
the number of implicit argument bytes is increased from 48 to 56.

Reviewed By: yaxunl

Differential Revision: https://reviews.llvm.org/D63756

llvm-svn: 365643
2019-07-10 15:10:08 +00:00
Yaxun Liu c3dfe9082b [HIP] Support attribute hip_pinned_shadow
This patch introduces support of hip_pinned_shadow variable for HIP.

A hip_pinned_shadow variable is a global variable with the attribute hip_pinned_shadow.
It has external linkage on the device side and has no initializer. It has internal
linkage on the host side and has an initializer or static constructor. It can be accessed
in both device code and host code.

This allows HIP runtime to implement support of HIP texture reference.

Differential Revision: https://reviews.llvm.org/D62738

llvm-svn: 364381
2019-06-26 03:47:37 +00:00
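
A declaration sketch (the spelling below is the underlying Clang attribute; HIP headers wrap it in a macro):

    // External linkage and no initializer on the device; internal
    // linkage with an initializer/static constructor on the host;
    // accessible from both sides. HIP texture references build on this.
    __attribute__((hip_pinned_shadow)) int shadow_var;
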
Yaxun Liu cabce71845 [AMDGPU] Enable the implicit arguments for HIP (CLANG)
Enable 48 bytes of implicit arguments for HIP as well. Earlier this was enabled only for OpenCL. This code is specific to the AMDGPU target.

Differential Revision: https://reviews.llvm.org/D62244

llvm-svn: 363414
2019-06-14 15:54:47 +00:00
Tim Northover c46827c7ed LLVM IR: Generate new-style byval-with-Type from Clang
LLVM IR recently added a Type parameter to the byval Attribute, so that
when pointers become opaque and no longer have an element type the
information will still be present in IR.

For now the Type parameter is optional (which is why Clang didn't need
this change at the time), but it will become mandatory soon.

llvm-svn: 362652
2019-06-05 21:12:14 +00:00
Michael Liao 4b7a713acc [CUDA][HIP] Skip setting `externally_initialized` for static device variables.
Summary:
- By declaring device variables as `static`, we assume they won't be
  addressable from the host side. Thus, no `externally_initialized` is
  required.

Reviewers: yaxunl

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D62603

llvm-svn: 361994
2019-05-29 17:23:27 +00:00
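
A sketch of the distinction:

    __device__ int visible;      // addressable from the host (e.g. via
                                 // its registered shadow), so it is
                                 // emitted as externally_initialized
    static __device__ int local; // assumed not addressable from the
                                 // host side, so externally_initialized
                                 // is skipped
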
Yaxun Liu dc805a4906 Fix failure of lit test dependent-libs.cu
llvm-svn: 361905
2019-05-29 01:34:44 +00:00
Yaxun Liu 02afe4e077 [CUDA][HIP] Emit dependent libs for host only
Recently D60274 was introduced to allow lld to handle dependent libs. However, the current
usage of dependent libs (e.g. pragma comment(lib, *) in Windows header files) is intended
for the host only. Emitting the metadata in device IR causes link errors in the device path.

Until there is a way to differentiate dependent libs for device and host, metadata for dependent
libs should be emitted for the host only. This patch enforces that.

Differential Revision: https://reviews.llvm.org/D62483

llvm-svn: 361880
2019-05-28 21:18:59 +00:00
Michael Liao 3820506960 [HIP] Fix visibility of `__constant__` variables.
Summary:
- `__constant__` variables should not be `hidden` as the linker may turn
  them into `LOCAL` symbols.

Reviewers: yaxunl

Subscribers: jvesely, nhaehnle, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D61194

llvm-svn: 359344
2019-04-26 19:31:48 +00:00
Aaron Enye Shi 8129521318 [HIP-Clang] Fat binary should not be produced for non GPU code 2
Also for CUDA, we need to disable producing these fat binary functions when there is no GPU code.

Reviewers: yaxunl, tra

Differential Revision: https://reviews.llvm.org/D60141

llvm-svn: 357526
2019-04-02 20:49:41 +00:00
Aaron Enye Shi 13d8e92940 [HIP-Clang] Fat binary should not be produced for non GPU code
Skip producing the fat binary functions for HIP when no device code is present.

Reviewers: yaxunl

Differential Revision: https://reviews.llvm.org/D60141

llvm-svn: 357520
2019-04-02 20:10:18 +00:00
Michael Liao 982cbb6232 [CUDA][HIP][DebugInfo] Skip reference device function
Summary:
- A device function could be used as a non-type template parameter in a
  global/host function template. However, we should not try to retrieve that
  device function and reference it in the host-side debug info, as it's
  only valid on the device side.

Subscribers: aprantl, jdoerfert, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D58992

llvm-svn: 355551
2019-03-06 21:16:27 +00:00
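
The pattern in question (sketch):

    __device__ int probe();

    template <int (*F)()>            // device function used as a
    __global__ void kern() { F(); }  // non-type template parameter

    // Host-side debug info for kern<probe> must not reference probe:
    // the function only exists in the device-side compilation.
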
Yaxun Liu e739ac0e25 [HIP] change kernel stub name
Add .stub to the kernel stub function name so that it differs from the kernel
name in device code. This is necessary to let the debugger find the correct symbol
for the kernel.

Differential Revision: https://reviews.llvm.org/D58518

llvm-svn: 354948
2019-02-27 02:02:52 +00:00
Yaxun Liu 00ebc0cb92 revert r354615: [HIP] change kernel stub name
It caused regressions.

Differential Revision: https://reviews.llvm.org/D58518

llvm-svn: 354651
2019-02-22 04:20:12 +00:00
Yaxun Liu 8d7cf0e2d4 [HIP] change kernel stub name
Add .stub to the kernel stub function name so that it differs from the kernel
name in device code. This is necessary to let the debugger find the correct symbol
for the kernel.

Differential Revision: https://reviews.llvm.org/D58518

llvm-svn: 354615
2019-02-21 20:12:16 +00:00
Yaxun Liu c18e9ecd4f [CUDA][HIP] Use device side kernel and variable names when registering them
__hipRegisterFunction and __hipRegisterVar need to accept device-side kernel and variable names
so that the HIP runtime can associate kernel stub functions in host code with kernel symbols in fat
binaries, and associate shadow variables in host code with device variables in fat binaries.

Currently, clang assumes kernel functions and device variables have the same names as the kernel
stub functions and shadow variables. However, when the host is compiled on Windows with the MSVC C++
ABI and the device is compiled with the Itanium C++ ABI (e.g. AMDGPU), kernels and device symbols in
the fat binary are mangled differently than on the host.

This patch obtains the device-side kernel and variable names by mangling them in the mangle context
of the aux target.

Differential Revision: https://reviews.llvm.org/D58163

llvm-svn: 354004
2019-02-14 02:00:09 +00:00
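
A sketch of the host-side registration this affects (call shape abbreviated):

    __global__ void axpy(float a, float *x, float *y);
    // Registration emitted at module load, in pseudo-code:
    //   __hipRegisterFunction(handle, &axpy_stub,
    //                         /*deviceName=*/"_Z4axpyfPfS_", ...);
    // The name string must now be the device-side (Itanium) mangling,
    // which can differ from the host-side MSVC mangling of the stub.
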
Alexey Bataev 1a9e05d7da [DEBUG_INFO][NVPTX] Generate correct data about variable address class.
Summary:
Added the ability to generate correct debug info data about the variable
address class. Currently, for all locals and globals the default
values are used: ADDR_local_space(6) for locals and ADDR_global_space(5)
for globals. The values are taken from the table in
https://docs.nvidia.com/cuda/archive/10.0/ptx-writers-guide-to-interoperability/index.html#cuda-specific-dwarf.
We need to emit correct data for the address classes of, at least, shared
and constant globals. Currently, all these variables are treated by
the cuda-gdb debugger as variables in the global address space
and thus require manual data type casting.

Reviewers: echristo, probinson

Subscribers: jholewinski, aprantl, cfe-commits

Differential Revision: https://reviews.llvm.org/D57162

llvm-svn: 353204
2019-02-05 19:45:57 +00:00
Yaxun Liu 277e064bf5 Do not copy long double and 128-bit fp format from aux target for AMDGPU
rC352620 caused regressions because it copied the floating-point format from
the aux target.

The floating-point format determines whether extended long double is supported.
It is x86_fp80 on x86 but IEEE double on amdgcn.

Documented usage of the long double type in the HIP programming guide:
https://github.com/ROCm-Developer-Tools/HIP/pull/890

Differential Revision: https://reviews.llvm.org/D57527

llvm-svn: 352801
2019-01-31 21:57:51 +00:00
Artem Belevich c62214da3d [CUDA] add support for the new kernel launch API in CUDA-9.2+.
Instead of calling the CUDA runtime to arrange function arguments,
the new API constructs the arguments in a local array, and kernels
are launched with __cudaLaunchKernel().

The old API has been deprecated and is expected to go away
in the next CUDA release.

Differential Revision: https://reviews.llvm.org/D57488

llvm-svn: 352799
2019-01-31 21:34:03 +00:00
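
A sketch of the new-style launch sequence, shown via the public cudaLaunchKernel entry point (approximately what the generated stub does):

    __global__ void axpy(float a, float *x, float *y);

    void launch(float a, float *x, float *y) {
      // Arguments gathered into a local array instead of being pushed
      // one at a time through cudaSetupArgument():
      void *Args[] = {&a, &x, &y};
      cudaLaunchKernel((void *)axpy, dim3(1), dim3(128), Args,
                       /*sharedMem=*/0, /*stream=*/0);
    }
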
Artem Belevich 9953577cb2 [CUDA] Treat extern global variable shadows same as regular extern vars.
This fixes a compiler crash when we attempted to compile this code:

extern __device__ int data;
__device__ int data = 1;

Differential Revision: https://reviews.llvm.org/D56033

llvm-svn: 349981
2018-12-22 01:11:09 +00:00