Commit Graph

12 Commits

Author SHA1 Message Date
Fangrui Song fd739804e0 [test] Add {{.*}} to make ELF tests immune to dso_local/dso_preemptable/(none) differences
For a default visibility external linkage definition, dso_local is set for ELF
with -fno-pic/-fpie, and for COFF and Mach-O. Since the default clang -cc1
behavior for ELF is similar to -fpic ("PIC Level" is not set), this nuance
causes unneeded binary format differences.

To make emitted IR similar, ELF -cc1 -fpic will default to -fno-semantic-interposition,
which sets dso_local for default visibility external linkage definitions.

To make this flip smooth and to enable a future change (making dso_local the
default for definitions),
this patch replaces (function) `define ` with `define{{.*}} `,
(variable/constant/alias) `= ` with `={{.*}} `, or inserts appropriate `{{.*}} `.
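
A minimal sketch of the relaxed pattern (a hypothetical test file, not one from
the tree): with `{{.*}}` in place, the same CHECK lines match whether or not
dso_local/dso_preemptable is emitted.

    // Hypothetical FileCheck-style sketch of the relaxed patterns.
    // RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -emit-llvm -o - %s | FileCheck %s
    int g = 42;
    void foo(void) {}
    // Matches `@g = global i32 42 ...` and `@g = dso_local global i32 42 ...`.
    // CHECK: @g ={{.*}} global i32 42
    // Matches `define void @foo()` and `define dso_local void @foo()`.
    // CHECK: define{{.*}} void @foo()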
2020-12-31 00:27:11 -08:00
Michael Liao 8c36eaf037 [clang][opencl][codegen] Remove the insertion of `correctly-rounded-divide-sqrt-fp-math` fn-attr.
- `-cl-fp32-correctly-rounded-divide-sqrt` is already handled in a
  per-instruction manner by annotating the accuracy required. There's no
  need to add that fn-attr. So far, no in-tree backend handles that attr
  or that OpenCL-specific option (see the sketch below).
- If out-of-tree backends are broken by this, the change could be reverted
  if those backends cannot be fixed.
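
A sketch of the per-instruction handling (assumed behaviour, not an exact
in-tree test): by default the allowed ULP error for single-precision division
is attached to the instruction itself as `!fpmath` metadata, and with
`-cl-fp32-correctly-rounded-divide-sqrt` that metadata is simply omitted, so
no `correctly-rounded-divide-sqrt-fp-math` function attribute is needed.

    // Illustrative OpenCL sketch (assumed to live in a .cl source file).
    // RUN: %clang_cc1 -triple amdgcn-amd-amdhsa -emit-llvm -o - %s | FileCheck %s
    kernel void div_kernel(global float *out, float a, float b) {
      // Expected: accuracy is annotated on the fdiv instruction (e.g. via
      // `!fpmath` metadata), not via a function attribute.
      // CHECK: fdiv float
      *out = a / b;
    }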

Differential Revision: https://reviews.llvm.org/D88424
2020-10-01 11:07:39 -04:00
Yaxun (Sam) Liu fb44b9db95 [OpenCL][CUDA][HIP][SYCL] Add norecurse
The norecurse function attr indicates that the function is not called
recursively, directly or indirectly.

Add norecurse to OpenCL functions, SYCL functions in device compilation
and CUDA/HIP kernels.

Although there is an LLVM pass that adds norecurse to functions, it only
works for whole-program compilation. Also, having the FE add norecurse
can make that pass run faster, since functions already marked norecurse
do not need to be checked again.
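
A hedged sketch of the expected effect (hypothetical test, attribute group
number assumed): an OpenCL kernel definition should carry norecurse in its
attribute set.

    // Hypothetical sketch of the expected IR attributes.
    // RUN: %clang_cc1 -triple amdgcn-amd-amdhsa -emit-llvm -o - %s | FileCheck %s
    // CHECK: define{{.*}} amdgpu_kernel void @k({{.*}}) #[[ATTR:[0-9]+]]
    // CHECK: attributes #[[ATTR]] = { {{.*}}norecurse{{.*}} }
    kernel void k(global int *out) { *out = 0; }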

Differential Revision: https://reviews.llvm.org/D73651
2020-02-16 20:41:00 -05:00
Matt Arsenault eac783a900 AMDGPU: Always emit amdgpu-flat-work-group-size
The backend default maximum should be the hardware maximum, so the
frontend should set the implementation-defined default maximum.
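
A hedged sketch of what "always emit" looks like for an OpenCL kernel; the
"1,256" default range shown here is my assumption, not a value quoted from
the commit.

    // Illustrative sketch; the default "1,256" range is assumed.
    // RUN: %clang_cc1 -triple amdgcn-amd-amdhsa -emit-llvm -o - %s | FileCheck %s
    // CHECK: define{{.*}} amdgpu_kernel void @k({{.*}}) #[[ATTR:[0-9]+]]
    // CHECK: attributes #[[ATTR]] = { {{.*}}"amdgpu-flat-work-group-size"="1,256"{{.*}} }
    kernel void k(global int *out) { *out = 0; }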

llvm-svn: 370101
2019-08-27 19:25:40 +00:00
Christudasan Devadasan 18ba9d6077 [AMDGPU] Increased the number of implicit argument bytes for both OpenCL and HIP (CLANG).
To enable a new implicit kernel argument, this patch increases the number
of implicit argument bytes from 48 to 56.
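
A heavily hedged sketch of how this surfaces in the emitted IR; the attribute
name "amdgpu-implicitarg-num-bytes" is my assumption and is not quoted from
the commit message.

    // Sketch only; the attribute name is an assumption on my part.
    // RUN: %clang_cc1 -triple amdgcn-amd-amdhsa -emit-llvm -o - %s | FileCheck %s
    // CHECK: attributes #{{[0-9]+}} = { {{.*}}"amdgpu-implicitarg-num-bytes"="56"{{.*}} }
    kernel void k(global int *out) { *out = 0; }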

Reviewed By: yaxunl

Differential Revision: https://reviews.llvm.org/D63756

llvm-svn: 365643
2019-07-10 15:10:08 +00:00
Tony Tye 68e11a6eca [AMDGPU] Update OpenCL to use 48 bytes of implicit arguments for AMDGPU (CLANG)
Add two additional implicit arguments for OpenCL for the AMDGPU target using the AMDHSA runtime to support device enqueue.

Differential Revision: https://reviews.llvm.org/D44696

llvm-svn: 328350
2018-03-23 18:51:45 +00:00
Tony Tye 1a3f3a2d14 [AMDGPU] Remove use of OpenCL triple environment and replace with function attribute for AMDGPU (CLANG)
- Remove use of the opencl and amdopencl environment member of the target triple for the AMDGPU target.
- Use a function attribute to communicate this to the AMDGPU backend instead.

Differential Revision: https://reviews.llvm.org/D43735

llvm-svn: 328347
2018-03-23 18:43:15 +00:00
Matt Arsenault a1cf61b6fc OpenCL: Assume functions are convergent
This was done for CUDA functions in r261779, and for the same
reason this also needs to be done for OpenCL. An arbitrary
function could have a barrier() call in it, which in turn
requires the calling function to be convergent.
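
A hedged sketch (hypothetical test): an external helper whose body clang
cannot see could contain barrier(), so it must be treated as convergent.

    // Hypothetical sketch; attribute group number assumed.
    // RUN: %clang_cc1 -triple amdgcn-amd-amdhsa -emit-llvm -o - %s | FileCheck %s
    void helper(void);                  // could call barrier() internally
    kernel void k(void) { helper(); }
    // CHECK: declare{{.*}} void @helper({{.*}}) #[[ATTR:[0-9]+]]
    // CHECK: attributes #[[ATTR]] = { {{.*}}convergent{{.*}} }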

llvm-svn: 315094
2017-10-06 19:34:40 +00:00
Mehdi Amini 6aa9e9b41a IRGen: Add optnone attribute on function during O0
Among other things, this will help LTO correctly handle/honor files
compiled with O0, which helps when debugging failures.
It also seems in line with how we handle other options, like how
-fno-inline adds the appropriate attribute as well.
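
A hedged sketch (hypothetical test): at -O0, definitions are expected to carry
optnone, which in turn requires noinline.

    // Hypothetical sketch of the expected attributes at -O0.
    // RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -O0 -emit-llvm -o - %s | FileCheck %s
    // CHECK: define{{.*}} i32 @add({{.*}}) #[[ATTR:[0-9]+]]
    // CHECK: attributes #[[ATTR]] = { {{.*}}noinline{{.*}}optnone{{.*}} }
    int add(int a, int b) { return a + b; }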

Differential Revision: https://reviews.llvm.org/D28404

llvm-svn: 304127
2017-05-29 05:38:20 +00:00
Stanislav Mekhanoshin 921a42314b [AMDGPU] Translate reqd_work_group_size into amdgpu_flat_work_group_size
These two attributes specify the same info in different ways. The AMDGPU
backend only checks the latter, as a target-specific attribute, as opposed
to the language-specific reqd_work_group_size.

This change produces amdgpu_flat_work_group_size out of
reqd_work_group_size if specified.
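
A hedged sketch of the translation (hypothetical test): the "64,64" value
shown here, i.e. flat size = X*Y*Z with min == max for a required size, is
my assumption rather than a value quoted from the commit.

    // Illustrative OpenCL sketch; the translated "64,64" value is assumed.
    // RUN: %clang_cc1 -triple amdgcn-amd-amdhsa -emit-llvm -o - %s | FileCheck %s
    __attribute__((reqd_work_group_size(64, 1, 1)))
    kernel void k(global int *out) { *out = 0; }
    // CHECK: define{{.*}} amdgpu_kernel void @k({{.*}}) #[[ATTR:[0-9]+]]
    // CHECK: attributes #[[ATTR]] = { {{.*}}"amdgpu-flat-work-group-size"="64,64"{{.*}} }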

Differential Revision: https://reviews.llvm.org/D31728

llvm-svn: 299678
2017-04-06 18:15:44 +00:00
Chandler Carruth fcd33149b4 Cleanup the handling of noinline function attributes, -fno-inline,
-fno-inline-functions, -O0, and optnone.

These were really, really tangled together:
- We used the noinline LLVM attribute for -fno-inline
  - But not for -fno-inline-functions (breaking LTO)
  - But we did use it for -finline-hint-functions (yay, LTO is happy!)
  - But we didn't for -O0 (LTO is sad yet again...)
- We had weird structuring of CodeGenOpts with both an inlining
  enumeration and a boolean. They interacted in weird ways and
  needlessly.
- A *lot* of set smashing went on with setting these, and then it got
  worse when we considered optnone and other inlining-affecting attributes.
- A bunch of inline-affecting attributes were managed in a completely
  different place from -fno-inline.
- Even with -fno-inline we failed to put the LLVM noinline attribute
  onto many generated function definitions because they didn't show up
  as AST-level functions.
- If you passed -O0 but -finline-functions we would run the normal
  inliner pass in LLVM despite it being in the O0 pipeline, which really
  doesn't make much sense.
- Lastly, we used things like '-fno-inline' to manipulate the pass
  pipeline which forced the pass pipeline to be much more
  parameterizable than it really needs to be. Instead we can *just* use
  the optimization level to select a pipeline and control the rest via
  attributes.

Sadly, this causes a bunch of churn in tests because we don't run the
optimizer in the tests and check the contents of attribute sets. It
would be awesome if attribute sets were a bit more FileCheck friendly,
but oh well.

I think this is a significant improvement and should remove the semantic
need to change what inliner pass we run in order to comply with the
requested inlining semantics by relying completely on attributes. It
also cleans up the optnone and related handling a bit.

One unfortunate aspect of this is that for generating alwaysinline
routines like those in OpenMP we end up removing noinline and then
adding alwaysinline. I tried a bunch of other approaches, but because we
recompute function attributes from scratch and don't have a declaration
here I couldn't find anything substantially cleaner than this.
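
A hedged sketch of the user-visible intent described above (hypothetical test,
not one from the tree): with -fno-inline, generated definitions should carry
the LLVM noinline attribute, and inlining behaviour is then driven by
attributes rather than by swapping inliner passes.

    // Hypothetical sketch; -fno-inline is assumed to lower to the IR-level
    // noinline attribute on definitions.
    // RUN: %clang -O2 -fno-inline -S -emit-llvm -o - %s | FileCheck %s
    // CHECK: define{{.*}} i32 @square({{.*}}) #[[ATTR:[0-9]+]]
    // CHECK: attributes #[[ATTR]] = { {{.*}}noinline{{.*}} }
    int square(int x) { return x * x; }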

Differential Revision: https://reviews.llvm.org/D28053

llvm-svn: 290398
2016-12-23 01:24:49 +00:00
Konstantin Zhuravlyov 5b48d725a0 [AMDGPU] Expose flat work group size, register and wave control attributes
__attribute__((amdgpu_flat_work_group_size(<min>, <max>))) - request minimum and maximum flat work group size
__attribute__((amdgpu_waves_per_eu(<min>[, <max>]))) - request minimum and/or maximum waves per execution unit
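
A usage sketch of the two attributes on an OpenCL kernel; the backend-facing
attribute strings noted in the comments are assumptions, not quoted from the
commit.

    // Usage sketch; emitted IR attribute strings are assumed.
    __attribute__((amdgpu_flat_work_group_size(64, 256)))  // min 64, max 256 work-items
    __attribute__((amdgpu_waves_per_eu(2, 4)))              // min 2, max 4 waves per EU
    kernel void k(global int *out) {
      *out = 0;
      // Expected backend-facing hints (roughly):
      //   "amdgpu-flat-work-group-size"="64,256"
      //   "amdgpu-waves-per-eu"="2,4"
    }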

Differential Revision: https://reviews.llvm.org/D24513

llvm-svn: 282371
2016-09-26 01:02:57 +00:00