Commit Graph

76 Commits

Author SHA1 Message Date
Christian Sigg c4bc416418 [LLVM] Add rcp.approx.ftz.f32 intrinsic
Split out from https://reviews.llvm.org/D126158.
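
A minimal IR sketch of calling the new intrinsic (the declaration is an
assumption based on the usual llvm.nvvm naming):

  declare float @llvm.nvvm.rcp.approx.ftz.f32(float)

  ; approximate reciprocal, flushing subnormal inputs/outputs to zero
  %r = call float @llvm.nvvm.rcp.approx.ftz.f32(float %x)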

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D126369
2022-05-25 21:05:20 +02:00
David Green 9727c77d58 [NFC] Rename Instrinsic to Intrinsic 2022-04-25 18:13:23 +01:00
Andrew Savonichev 230f326964 [NVPTX] shfl.sync is introduced in PTX 6.0
PTX ISA spec, s9.7.8.6. Data Movement and Conversion Instructions:
shfl.sync

PTX ISA Notes
Introduced in PTX ISA version 6.0.

Target ISA Notes
Requires sm_30 or higher.
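
For reference, a sketch of one of the affected intrinsics (the final
operand packs the clamp/segmask bits of the PTX c operand):

  ; shuffle %v from lane %lane across the full warp
  %r = call i32 @llvm.nvvm.shfl.sync.idx.i32(i32 -1, i32 %v, i32 %lane, i32 31)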

Differential Revision: https://reviews.llvm.org/D123039
2022-04-14 17:07:51 +03:00
Andrew Savonichev 369adba043 [NVPTX] 64-bit atom.{and,or,xor,min,max} require sm_32 or higher
PTX ISA spec, s9.7.12.4. Parallel Synchronization and Communication
Instructions: atom

Target ISA Notes
64-bit atom.{and,or,xor,min,max} require sm_32 or higher.
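
A sketch of generic IR that lowers to one of the affected instructions
(atom.and.b64 on global memory; modern opaque-pointer syntax):

  %old = atomicrmw and ptr addrspace(1) %p, i64 %v monotonic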

Differential Revision: https://reviews.llvm.org/D123038
2022-04-14 17:07:51 +03:00
Jakub Chlanda dce6aa237a [NVPTX] Correctly set regs for neg, abs intrinsics
This patch fixes a bug introduced in D117887.

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D120991
2022-03-04 11:06:07 -08:00
Kristina Bessonova 57aaab3b17 [NVPTX] Fix nvvm.match.sync*.i64 intrinsics return type (i64 -> i32)
The NVVM IR specification defines them with an i32 return type:

  declare i32 @llvm.nvvm.match.any.sync.i64(i32 %membermask, i64 %value)
  declare {i32, i1} @llvm.nvvm.match.all.sync.i64(i32 %membermask, i64 %value)
  ...
  The i32 return value is a 32-bit mask where bit position in mask corresponds
  to thread’s laneid.

as well as PTX ISA:

  9.7.12.8. Parallel Synchronization and Communication Instructions: match.sync

  match.any.sync.type  d, a, membermask;
  match.all.sync.type  d[|p], a, membermask;
  ...
  Destination d is a 32-bit mask where bit position in mask corresponds
  to thread’s laneid.

Additionally, ptxas doesn't accept the instructions produced by the NVPTX
backend; after this patch, the output compiles with no issues.
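
A corrected call site, per the declarations quoted above:

  %mask = call i32 @llvm.nvvm.match.any.sync.i64(i32 %membermask, i64 %value)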

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D120499
2022-03-01 12:26:16 +02:00
Nicolas Miller 69a8350c23 [NVPTX] Add ex2.approx.f16/f16x2 support
This patch adds builtins and intrinsics for the f16 and f16x2 variants of the ex2
instruction.

These two variants were added in PTX 7.0 and are supported by sm_75 and above.

Note that this isn't wired with the exp2 llvm intrinsic because the ex2
instruction is only available in its approx variant.
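
A sketch of the f16 variant, assuming the intrinsic mirrors the
instruction name:

  ; approximate 2^x on a single half value
  %y = call half @llvm.nvvm.ex2.approx.f16(half %x)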

Running ptxas on the assembly generated by the test f16-ex2.ll works as
expected.

Differential Revision: https://reviews.llvm.org/D119157
2022-02-23 13:56:53 -08:00
Jakub Chlanda be672934ff [NVPTX] Add more FMA intrinsics/builtins
This patch adds builtins/intrinsics for the following variants of FMA:

- f16, f16x2
  - rn
  - rn_ftz
  - rn_sat
  - rn_ftz_sat
  - rn_relu
  - rn_ftz_relu
- bf16, bf16x2
  - rn
  - rn_relu

ptxas (Cuda compilation tools, release 11.0, V11.0.194) is happy with the generated assembly.
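
As an illustration, the rn_relu f16 variant would be called like this
(intrinsic name assumed from the variant list above):

  ; fused multiply-add, round-to-nearest, negative results clamped to zero
  %r = call half @llvm.nvvm.fma.rn.relu.f16(half %a, half %b, half %c)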

Differential Revision: https://reviews.llvm.org/D118977
2022-02-23 13:56:53 -08:00
Jakub Chlanda e0dc4ac28f [NVPTX] Expose float tys min, max, abs, neg as builtins
Adds support for the following builtins (a usage sketch follows the list):

- abs, neg:
  - .bf16
  - .bf16x2
- min, max:
  - {.ftz}{.NaN}{.xorsign.abs}.f16
  - {.ftz}{.NaN}{.xorsign.abs}.f16x2
  - {.NaN}{.xorsign.abs}.bf16
  - {.NaN}{.xorsign.abs}.bf16x2
  - {.ftz}{.NaN}{.xorsign.abs}.f32
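
For example, the NaN-propagating f16 minimum (intrinsic name assumed from
the variant list above):

  %m = call half @llvm.nvvm.fmin.nan.f16(half %a, half %b)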

Differential Revision: https://reviews.llvm.org/D117887
2022-02-23 13:56:53 -08:00
Dmitry Vassiliev 6645bfa8f5 [NVPTX] Fix bug with int_nvvm_rotate_b64 when the operand is an immediate
Need to subtract from 64, not 32.
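
A sketch of an affected call; the lowering derives the complementary shift
amount from the immediate:

  ; rotate left by 8; the complementary shift is 64 - 8 = 56, not 32 - 8
  %r = call i64 @llvm.nvvm.rotate.b64(i64 %x, i32 8)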

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D119639
2022-02-15 01:23:11 +03:00
Jack Kirk bef3eb8344 [Clang][NVPTX] Add NVPTX intrinsics and builtins for CUDA PTX cvt sm_80 instructions
Adds NVPTX intrinsics and builtins for CUDA PTX cvt instructions for sm_80
architectures and above. Requires PTX 7.0.

PTX ISA description of cvt instructions :
https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cvt
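
One of the added conversions, sketched in IR (assuming the bf16 result is
carried as i16, as was common at the time):

  ; float -> bf16, round-to-nearest-even
  %b = call i16 @llvm.nvvm.f2bf16.rn(float %x)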

Signed-off-by: JackAKirk <jack.kirk@codeplay.com>

Differential Revision: https://reviews.llvm.org/D116673
2022-01-13 13:29:48 -08:00
Andrew Savonichev 00aa0aeb06 [NVPTX] Add imm variants for surface and texture instructions
Texture/sampler/surface operands can be either a register or an
immediate (an index of .texref, .samplerref or .surfref).

TableGen declarations for these instructions used to have only
Int64Regs operands, so this caused issues when the machine verifier
is turned on:

    *** Bad machine code: Expected a register operand. ***
    - function:    bar
    - basic block: %bb.0  (0x55b144d99ab8)
    - instruction: %4:int32regs = SULD_1D_I32_TRAP 0, killed %2:int32regs
    - operand 1:   0

The solution is to duplicate these instructions for all possible
operand types (i16imm and Int64Regs). Since this would
essentially double the amount of code in TableGen, the patch also
does some refactoring for the original instructions to keep
things manageable.

Differential Revision: https://reviews.llvm.org/D112232
2021-11-10 19:05:03 +03:00
Cullen Rhodes 18655140d6 [NVPTX] NFC: Remove unused imm type intrinsic arg
Identified in D109359.

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D109755
2021-09-15 08:56:51 +00:00
Steffen Larsen 1b4c85fc02 [NVPTX] Add NVPTX intrinsics for CUDA PTX 6.5 ldmatrix instructions
Adds NVPTX intrinsics for the CUDA PTX `ldmatrix.sync.aligned` instructions added in PTX 6.5.

PTX ISA description of `ldmatrix.sync.aligned`: https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-ldmatrix
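
A sketch of the simplest variant (one 8x8 b16 fragment yielding a single
i32; the exact signature is an assumption based on the instruction name):

  %f = call i32 @llvm.nvvm.ldmatrix.sync.aligned.m8n8.x1.b16(ptr addrspace(3) %p)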

Authored-by: Steffen Larsen <steffen.larsen@codeplay.com>

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D107046
2021-08-06 16:13:35 -07:00
Artem Belevich d774b4aa5e [NVPTX, CUDA] Add .and.popc variant of the b1 MMA instruction.
That should allow clang to compile mma.h from CUDA-11.3.

Differential Revision: https://reviews.llvm.org/D105384
2021-07-15 12:02:09 -07:00
Steffen Larsen 3644726a78 [Clang][NVPTX] Add NVPTX intrinsics and builtins for CUDA PTX 6.5 and 7.0 WMMA and MMA instructions
Adds NVPTX builtins and intrinsics for the CUDA PTX `wmma.load`, `wmma.store`, `wmma.mma`, and `mma` instructions added in PTX 6.5 and 7.0.

PTX ISA description of

  - `wmma.load`: https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-wmma-ld
  - `wmma.store`: https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-wmma-st
  - `wmma.mma`: https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-wmma-mma
  - `mma`: https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma

Overview of `wmma.mma` and `mma` matrix shape/type combinations added with specific PTX versions: https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-shape

Authored-by: Steffen Larsen <steffen.larsen@codeplay.com>
Co-Authored-by: Stuart Adams <stuart.adams@codeplay.com>

Reviewed By: tra

Differential Revision: https://reviews.llvm.org/D104847
2021-06-29 15:44:07 -07:00
Steffen Larsen f226e28a88 [Clang][NVPTX] Add NVPTX intrinsics and builtins for CUDA PTX redux.sync instructions
Adds NVPTX builtins and intrinsics for the CUDA PTX `redux.sync` instructions
for `sm_80` architecture or newer.

PTX ISA description of `redux.sync`:
https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#parallel-synchronization-and-communication-instructions-redux-sync
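
For instance, a warp-wide integer add reduction (operand order assumed:
value first, membermask second):

  %sum = call i32 @llvm.nvvm.redux.sync.add(i32 %v, i32 -1)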

Authored-by: Steffen Larsen <steffen.larsen@codeplay.com>

Differential Revision: https://reviews.llvm.org/D100124
2021-05-17 09:46:59 -07:00
Stuart Adams 02c2468864 [Clang][NVPTX] Add NVPTX intrinsics and builtins for CUDA PTX cp.async instructions
Adds NVPTX builtins and intrinsics for the CUDA PTX `cp.async` instructions for
`sm_80` architecture or newer.

PTX ISA description of `cp.async`:
https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-asynchronous-copy
https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#parallel-synchronization-and-communication-instructions-cp-async-mbarrier-arrive
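
A sketch of the typical sequence: asynchronously copy 4 bytes from global
to shared memory, then commit and wait (modern opaque-pointer syntax):

  call void @llvm.nvvm.cp.async.ca.shared.global.4(ptr addrspace(3) %dst, ptr addrspace(1) %src)
  call void @llvm.nvvm.cp.async.commit.group()
  call void @llvm.nvvm.cp.async.wait.group(i32 0)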

Authored-by: Stuart Adams <stuart.adams@codeplay.com>
Co-Authored-by: Alexander Johnston <alexander@codeplay.com>

Differential Revision: https://reviews.llvm.org/D100394
2021-05-17 09:46:59 -07:00
Paul C. Anagnostopoulos f4c39ccd22 [TableGen] Continue cleaning up .td files
This pass includes LLVM and MLIR files.

Differential Revision: https://reviews.llvm.org/D93864
2021-01-01 10:21:02 -05:00
Paul C. Anagnostopoulos eed768b700 [NVPTX] [TableGen] Use new features of TableGen to simplify and clarify.
Differential Revision: https://reviews.llvm.org/D90861
2020-11-06 09:20:19 -05:00
Paul C. Anagnostopoulos d56cd4291e [TableGen] Add !interleave operator to concatenate a list of values with delimiters
Add a test. Use it in some TableGen files.

Differential Revision: https://reviews.llvm.org/D90469
2020-11-04 09:23:54 -05:00
Artem Belevich d9972f8482 [NVPTX] Added llvm.nvvm.mma.m8n8k4.* intrinsics
Differential Revision: https://reviews.llvm.org/D69324
2019-10-28 13:55:30 -07:00
Artem Belevich 5c6ab2a0b1 [NVPTX] Restructure shfl intrinsics and add variants that return a predicate.
Also, amend constraints for non-sync variants that are no longer
available on sm_70+ with PTX6.4+.

Differential Revision: https://reviews.llvm.org/D68892

llvm-svn: 374790
2019-10-14 16:53:34 +00:00
Benjamin Kramer fa1a4e4de5 [NVPTX] Use atomicrmw fadd instead of intrinsics
AutoUpgrade the old intrinsics to atomicrmw fadd.
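
After the upgrade, a floating-point atomic add is expressed directly in
generic IR (modern opaque-pointer syntax):

  %old = atomicrmw fadd ptr addrspace(1) %p, float %v seq_cst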

llvm-svn: 365796
2019-07-11 17:11:25 +00:00
Artem Belevich 16737538f4 PTX 6.3 extends `wmma` instruction to support s8/u8/s4/u4/b1 -> s32.
All of the new instructions are still handled mostly by tablegen. I've slightly
refactored the code to drive intrinsic/instruction generation from a master
list of supported variants, so all irregularities have to be implemented in one place only.

The test generation script wmma.py has been refactored in a similar way.

Differential Revision: https://reviews.llvm.org/D60015

llvm-svn: 359247
2019-04-25 22:27:57 +00:00
Artem Belevich 8d825b38ed [NVPTX] generate correct MMA instruction mnemonics with PTX63+.
PTX 6.3 requires using ".aligned" in the MMA instruction names.
In order to generate the correct name, we now pass the current
PTX version to each instruction as an extra constant operand
and InstPrinter adjusts its output accordingly.

Differential Revision: https://reviews.llvm.org/D59393

llvm-svn: 359246
2019-04-25 22:27:46 +00:00
Artem Belevich 7ecd82ce19 [NVPTX] Refactor generation of MMA intrinsics and instructions. NFC.
Generalized constructions of 'fragments' of MMA operations to provide
common primitives for construction of the ops. This will make it easier
to add new variants of the instructions that operate on integer types.

Use nested foreach loops which makes it possible to better control
naming of the intrinsics.

This patch does not affect LLVM's output, so there are no test changes.

Differential Revision: https://reviews.llvm.org/D59389

llvm-svn: 359245
2019-04-25 22:27:35 +00:00
Chandler Carruth 2946cd7010 Update the file headers across all of the LLVM projects in the monorepo
to reflect the new license.

We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.

Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.

llvm-svn: 351636
2019-01-19 08:50:56 +00:00
Ulrich Weigand c48aefb63b [TableGen] Support multi-alternative pattern fragments
A TableGen instruction record usually contains a DAG pattern that will
describe the SelectionDAG operation that can be implemented by this
instruction. However, there will be cases where several different DAG
patterns can all be implemented by the same instruction. The way to
represent this today is to write additional patterns in the Pattern
(or usually Pat) class that map those extra DAG patterns to the
instruction. This usually also works fine.

However, I've noticed cases where the current setup seems to require
quite a bit of extra (and duplicated) text in the target .td files.
For example, in the SystemZ back-end, there are quite a number of
instructions that can implement an "add-with-overflow" operation.
The same instructions also need to be used to implement just plain
addition (simply ignoring the extra overflow output). The current
solution requires creating an extra Pat pattern for every instruction,
duplicating the information about which particular add operands
map best to which particular instruction.

This patch enhances TableGen to support a new PatFrags class, which
can be used to encapsulate multiple alternative patterns that may
all match to the same instruction.  It operates the same way as the
existing PatFrag class, except that it accepts a list of DAG patterns
to match instead of just a single one.  As an example, we can now define
a PatFrags to match either an "add-with-overflow" or a regular add
operation:

  def z_sadd : PatFrags<(ops node:$src1, node:$src2),
                        [(z_saddo node:$src1, node:$src2),
                         (add node:$src1, node:$src2)]>;

and then use this in the add instruction pattern:

  defm AR : BinaryRRAndK<"ar", 0x1A, 0xB9F8, z_sadd, GR32, GR32>;

These SystemZ target changes are implemented here as well.


Note that PatFrag is now defined as a subclass of PatFrags, which
means that some users of internals of PatFrag need to be updated.
(E.g. instead of using PatFrag.Fragment you now need to use
!head(PatFrag.Fragments).)


The implementation is based on the following main ideas:
- InlinePatternFragments may now replace each original pattern
  with several result patterns, not just one.
- parseInstructionPattern delays calling InlinePatternFragments
  and InferAllTypes.  Instead, it extracts a single DAG match
  pattern from the main instruction pattern.
- Processing of the DAG match pattern part of the main instruction
  pattern now shares most code with processing match patterns from
  the Pattern class.
- Direct use of main instruction patterns in InferFromPattern and
  EmitResultInstructionAsOperand is removed; everything now operates
  solely on DAG match patterns.


Reviewed by: hfinkel

Differential Revision: https://reviews.llvm.org/D48545

llvm-svn: 336999
2018-07-13 13:18:00 +00:00
Artem Belevich 2f348ea1c7 [NVPTX] Added a feature to use short pointers for const/local/shared AS.
Const/local/shared address spaces are all < 4GB and we can always use
32-bit pointers to access them. This has substantial performance impact
on kernels that uses shared memory for intermediary results.

The feature is disabled by default.

Differential Revision: https://reviews.llvm.org/D46147

llvm-svn: 331941
2018-05-09 23:46:19 +00:00
Artem Belevich 0ae8590354 [NVPTX, CUDA] Added support for m8n32k16 and m32n8k16 variants of wmma instructions.
The new instructions were added for sm_70+ GPUs in CUDA-9.1.

Differential Revision: https://reviews.llvm.org/D45068

llvm-svn: 330296
2018-04-18 21:51:48 +00:00
Artem Belevich 30512869ff [NVPTX] Make tensor shape part of WMMA intrinsic's name.
This is needed for the upcoming implementation of the
new 8x32x16 and 32x8x16 variants of WMMA instructions
introduced in CUDA 9.1.

Differential Revision: https://reviews.llvm.org/D44719

llvm-svn: 328158
2018-03-21 21:55:02 +00:00
Artem Belevich 914d4babec [NVPTX] Make tensor load/store intrinsics overloaded.
This way we can support address-space-specific variants without explicitly
encoding the space in the name of the intrinsic. Fewer intrinsics to deal
with -> less boilerplate.

Added a bit of tablegen magic to match/replace an intrinsic with a pointer
argument in particular address space with the space-specific instruction
variant.

Updated tests to use non-default address spaces.

Differential Revision: https://reviews.llvm.org/D43268

llvm-svn: 328006
2018-03-20 17:18:59 +00:00
Artem Belevich 7b14e7f041 [NVPTX] TblGen-ized lowering of WMMA intrinsics.
NFC.

Differential Revision: https://reviews.llvm.org/D43151

llvm-svn: 327672
2018-03-15 21:40:56 +00:00
Artem Belevich 8c9749b1dc [NVPTX] use pattern matching to lower int_nvvm_match_all_sync*.
Now that patterns can handle intrinsics returning multiple results,
use tablegen'ed pattern matching instead of custom lowering.

Differential Revision: https://reviews.llvm.org/D43890

llvm-svn: 326457
2018-03-01 18:28:45 +00:00
Artem Belevich 18a7c51520 [NVPTX] Removed always-true predicates in NVPTX.
NVPTX stopped supporting GPUs older than sm_20 (Fermi) quite a while back.
Removal of support for pre-Fermi GPUs made a lot of predicates in the NVPTX
backend pointless, as they can't ever be false any more.
It's time to retire them. NFC intended.

Differential Revision: https://reviews.llvm.org/D43843

llvm-svn: 326349
2018-02-28 18:51:22 +00:00
Artem Belevich a659d2590e [NVPTX,CUDA] Added llvm.nvvm.fns intrinsic and matching __nvvm_fns builtin in clang.
Differential Revision: https://reviews.llvm.org/D40872

llvm-svn: 319909
2017-12-06 17:50:05 +00:00
Justin Lebar da9e0bd3a2 [NVPTX] Implement __nvvm_atom_add_gen_d builtin.
Summary:
This just seems to have been an oversight.  We already supported the f64
atomic add with an explicit scope (e.g. "cta"), but not the scopeless
version.

Reviewers: tra

Subscribers: jholewinski, sanjoy, cfe-commits, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D39638

llvm-svn: 317623
2017-11-07 22:10:54 +00:00
Artem Belevich 3bafc2f0d9 [NVPTX] Implemented wmma intrinsics and instructions.
WMMA = "Warp Level Matrix Multiply-Accumulate".
These are the new instructions introduced in PTX6.0 and available
on sm_70 GPUs.

Differential Revision: https://reviews.llvm.org/D38645

llvm-svn: 315601
2017-10-12 18:27:55 +00:00
Artem Belevich bab95c7087 [NVPTX] added match.{any,all}.sync instructions, intrinsics & builtins.
Differential Revision: https://reviews.llvm.org/D38191

llvm-svn: 314223
2017-09-26 17:07:23 +00:00
Justin Lebar d31d5e6aa2 Revert "[NVPTX] added match.{any,all}.sync instructions, intrinsics & builtins.", rL314135.
Causing assertion failures on macos:

> Assertion failed: (Num < NumOperands && "Invalid child # of SDNode!"),
> function getOperand, file
> /Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-incremental/llvm/include/llvm/CodeGen/SelectionDAGNodes.h,
> line 835.

http://green.lab.llvm.org/green/job/clang-stage1-cmake-RA-incremental/42739/testReport/LLVM/CodeGen_NVPTX/surf_read_cuda_ll/

llvm-svn: 314142
2017-09-25 19:41:56 +00:00
Artem Belevich 9941ee9529 [NVPTX] added match.{any,all}.sync instructions, intrinsics & builtins.
Differential Revision: https://reviews.llvm.org/D38191

llvm-svn: 314135
2017-09-25 18:53:57 +00:00
Artem Belevich 42960b4188 [NVPTX] Implemented bar.warp.sync, barrier.sync, and vote{.sync} instructions/intrinsics/builtins.
Differential Revision: https://reviews.llvm.org/D38148

llvm-svn: 313898
2017-09-21 18:44:49 +00:00
Artem Belevich 4654dc89be [NVPTX] Implemented shfl.sync instruction and supporting intrinsics/builtins.
Differential Revision: https://reviews.llvm.org/D38090

llvm-svn: 313820
2017-09-20 21:23:07 +00:00
Artem Belevich 5920babc4f [NVPTX] Added missing LDU/LDG intrinsics for f16.
Differential Revision: https://reviews.llvm.org/D30512

llvm-svn: 296784
2017-03-02 19:14:10 +00:00
Artem Belevich 620db1f3dd [NVPTX] Added support for .f16x2 instructions.
This patch enables support for .f16x2 operations.

Added new register type Float16x2.
Added support for .f16x2 instructions.
Added handling of vectorized loads/stores of v2f16 values.

Differential Revision: https://reviews.llvm.org/D30057
Differential Revision: https://reviews.llvm.org/D30310

llvm-svn: 296032
2017-02-23 22:38:24 +00:00
Arpith Chacko Jacob 2b156edf56 [NVPTX] Add intrinsics to support named barriers.
Support for barrier synchronization between a subset of threads
in a CTA through one of sixteen explicitly specified barriers.
These intrinsics are not directly exposed in CUDA but are
critical for forthcoming support of OpenMP on NVPTX GPUs.

The intrinsics allow the synchronization of an arbitrary
(multiple of 32) number of threads in a CTA at one of 16
distinct barriers. The two intrinsics added are as follows:

  call void @llvm.nvvm.barrier.n(i32 10)

waits for all threads in a CTA to arrive at named barrier #10.

  call void @llvm.nvvm.barrier(i32 15, i32 992)

waits for 992 threads in a CTA to arrive at barrier #15.

Detailed descriptions of these intrinsics are available in the PTX manual.
http://docs.nvidia.com/cuda/parallel-thread-execution/#parallel-synchronization-and-communication-instructions

Reviewers: hfinkel, jlebar
Differential Revision: https://reviews.llvm.org/D17657

llvm-svn: 293384
2017-01-28 16:38:15 +00:00
Justin Lebar 46624a822d [NVPTX] Auto-upgrade some NVPTX intrinsics to LLVM target-generic code.
Summary:
Specifically, we upgrade llvm.nvvm.:

 * brev{32,64}
 * clz.{i,ll}
 * popc.{i,ll}
 * abs.{i,ll}
 * {min,max}.{i,ll,u,ull}
 * h2f

These either map directly to an existing LLVM target-generic
intrinsic or map to a simple LLVM target-generic idiom.
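
For example, the clz upgrade; llvm.ctlz takes an extra flag indicating
whether a zero input is poison:

  ; before: %c = call i32 @llvm.nvvm.clz.i(i32 %x)
  %c = call i32 @llvm.ctlz.i32(i32 %x, i1 false)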

In all cases, we check that the code we generate is lowered to PTX as we
expect.

These builtins don't need to be backfilled in clang: They're not
accessible to user code from nvcc.

Reviewers: tra

Subscribers: majnemer, cfe-commits, llvm-commits, jholewinski

Differential Revision: https://reviews.llvm.org/D28793

llvm-svn: 292694
2017-01-21 01:00:32 +00:00
Justin Lebar 9c46450dbb [NVPTX] Standardize asm printer on "foo \tbar".
Some instructions were printed as "foo\tbar", but most are printed as
"foo \tbar".  Standardize on the latter form.

llvm-svn: 292306
2017-01-18 00:09:36 +00:00
Justin Lebar 2a2d6f0ddd [NVPTX] Clean up nested !strconcat calls.
!strconcat is a variadic function; it will concatenate an arbitrary
number of strings.  There's no need to nest it.

llvm-svn: 292305
2017-01-18 00:09:19 +00:00