Commit Graph

24 Commits

Author SHA1 Message Date
Justin Lebar 9d4ed262ef [CUDA] Harmonize asserts in SemaCUDA, NFC.
llvm-svn: 282987
2016-09-30 23:57:38 +00:00
Justin Lebar 7a406e78b3 [CUDA] Remove incorrect comment in CUDASetLambdaAttrs.
I'd said that nvcc doesn't allow you to add __host__ or __device__
attributes on lambdas in all circumstances, but I believe this was user
error on my part.  I can't reproduce these warnings/errors if I pass
--expt-extended-lambda to nvcc.

llvm-svn: 282912
2016-09-30 19:55:59 +00:00
Justin Lebar 7ca116cacf [CUDA] Make lambdas inherit __host__ and __device__ attributes from the scope in which they're created.
Summary: NVCC compat.  Fixes bug 30567.

Reviewers: tra

Subscribers: cfe-commits, rnk

Differential Revision: https://reviews.llvm.org/D25105

llvm-svn: 282880
2016-09-30 17:14:53 +00:00
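A minimal sketch of the inherited-attribute behavior described above (editorial illustration only; the function names are hypothetical and not taken from the patch's tests):

  __device__ int dev_helper() { return 1; }

  __device__ void device_fn() {
    // Created inside a __device__ function, so with this change the lambda
    // inherits __device__ and may call other __device__ functions.
    auto f = [] { return dev_helper(); };
    f();
  }

  void host_fn() {
    // Created in host code, so the lambda stays host-only; calling
    // dev_helper() from its body would be diagnosed as an illegal call.
    auto g = [] { return 2; };
    g();
  }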
Justin Lebar b17840de33 [CUDA] Disallow variable-length arrays in CUDA device code.
Reviewers: tra

Subscribers: cfe-commits, jhen

Differential Revision: https://reviews.llvm.org/D25050

llvm-svn: 282647
2016-09-28 22:45:58 +00:00
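A minimal sketch of the new restriction (illustrative only; the exact wording of clang's diagnostic may differ):

  __device__ void device_fn(int n) {
    int buf[n];        // rejected with this patch: variable-length arrays
    buf[0] = n;        // are not supported in CUDA device code
  }

  void host_fn(int n) {
    int buf[n];        // unchanged: VLAs remain a host-side GNU extension
    buf[0] = n;
  }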
Justin Lebar 2a8db34044 [CUDA] Disallow exceptions in device code.
Reviewers: tra

Subscribers: cfe-commits, jhen

Differential Revision: https://reviews.llvm.org/D25036

llvm-svn: 282646
2016-09-28 22:45:54 +00:00
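A minimal sketch of what is now rejected (illustrative only):

  __device__ void may_throw() {
    throw 1;           // rejected with this patch: 'throw' is not allowed
  }                    // in __device__ or __global__ code

  __device__ void may_catch() {
    try {              // 'try'/'catch' is likewise rejected in device code
    } catch (...) {
    }
  }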
Justin Lebar 26bb31123a [CUDA] Fix "declared here" note on deferred wrong-side errors.
Previously we weren't deferring these "declared here" notes, which is
obviously wrong.

llvm-svn: 278767
2016-08-16 00:48:21 +00:00
Justin Lebar 18e2d82297 [CUDA] Raise an error if a wrong-side call is codegen'ed.
Summary:
Some function calls in CUDA are allowed to appear in
semantically-correct programs but are an error if they're ever
codegen'ed.  Specifically, a host+device function may call a host
function, but it's an error if such a function is ever codegen'ed in
device mode (and vice versa).

Previously, clang made no attempt to catch these errors.  For the most
part, they would be caught by ptxas, and reported as "call to unknown
function 'foo'".

Now we catch these errors and report them the same as we report other
illegal calls (e.g. a call from a host function to a device function).

This has a small change in error-message behavior for calls that were
previously disallowed (e.g. calls from a host to a device function).
Previously, we'd catch disallowed calls fairly early, before doing
additional semantic checking e.g. of the call's arguments.  Now we catch
these illegal calls at the very end of our semantic checks, so we'll
only emit an "illegal CUDA call" error if the call is otherwise
well-formed.

Reviewers: tra, rnk

Subscribers: cfe-commits

Differential Revision: https://reviews.llvm.org/D23242

llvm-svn: 278759
2016-08-15 23:00:49 +00:00
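A minimal sketch of a deferred wrong-side error (editorial illustration; function names are hypothetical):

  __host__ int host_only();

  __host__ __device__ int wrapper() {
    return host_only();  // semantically accepted in a host+device function...
  }

  __global__ void kernel(int *out) {
    *out = wrapper();    // ...but once wrapper() is codegen'ed for the device,
                         // clang now reports the wrong-side call here instead
                         // of ptxas failing with "call to unknown function"
  }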
Justin Lebar e71c08f84c [CUDA] Use the multi-element remove function in EraseUnwantedCUDAMatches.
Summary:
Bug pointed out by Benjamin Kramer in r264008.  I think the bug is
benign because by the time this is called, we should only have at most
two overloads to consider (either a host and a device overload, or a
host+device overload, but not all three).

Reviewers: tra

Subscribers: cfe-commits, bkramer

Differential Revision: http://reviews.llvm.org/D21914

llvm-svn: 275233
2016-07-12 23:23:13 +00:00
Artem Belevich 3650bbeebc [CUDA] Do not allow non-empty destructors for global device-side variables.
According to the CUDA Programming Guide (v7.5, E2.3.1):
> __device__, __constant__ and __shared__ variables defined in namespace
> scope, that are of class type, cannot have a non-empty constructor or a
> non-empty destructor.

Clang already deals with device-side constructors (see D15305).
This patch enforces similar rules for destructors.

Differential Revision: http://reviews.llvm.org/D20140

llvm-svn: 270108
2016-05-19 20:13:53 +00:00
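A minimal sketch of the rule (editorial illustration; names are hypothetical):

  struct Empty {
    __device__ ~Empty() {}                   // empty destructor: allowed
  };
  __device__ Empty ok_var;

  __device__ int live_count;
  struct Counted {
    __device__ ~Counted() { --live_count; }  // non-empty destructor
  };
  __device__ Counted bad_var;                // rejected with this patch: a
                                             // global device-side variable may
                                             // not have a non-empty destructor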
Justin Lebar ba122ab42f [CUDA] Make unattributed constexpr functions implicitly host+device.
With this patch, a constexpr function is by default implicitly host+device
unless:

 a) it's a variadic function (variadic functions are not allowed on the
    device side), or
 b) it's preceded by a __device__ overload in a system header.

The restriction on overloading __host__ __device__ functions on the
basis of their CUDA attributes remains in place, but we use (b) to allow
us to define __device__ overloads for constexpr functions in cmath,
which would otherwise be __host__ __device__ and thus not overloadable.

You can disable this behavior with -fno-cuda-host-device-constexpr.

Reviewers: tra, rnk, rsmith

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D18380

llvm-svn: 264964
2016-03-30 23:30:21 +00:00
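A minimal sketch of the new default (editorial illustration; names are hypothetical):

  // No target attributes, but callable from a kernel because an unattributed
  // constexpr function is now implicitly __host__ __device__.
  constexpr int square(int x) { return x * x; }

  __global__ void kernel(int *out) { *out = square(7); }

Variadic constexpr functions are excluded, and the whole behavior can be turned off with -fno-cuda-host-device-constexpr, as noted above.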
Justin Lebar 3918647a11 [CUDA] Fix order of overloading preferences in comment.
llvm-svn: 264741
2016-03-29 16:24:22 +00:00
Justin Lebar 25c4a81e79 [CUDA] Remove three obsolete CUDA cc1 flags.
Summary:
* -fcuda-target-overloads

  Previously unconditionally set to true by the driver.  Necessary for
  correct functioning of the compiler -- our CUDA headers wrapper won't
  compile without this.

* -fcuda-disable-target-call-checks

  Previously unconditionally set to true by the driver.  Necessary to
  compile almost any external CUDA code -- almost all libraries assume
  that host+device code can call host or device functions.

* -fcuda-allow-host-calls-from-host-device

  No effect when target overloading is enabled.

Reviewers: tra

Subscribers: rsmith, cfe-commits

Differential Revision: http://reviews.llvm.org/D18416

llvm-svn: 264739
2016-03-29 16:24:16 +00:00
Justin Lebar e6a2cc12a3 [sema] [CUDA] Use std algorithms in EraseUnwantedCUDAMatchesImpl.
Summary: NFC

Reviewers: tra

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D18327

llvm-svn: 264008
2016-03-22 00:09:25 +00:00
Artem Belevich 186091094a [CUDA] Tweak attribute-based overload resolution to match nvcc behavior.
This is an artefact of split-mode CUDA compilation that we need to
mimic. HD functions are sometimes allowed to call H or D functions. Due
to the split compilation mode, device-side compilation does not see
host-only functions at all, so they are never considered. For clang, both
the H and D variants become function overloads visible to the
compiler. Normally the target attribute is considered only if the C++
rules cannot determine which function is better. However, in this case we
need to ignore functions that would not be present during the current
compilation phase before we apply the normal overload resolution rules.

Changes:
* introduced another level of call preference to better describe
  possible call combinations.
* removed WrongSide functions from consideration if the set contains
  SameSide function.
* disabled H->D, D->H and G->H calls. These combinations are
  not allowed by CUDA and we were reluctantly allowing them to work
  around device-side calls to math functions in std namespace.
  We no longer need it after r258880.

Differential Revision: http://reviews.llvm.org/D16870

llvm-svn: 260697
2016-02-12 18:29:18 +00:00
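A minimal sketch of the preference order this change aims for (editorial illustration; names are hypothetical):

  __host__   int pick() { return 1; }   // host-only overload
  __device__ int pick() { return 2; }   // device-only overload

  __host__ __device__ int caller() {
    return pick();   // the SameSide overload wins: __device__ pick() during
  }                  // device compilation, __host__ pick() during host
                     // compilation, mirroring nvcc's split compilation

  __device__ int from_device() {
    return pick();   // resolves to the __device__ overload; a plain D->H
  }                  // call is no longer accepted after this change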
Artem Belevich 97c01c35f8 [CUDA] Do not allow dynamic initialization of global device side variables.
In general, CUDA does not allow dynamic initialization of
global device-side variables. One exception is that CUDA allows
records with empty constructors, as described in section E2.2.1 of
the CUDA 7.5 Programming Guide.

This patch applies initializer checks for all device-side variables.
Empty constructors are accepted, but no code is generated for them.

Differential Revision: http://reviews.llvm.org/D15305

llvm-svn: 259592
2016-02-02 22:29:48 +00:00
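A minimal sketch of what is accepted versus rejected (editorial illustration; names are hypothetical):

  struct EmptyCtor {
    __device__ EmptyCtor() {}      // empty constructor: accepted, and no
  };                               // initialization code is emitted for it

  __device__ EmptyCtor ok_obj;
  __device__ int ok_scalar = 42;   // constant initializer: accepted

  __device__ int compute();
  __device__ int bad = compute();  // rejected with this patch: dynamic
                                   // initialization of a global device-side
                                   // variable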
Justin Lebar c66a10652a [CUDA] Only allow __global__ on free functions and static member functions.
Summary:
Warn for NVCC compatibility if you declare a static member function or
inline function as __global__.

Reviewers: tra

Subscribers: jhen, echristo, cfe-commits

Differential Revision: http://reviews.llvm.org/D16261

llvm-svn: 258263
2016-01-20 00:26:57 +00:00
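A minimal sketch of the cases described above (editorial illustration; the diagnostic wording is paraphrased):

  __global__ void free_kernel() {}           // OK: free function

  struct S {
    __global__ void method();                // rejected: __global__ is not
                                             // allowed on non-static member
                                             // functions
    static __global__ void static_kernel();  // allowed, but warns for nvcc
  };                                         // compatibility

  inline __global__ void inline_kernel() {}  // allowed, but warns for nvcc
                                             // compatibility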
Artem Belevich 94a55e8169 [CUDA] Allow function overloads in CUDA based on host/device attributes.
The patch makes it possible to parse CUDA files that contain host/device
functions with identical signatures but different attributes, without
having to physically split the source into host-only and device-only parts.

This change is needed in order to parse CUDA header files that have
a lot of name clashes with standard include files.

Gory details are in design doc here: https://goo.gl/EXnymm
Feel free to leave comments there or in this review thread.

This feature is controlled with CC1 option -fcuda-target-overloads
and is disabled by default.

Differential Revision: http://reviews.llvm.org/D12453

llvm-svn: 248295
2015-09-22 17:22:59 +00:00
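A minimal sketch of target-based overloading (editorial illustration; my_exp is a hypothetical name, while the real motivation is the header name clashes mentioned above):

  // With -fcuda-target-overloads these two definitions no longer conflict;
  // they become overloads distinguished by their target attributes.
  __host__   float my_exp(float x) { return x; }   // host implementation
  __device__ float my_exp(float x) { return x; }   // device implementation

  __global__ void kernel(float *out, float v) {
    *out = my_exp(v);   // binds to the __device__ overload on the device side
  }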
Eli Bendersky 4bdc50eccb Create a frontend flag to disable CUDA cross-target call checks
For CUDA source, Sema checks that the targets of call expressions make sense
(e.g. a host function can't call a device function).

This change adds a flag that lets us skip this check. The motivation is
source-to-source translation tools that have to accept code that's not
strictly kosher CUDA but is still accepted by nvcc. The translation tool can
then fix the code and leave calls that are semantically valid for the actual
compilation stage.

Differential Revision: http://reviews.llvm.org/D9036

llvm-svn: 235049
2015-04-15 22:27:06 +00:00
Jacques Pienaar a50178c23e CUDA: Add option to allow host device functions to call host functions
Committing code from review http://reviews.llvm.org/D7841

llvm-svn: 230385
2015-02-24 21:45:33 +00:00
Jacques Pienaar 5bdd67778f Consider calls from implicit host device functions as valid in SemaCUDA.
In SemaCUDA all implicit functions were considered host device, which led to
errors such as the following code snippet failing to compile:

struct Copyable {
  const Copyable& operator=(const Copyable& x) { return *this; }
};

struct Simple {
  Copyable b;
};

void foo() {
  Simple a, b;

  a = b;
}

Above, the implicit copy assignment operator was inferred as host device, but
only a host copy assignment operator was defined, which is an error in device
compilation mode.

Differential Revision: http://reviews.llvm.org/D6565

llvm-svn: 224358
2014-12-16 20:12:38 +00:00
Reid Kleckner bbc0178518 CUDA host device code with two code paths
Summary:
Allow CUDA host device functions with two code paths, using __CUDA_ARCH__
to distinguish which code path is being compiled.

For example:
  __host__ __device__ void host_device_function(void) {
  #ifdef __CUDA_ARCH__
    device_only_function();
  #else
    host_only_function();
  #endif
  }

Patch by Jacques Pienaar.

Reviewed By: rnk

Differential Revision: http://reviews.llvm.org/D6457

llvm-svn: 223271
2014-12-03 21:53:36 +00:00
Eli Bendersky f2787a0dc0 CUDA: mark the target of implicit intrinsics properly
r218624 implemented target inference for implicit special members. However,
other entities can be implicit - for example intrinsics. These can not have
inference running on them, so they should be marked host device as before. This
is the safest and most flexible setting, since by construction these functions
don't invoke anything, and we'd like them to be invokable from both host and
device code. LLVM's intrinsics definitions (where these intrinsics come from in
the case of CUDA/NVPTX) have no notion of target, so both host and device
intrinsics can be supported this way.

llvm-svn: 218688
2014-09-30 17:38:34 +00:00
Eli Bendersky 9a220fca4a CUDA: Fix incorrect target inference for implicit members.
As PR20495 demonstrates, Clang currently infers the CUDA target (host/device,
etc) for implicit members (constructors, etc.) incorrectly. This causes errors
and even assertions in Clang when compiling code (assertions in C++11 mode where
implicit move constructors are added into the mix).

Fix the problem by inferring the target from the methods the implicit member
should call (depending on its base classes and fields).

llvm-svn: 218624
2014-09-29 20:38:29 +00:00
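A minimal sketch of the corrected inference (editorial illustration; names are hypothetical):

  struct HostCopyable {
    HostCopyable() {}
    HostCopyable(const HostCopyable &) {}  // host-only copy constructor
  };

  struct Holder {
    HostCopyable m;
    // Holder's implicit copy constructor must call HostCopyable's host-only
    // copy constructor, so it is now inferred as host-only rather than
    // host+device, avoiding spurious device-side errors and assertions.
  };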
Eli Bendersky 7325e56c10 Split off CUDA-specific Sema parts to a new file
In line with SemaOpenMP.cpp etc., CUDA-specific semantic analysis code goes into
a separate file. This is in anticipation of adding extra functionality here in
the near future.

No change in functionality.

Review: http://reviews.llvm.org/D5160
llvm-svn: 217043
2014-09-03 15:27:03 +00:00