// REQUIRES: clang-driver
// REQUIRES: x86-registered-target
// REQUIRES: nvptx-registered-target
//
// # Check that we properly detect CUDA installation.
// RUN: %clang -v --target=i386-unknown-linux \
// RUN: --sysroot=%S/no-cuda-there 2>&1 | FileCheck %s -check-prefix NOCUDA
// RUN: %clang -v --target=i386-apple-macosx \
// RUN: --sysroot=%S/no-cuda-there 2>&1 | FileCheck %s -check-prefix NOCUDA

// RUN: %clang -v --target=i386-unknown-linux \
// RUN: --sysroot=%S/Inputs/CUDA 2>&1 | FileCheck %s
// RUN: %clang -v --target=i386-apple-macosx \
// RUN: --sysroot=%S/Inputs/CUDA 2>&1 | FileCheck %s

// RUN: %clang -v --target=i386-unknown-linux \
// RUN: --cuda-path=%S/Inputs/CUDA/usr/local/cuda 2>&1 | FileCheck %s
// RUN: %clang -v --target=i386-apple-macosx \
// RUN: --cuda-path=%S/Inputs/CUDA/usr/local/cuda 2>&1 | FileCheck %s
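
// Note: the CHECK and NOCUDA prefixes used above are defined at the bottom of
// this file; they assert that the verbose output does (or does not) report a
// "Found CUDA installation:" line for the given sysroot or --cuda-path.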

// Make sure we map libdevice bitcode files to proper GPUs. These
// tests use Inputs/CUDA_80 which has a full set of libdevice files.
// However, libdevice mapping only matches CUDA-7.x at the moment.
// sm_2x, sm_32 -> compute_20
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_21 \
// RUN: --cuda-path=%S/Inputs/CUDA_80/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON \
// RUN: -check-prefix LIBDEVICE -check-prefix LIBDEVICE20
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_32 \
// RUN: --cuda-path=%S/Inputs/CUDA_80/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON \
// RUN: -check-prefix LIBDEVICE -check-prefix LIBDEVICE20
// sm_30, sm_6x map to compute_30.
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_30 \
// RUN: --cuda-path=%S/Inputs/CUDA_80/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON \
// RUN: -check-prefix LIBDEVICE -check-prefix LIBDEVICE30
// sm_5x is a special case. Maps to compute_30 for cuda-7.x only.
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_50 \
// RUN: --cuda-path=%S/Inputs/CUDA/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON \
// RUN: -check-prefix LIBDEVICE -check-prefix LIBDEVICE30
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_60 \
// RUN: --cuda-path=%S/Inputs/CUDA_80/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON \
// RUN: -check-prefix LIBDEVICE -check-prefix LIBDEVICE30
// sm_35 and sm_37 -> compute_35
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_35 \
// RUN: --cuda-path=%S/Inputs/CUDA_80/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON -check-prefix CUDAINC \
// RUN: -check-prefix LIBDEVICE -check-prefix LIBDEVICE35
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_37 \
// RUN: --cuda-path=%S/Inputs/CUDA_80/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON -check-prefix CUDAINC \
// RUN: -check-prefix LIBDEVICE -check-prefix LIBDEVICE35
// sm_5x -> compute_50 for CUDA-8.0 and newer.
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_50 \
// RUN: --cuda-path=%S/Inputs/CUDA_80/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON \
// RUN: -check-prefix LIBDEVICE -check-prefix LIBDEVICE50
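
// Note: each LIBDEVICExx prefix above matches the corresponding
// libdevice.compute_xx.10.bc file that the driver passes to the device-side
// compilation via "-mlink-cuda-bitcode" (see the LIBDEVICE* check lines at
// the bottom of this file).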

// Verify that -nocudainc prevents adding include path to CUDA headers.
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_35 \
// RUN: -nocudainc --cuda-path=%S/Inputs/CUDA/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON -check-prefix NOCUDAINC \
// RUN: -check-prefix LIBDEVICE -check-prefix LIBDEVICE35
// RUN: %clang -### -v --target=i386-apple-macosx --cuda-gpu-arch=sm_35 \
// RUN: -nocudainc --cuda-path=%S/Inputs/CUDA/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON -check-prefix NOCUDAINC \
// RUN: -check-prefix LIBDEVICE -check-prefix LIBDEVICE35
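
// Note: NOCUDAINC asserts that the CUDA include directory is not added via
// "-internal-isystem" and that "__clang_cuda_runtime_wrapper.h" is not
// force-included (see the NOCUDAINC-NOT check lines at the bottom).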

// We should not add any CUDA include paths if there's no valid CUDA installation.
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_35 \
// RUN: --cuda-path=%S/no-cuda-there %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON -check-prefix NOCUDAINC
// RUN: %clang -### -v --target=i386-apple-macosx --cuda-gpu-arch=sm_35 \
// RUN: --cuda-path=%S/no-cuda-there %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON -check-prefix NOCUDAINC
|
2015-11-18 06:28:46 +08:00
|
|
|
|
2016-08-03 07:12:51 +08:00
|
|
|
// Verify that we get an error if there's no libdevice library to link with.
|
2016-09-29 01:47:40 +08:00
|
|
|
// NOTE: Inputs/CUDA deliberately does *not* have libdevice.compute_20 for this purpose.
|
|
|
|
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_20 \
|
2015-11-18 06:28:50 +08:00
|
|
|
// RUN: --cuda-path=%S/Inputs/CUDA/usr/local/cuda %s 2>&1 \
|
2016-08-03 07:12:51 +08:00
|
|
|
// RUN: | FileCheck %s -check-prefix COMMON -check-prefix MISSINGLIBDEVICE
|
[CUDA] Driver changes to support CUDA compilation on MacOS.
Summary:
Compiling CUDA device code requires us to know the host toolchain,
because CUDA device-side compiles pull in e.g. host headers.
When we only supported Linux compilation, this worked because
CudaToolChain, which is responsible for device-side CUDA compilation,
inherited from the Linux toolchain. But in order to support MacOS,
CudaToolChain needs to take a HostToolChain pointer.
Because a CUDA toolchain now requires a host TC, we no longer will
create a CUDA toolchain from Driver::getToolChain -- you have to go
through CreateOffloadingDeviceToolChains. I am *pretty* sure this is
correct, and that previously any attempt to create a CUDA toolchain
through getToolChain() would eventually have resulted in us throwing
"error: unsupported use of NVPTX for host compilation".
In any case hacking getToolChain to create a CUDA+host toolchain would
be wrong, because a Driver can be reused for multiple compilations,
potentially with different host TCs, and getToolChain will cache the
result, causing us to potentially use a stale host TC.
So that's the main change in this patch.
In addition, we have to pull CudaInstallationDetector out of Generic_GCC
and into a top-level class. It's now used by the Generic_GCC and MachO
toolchains.
Reviewers: tra
Subscribers: rryan, hfinkel, sfantao
Differential Revision: https://reviews.llvm.org/D26774
llvm-svn: 287285
2016-11-18 08:41:22 +08:00
|
|
|
// RUN: %clang -### -v --target=i386-apple-macosx --cuda-gpu-arch=sm_20 \
|
|
|
|
// RUN: --cuda-path=%S/Inputs/CUDA/usr/local/cuda %s 2>&1 \
|
|
|
|
// RUN: | FileCheck %s -check-prefix COMMON -check-prefix MISSINGLIBDEVICE
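
// Note: MISSINGLIBDEVICE expects the driver diagnostic
// "error: cannot find libdevice for sm_20." (checked at the bottom of this file).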

// Verify that -nocudalib prevents linking libdevice bitcode in.
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_35 \
// RUN: -nocudalib --cuda-path=%S/Inputs/CUDA/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON -check-prefix NOLIBDEVICE
// RUN: %clang -### -v --target=i386-apple-macosx --cuda-gpu-arch=sm_35 \
// RUN: -nocudalib --cuda-path=%S/Inputs/CUDA/usr/local/cuda %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON -check-prefix NOLIBDEVICE
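
// Note: NOLIBDEVICE asserts that no "-mlink-cuda-bitcode" flag, libdevice
// bitcode file, or "+ptx42" target feature is passed to the device-side
// compilation (see the NOLIBDEVICE-NOT check lines at the bottom).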

// Verify that we don't add include paths, link with libdevice, or
// -include __clang_cuda_runtime_wrapper.h without a valid CUDA installation.
// RUN: %clang -### -v --target=i386-unknown-linux --cuda-gpu-arch=sm_35 \
// RUN: --cuda-path=%S/no-cuda-there %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON \
// RUN: -check-prefix NOCUDAINC -check-prefix NOLIBDEVICE
// RUN: %clang -### -v --target=i386-apple-macosx --cuda-gpu-arch=sm_35 \
// RUN: --cuda-path=%S/no-cuda-there %s 2>&1 \
// RUN: | FileCheck %s -check-prefix COMMON \
// RUN: -check-prefix NOCUDAINC -check-prefix NOLIBDEVICE

// Verify that C++ include paths are passed for both host and device frontends.
// RUN: %clang -### -no-canonical-prefixes -target x86_64-linux-gnu %s \
// RUN: --stdlib=libstdc++ --sysroot=%S/Inputs/ubuntu_14.04_multiarch_tree2 \
// RUN: --gcc-toolchain="" 2>&1 \
// RUN: | FileCheck %s --check-prefix CHECK-CXXINCLUDE
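
// Note: CHECK-CXXINCLUDE verifies that the libstdc++ include path from the
// Ubuntu multiarch sysroot (include/c++/4.8) appears on both the nvptx64
// device-side cc1 line and the x86_64 host-side cc1 line.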

// CHECK: Found CUDA installation: {{.*}}/Inputs/CUDA/usr/local/cuda
// NOCUDA-NOT: Found CUDA installation:

// MISSINGLIBDEVICE: error: cannot find libdevice for sm_20.

// COMMON: "-triple" "nvptx-nvidia-cuda"
// COMMON-SAME: "-fcuda-is-device"
// LIBDEVICE-SAME: "-mlink-cuda-bitcode"
// NOLIBDEVICE-NOT: "-mlink-cuda-bitcode"
// LIBDEVICE20-SAME: libdevice.compute_20.10.bc
// LIBDEVICE30-SAME: libdevice.compute_30.10.bc
// LIBDEVICE35-SAME: libdevice.compute_35.10.bc
// LIBDEVICE50-SAME: libdevice.compute_50.10.bc
// NOLIBDEVICE-NOT: libdevice.compute_{{.*}}.bc
// LIBDEVICE-SAME: "-target-feature" "+ptx42"
// NOLIBDEVICE-NOT: "-target-feature" "+ptx42"
// CUDAINC-SAME: "-internal-isystem" "{{.*}}/Inputs/CUDA{{[_0-9]+}}/usr/local/cuda/include"
// NOCUDAINC-NOT: "-internal-isystem" "{{.*}}/cuda/include"
// CUDAINC-SAME: "-include" "__clang_cuda_runtime_wrapper.h"
// NOCUDAINC-NOT: "-include" "__clang_cuda_runtime_wrapper.h"
// -internal-externc-isystem flags must come *after* the cuda include flags,
// because we must search the cuda include directory first.
// CUDAINC-SAME: "-internal-externc-isystem"
// COMMON-SAME: "-x" "cuda"
// CHECK-CXXINCLUDE: clang{{.*}} "-cc1" "-triple" "nvptx64-nvidia-cuda"
// CHECK-CXXINCLUDE-SAME: {{.*}}"-internal-isystem" "{{.+}}/include/c++/4.8"
// CHECK-CXXINCLUDE: clang{{.*}} "-cc1" "-triple" "x86_64--linux-gnu"
// CHECK-CXXINCLUDE-SAME: {{.*}}"-internal-isystem" "{{.+}}/include/c++/4.8"
// CHECK-CXXINCLUDE: ld{{.*}}"