Found via codespell -q 3 -I ../clang-whitelist.txt
Where the whitelist consists of:
archtype
cas
classs
checkk
compres
definit
frome
iff
inteval
ith
lod
methode
nd
optin
ot
pres
statics
te
thru
Patch by luzpaz! (This is a subset of D44188 that applies cleanly with a few
files that have dubious fixes reverted.)
Differential revision: https://reviews.llvm.org/D44188
llvm-svn: 329399
Summary: This patch adds an additional flag to the OpenMP device offloading toolchain to link in the runtime library bitcode.
Reviewers: Hahnfeld, ABataev, carlo.bertolli, caomhin, grokos, hfinkel
Reviewed By: ABataev, grokos
Subscribers: jholewinski, guansong, cfe-commits
Differential Revision: https://reviews.llvm.org/D43197
llvm-svn: 327460
Summary: This patch adds an additional flag to the OpenMP device offloading toolchain to link in the runtime library bitcode.
Reviewers: Hahnfeld, ABataev, carlo.bertolli, caomhin, grokos, hfinkel
Reviewed By: ABataev, grokos
Subscribers: jholewinski, guansong, cfe-commits
Differential Revision: https://reviews.llvm.org/D43197
llvm-svn: 327438
As a first step, pass '-c/--compile-only' to ptxas so that it
doesn't complain about references to external functions. This
will successfully generate object files, but they won't work
at runtime because the registration routines need to be adapted.
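For illustration (not part of this patch; the source file name is a
placeholder), this is the kind of compile-only invocation that now
produces an object file:
  clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda -c foo.c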
Differential Revision: https://reviews.llvm.org/D42921
llvm-svn: 324878
If the CUDA toolkit is not installed to its default locations
in /usr/local/cuda, the user is forced to specify --cuda-path.
This is tedious and the driver can be smarter if well-known tools
(like ptxas) can already be found in the PATH environment variable.
Add the option --cuda-path-ignore-env if the user wants to ignore
environment variables that are set. Also use it in the tests to make sure
the driver always finds the same CUDA installation, regardless
of the user's environment.
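For illustration (not part of this patch; the file name and GPU arch
are placeholders), with this change
  clang --cuda-gpu-arch=sm_35 -c axpy.cu
may pick up a CUDA installation through a ptxas found in PATH, while
  clang --cuda-path-ignore-env --cuda-gpu-arch=sm_35 -c axpy.cu
ignores the environment.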
Differential Revision: https://reviews.llvm.org/D42642
llvm-svn: 323848
Clang can use CUDA-9.1 now, though the new APIs are not implemented yet.
The major change is that the headers in CUDA-9.1 went through substantial
changes that started in CUDA-9.0 and required corresponding changes
in the CUDA compatibility headers provided by clang.
There are two major issues:
* CUDA SDK no longer provides declarations for libdevice functions.
* A lot of device-side functions have become nvcc's builtins and
CUDA headers no longer contain their implementations.
This patch changes the way CUDA headers are handled if we compile
with CUDA 9.x. Both 9.0 and 9.1 are affected.
* Clang provides its own declarations of libdevice functions.
* For CUDA-9.x, clang now provides implementations of device-side
'standard library' functions using libdevice.
This patch should not affect compilation with CUDA-8. There may be
some observable differences for CUDA-9.0, though they are not expected
to affect functionality.
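For illustration (not part of this patch; the install path, file name, and
GPU archs are placeholders), compiling against a CUDA-9.1 installation
might look like:
  clang --cuda-path=/usr/local/cuda-9.1 --cuda-gpu-arch=sm_35 --cuda-gpu-arch=sm_70 -c axpy.cu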
Tested: CUDA test-suite tests for all supported combinations of:
CUDA: 7.0,7.5,8.0,9.0,9.1
GPU: sm_20, sm_35, sm_60, sm_70
Differential Revision: https://reviews.llvm.org/D42513
llvm-svn: 323713
../tools/clang/lib/Driver/ToolChains/Cuda.cpp:80:18: error: reference to non-static member function must be called; did you mean to call it with no arguments?
    if (Distro(D.getVFS).IsDebian())
               ~~^~~~~~
                       ()
llvm-svn: 319322
This was previously done in some places, but for example not for
bundling so that single object compilation with -c failed. In
addition, cubin was used for all file types during unbundling, which
is incorrect for assembly files that are passed to ptxas.
Tighten up the tests so that we can't regress in that area.
Differential Revision: https://reviews.llvm.org/D40250
llvm-svn: 318763
Summary:
CUDA 9's minimum sm is sm_30.
Ideally we should also make sm_30 the default when compiling with CUDA
9, but that seems harder than it should be.
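For illustration (not part of this patch; the file name is a placeholder),
a supported arch can be selected explicitly in the meantime:
  clang --cuda-gpu-arch=sm_30 -c axpy.cu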
Subscribers: sanjoy
Differential Revision: https://reviews.llvm.org/D39109
llvm-svn: 316611
For the shuffle instructions in reductions we need at least sm_30,
but the user may want to customize the default architecture.
Differential Revision: https://reviews.llvm.org/D38883
llvm-svn: 315996
If the user passes -nocudalib, we can live without it being present.
Simplify the code by just checking whether LibDeviceMap is empty.
Differential Revision: https://reviews.llvm.org/D38901
llvm-svn: 315902
Summary: If we only use the compiler front-end, do not throw an error about the CUDA device library not being found. This allows the front-end to be run on systems where no CUDA installation is found.
Reviewers: Hahnfeld, ABataev, carlo.bertolli, caomhin, tra
Reviewed By: tra
Subscribers: hfinkel, tra, cfe-commits
Differential Revision: https://reviews.llvm.org/D37914
llvm-svn: 314217
Summary: Enable the -nocudalib flag for the OpenMP device offloading toolchain as well. Currently it can only be used for the CUDA toolchain.
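For illustration (not part of this patch; the source file name is a
placeholder):
  clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda -nocudalib -c foo.c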
Reviewers: Hahnfeld, ABataev, carlo.bertolli, caomhin, hfinkel, tra
Reviewed By: tra
Subscribers: hfinkel, cfe-commits
Differential Revision: https://reviews.llvm.org/D37913
llvm-svn: 314164
Summary: When composing the output file name, the path to the file is being dropped. The full path is required.
Reviewers: Hahnfeld, ABataev, caomhin, carlo.bertolli, hfinkel, tra
Reviewed By: tra
Subscribers: hfinkel, tra, cfe-commits
Differential Revision: https://reviews.llvm.org/D37912
llvm-svn: 314156
Summary: If we only use the compiler front-end, do not throw an error about the CUDA device library not being found. This allows the front-end to be run on systems where no CUDA installation is found.
Reviewers: Hahnfeld, ABataev, carlo.bertolli, caomhin, tra
Reviewed By: tra
Subscribers: hfinkel, tra, cfe-commits
Differential Revision: https://reviews.llvm.org/D37914
llvm-svn: 314150
For now CUDA-9 is not included in the list of CUDA versions clang
searches for, so the path to CUDA-9 must be explicitly passed
via --cuda-path=.
On the LLVM side, NVPTX added the sm_70 GPU type, which bumps the required
PTX version to 6.0 but is otherwise equivalent to sm_62 at the moment.
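An illustrative invocation (install path and file name are placeholders):
  clang --cuda-path=/usr/local/cuda-9.0 --cuda-gpu-arch=sm_70 -c axpy.cu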
Differential Revision: https://reviews.llvm.org/D37576
llvm-svn: 312734
Create a separate test file to contain all tests for OpenMP
offloading to GPUs.
Make libdevice checking more robust by accounting for
the case in which no libdevice is found.
These changes are in connection with diff D29660.
llvm-svn: 310718
Summary: Invoking the compiler inside a script causes the clang-offload-bundler executable to not be found. This patch enables looking up executables in the driver directory, where clang-offload-bundler resides.
Reviewers: hfinkel, carlo.bertolli, arpith-jacob, ABataev, caomhin
Reviewed By: hfinkel
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D36537
llvm-svn: 310513
Summary:
This flag "--fopenmp-ptx=" enables the overwriting of the default PTX version used for GPU offloaded OpenMP target regions: "+ptx42".
Reviewers: arpith-jacob, caomhin, carlo.bertolli, ABataev, Hahnfeld, jlebar, hfinkel, tstellar
Reviewed By: ABataev
Subscribers: rengolin, cfe-commits
Differential Revision: https://reviews.llvm.org/D29660
llvm-svn: 310489
Summary: Previously we added the "-c" flag, which gets passed to PTXAS by default, to generate relocatable OpenMP target code. This set of flags exposes control over this behaviour.
Reviewers: arpith-jacob, caomhin, carlo.bertolli, ABataev, Hahnfeld, jlebar, hfinkel, tstellar
Reviewed By: ABataev
Subscribers: Hahnfeld, rengolin, cfe-commits
Differential Revision: https://reviews.llvm.org/D29659
llvm-svn: 310484
The commit r310291 introduced the failure. r310332 was a test fix commit and
r310300 was a followup commit. I reverted these two to avoid merge conflicts
when reverting.
The 'openmp-offload.c' test is failing on Darwin because the following
run lines:
// RUN: touch %t1.o
// RUN: touch %t2.o
// RUN: %clang -### -no-canonical-prefixes -fopenmp=libomp -fopenmp-targets=nvptx64-nvidia-cuda -save-temps -no-canonical-prefixes %t1.o %t2.o 2>&1 \
// RUN: | FileCheck -check-prefix=CHK-TWOCUBIN %s
trigger the following assertion:
Driver.cpp:3418:
assert(CachedResults.find(ActionTC) != CachedResults.end() &&
"Result does not exist??");
llvm-svn: 310345
Summary: When device offloading is enabled and the device is an NVIDIA GPU, OpenMP target regions must be compiled with relocation enabled by passing the "-c" flag to the PTXAS invocation.
Reviewers: arpith-jacob, caomhin, carlo.bertolli, ABataev, Hahnfeld, jlebar, hfinkel, tstellar
Reviewed By: Hahnfeld
Subscribers: Hahnfeld, rengolin, mkuron, cfe-commits
Differential Revision: https://reviews.llvm.org/D29642
llvm-svn: 310300
Summary: When compiling code being offloaded by OpenMP to an NVIDIA GPU, pass the -v flag to PTXAS if it was passed to the Clang driver.
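For illustration (not part of this patch; the source file name is a
placeholder), passing -v to the driver, e.g.
  clang -v -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda -c foo.c
now also makes the PTXAS invocation verbose.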
Reviewers: arpith-jacob, caomhin, carlo.bertolli, ABataev, jlebar, hfinkel, tstellar
Reviewed By: jlebar
Subscribers: Hahnfeld, rengolin, cfe-commits
Differential Revision: https://reviews.llvm.org/D29644
llvm-svn: 310295
Summary:
OpenMP has the ability to offload target regions to devices which may have different architectures.
A new -fopenmp-target-arch flag is introduced to specify the device architecture.
In this patch I use the new flag to specify the compute capability of the underlying NVIDIA architecture for the OpenMP offloading CUDA tool chain.
Only a host-offloading test is provided since full device offloading capability will only be available when [[ https://reviews.llvm.org/D29654 | D29654 ]] lands.
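An illustrative invocation (the source file name and the chosen compute
capability are placeholders; the flag syntax is as proposed in this patch):
  clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda -fopenmp-target-arch=sm_60 -c foo.c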
Reviewers: hfinkel, Hahnfeld, carlo.bertolli, caomhin, ABataev
Reviewed By: hfinkel
Subscribers: guansong, cfe-commits
Tags: #openmp
Differential Revision: https://reviews.llvm.org/D34784
llvm-svn: 310263
Summary: Pass the type of the device offloading when building the tool chain for a particular target architecture. This is required when supporting multiple tool chains that target a single device type. In our particular use case, the OpenMP and CUDA tool chains will use the same ```addClangTargetOptions``` method. This enables the reuse of common options and ensures control over options only supported by a particular tool chain.
Reviewers: arpith-jacob, caomhin, carlo.bertolli, ABataev, jlebar, hfinkel, tstellar, Hahnfeld
Reviewed By: hfinkel
Subscribers: jgravelle-google, aheejin, rengolin, jfb, dschuff, sbc100, cfe-commits
Differential Revision: https://reviews.llvm.org/D29647
llvm-svn: 307272
Summary:
(This is a move-only refactoring patch. There are no functionality changes.)
This patch splits apart the Clang driver's tool and toolchain implementation
files. Each target platform toolchain is moved to its own file, along with the
closest-related tools. Each target platform toolchain has separate headers and
implementation files, so the hierarchy of classes is unchanged.
There are some remaining shared free functions, mostly from Tools.cpp. Several
of these move to their own architecture-specific files, similar to r296056. Some
of them are only used by a single target platform; since the tools and
toolchains are now together, some helpers now live in a platform-specific file.
The rest are helpers related to manipulating argument lists, so they are now
in a new file pair, CommonArgs.h and .cpp.
I've tried to cluster the code logically, which is fairly straightforward for
most of the target platforms and shared architectures. I think I've made
reasonable choices for these, as well as the various shared helpers; but of
course, I'm happy to hear feedback in the review.
There are some particular things I don't like about this patch, but haven't been
able to find a better overall solution. The first is the proliferation of files:
there are several files that are tiny because the toolchain is not very
different from its base (usually the Gnu tools/toolchain). I think this is
mostly a reflection of the true complexity, though, so it may not be "fixable"
in any reasonable sense. The second thing I don't like is the includes like
"../Something.h". I've avoided this largely by clustering into the current file
structure. However, a few of these includes remain, and in those cases it
doesn't make sense to me to sink an existing file any deeper.
Reviewers: rsmith, mehdi_amini, compnerd, rnk, javed.absar
Subscribers: emaste, jfb, danalbert, srhines, dschuff, jyknight, nemanjai, nhaehnle, mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D30372
llvm-svn: 297250