to be specified for a template template parameter whenever the parameter is at
least as specialized as the argument (when there's an obvious and correct
mapping from uses of the parameter to uses of the argument). For example, a
template with more parameters can be passed to a template template parameter
with fewer, if those trailing parameters have default arguments.
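To make the shape of the relaxed rule concrete, here is a minimal sketch
(type names are hypothetical, not taken from this patch):
  #include <memory>
  // 'Vec' takes two template parameters, but the trailing one has a default
  // argument, so under the relaxed rule it can bind to a template template
  // parameter that expects a single parameter.
  template <typename T, typename Alloc = std::allocator<T>>
  struct Vec {};
  template <template <typename> class Container>
  struct Wrapper {
    Container<int> data;
  };
  Wrapper<Vec> w;  // accepted under the DR resolution; rejected by the old rules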
This is disabled by default, despite being a DR resolution, as it's fairly
broken in its current state: there are no partial ordering rules to cope with
template template parameters that have different parameter lists, meaning that
code that attempts to decompose template-ids based on arity can hit unavoidable
ambiguity issues.
The diagnostics produced on a non-matching argument are also pretty bad right
now, but I aim to improve them in a subsequent commit.
llvm-svn: 290792
Windows uses PE/COFF which is inherently position independent. The use
of the PIC model is unnecessary. In fact, we would generate invalid
code using the ELF PIC model when PIC was enabled previously. Now that
we no longer accept -fPIC and -fpic, this switches the internal
representation to the static model to permit us to make PIC modules
invalid when targeting Windows. This should not change the code
generation, only the internal state management.
llvm-svn: 290569
Use of these flags would result in the use of ELF-style PIE/PIC code
which is incorrect on Windows. Windows is inherently PIC by means of
the DLL slide that occurs at load. This also mirrors the behaviour on
GCC for MinGW. Currently, the Windows x86_64 target forces the relocation
model to PIC (Level 2). This is unchanged for now, though we should
remove any assumptions on that and change it to a static relocation
model.
llvm-svn: 290533
manager, and a code path to use it.
The option is actually a top-level option but does contain
'experimental' in the name. This is the compromise suggested by Richard
in discussions. We expect this option will be around long enough and
have enough users towards the end that it merits not being relegated to
CC1, but it still needs to be clear that this option will go away at
some point.
The backend code is a fresh codepath dedicated to handling the flow with
the new pass manager. This was also Richard's suggested code structuring
to essentially leave a clean path for development rather than carrying
complexity or idiosyncrasies of how we do things just to share code with
the parts of this in common with the legacy pass manager. And it turns
out, not much is really in common even though we use the legacy pass
manager for codegen at this point.
I've switched a couple of tests to run with the new pass manager, and
they appear to work. There are still plenty of bugs that need squashing
(just with basic experiments I've found two already!) but they aren't in
this code, and the whole point is to expose the necessary hooks to start
experimenting with the pass manager in more realistic scenarios.
That said, I want to *strongly caution* anyone itching to play with
this: it is still *very shaky*. Several large components have not yet
been shaken down. For example I have bugs in both the always inliner and
inliner that I have already spotted and will be fixing independently.
Still, this is a fun milestone. =D
One thing not in this patch (but that might be very reasonable to add)
is some level of support for raw textual pass pipelines such as what
Sean had a patch for some time ago. I'm mostly interested in the more
traditional flow of getting the IR out of Clang and then running it
through opt, but I can see other use cases so someone may want to add
it.
And of course, *many* features are not yet supported!
- O1 is currently more like O2
- None of the sanitizers are wired up
- ObjC ARC optimizer isn't wired up
- ...
So plenty of stuff still left to do!
Differential Revision: https://reviews.llvm.org/D28077
llvm-svn: 290450
Much to my surprise, '-disable-llvm-optzns', which I thought was the
magical flag I wanted to get at the raw LLVM IR coming out of Clang,
doesn't do that. It still runs some passes over the IR. I don't want
that, I really want the *raw* IR coming out of Clang and I strongly
suspect everyone else using it is in the same camp.
There is actually a flag that does what I want that I didn't know about
called '-disable-llvm-passes'. I suspect many others don't know about it
either. It both does what I want and is much simpler.
This removes the confusing version and makes that spelling of the flag
an alias for '-disable-llvm-passes'. I've also moved everything in Clang
to use the 'passes' spelling as it seems both more accurate (*all* LLVM
passes are disabled, not just optimizations) and much easier to remember
and spell correctly.
This is part of simplifying how Clang drives LLVM to make it cleaner to
wire up to the new pass manager.
Differential Revision: https://reviews.llvm.org/D28047
llvm-svn: 290392
The parameter to ParsePICOpts was passed the effective triple, which was
then used in a few places, while the actual triple was used in others. This
was slightly confusing. Rename the parameter to make it more obvious.
llvm-svn: 290303
gtest is a widely-used unit-testing API. It provides macros for unit test
assertions:
ASSERT_TRUE(p != nullptr);
that expand into an if statement that constructs an object representing
the result of the assertion and returns when the assertion is false:
if (AssertionResult gtest_ar_ = AssertionResult(p != nullptr))
  ;
else
  return ...;
Unfortunately, the analyzer does not model the effect of the constructor
precisely because (1) the copy constructor implementation is missing from the
header (so it can't be inlined) and (2) the boolean-argument constructor
is constructed into a temporary (so the analyzer decides not to inline it since
it doesn't reliably call temporary destructors right now).
This results in false positives because the analyzer does not realize that the
assertion must hold along the non-return path.
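As a hedged illustration of the pattern (the helper is hypothetical), the
analyzer previously could not see that the early return prunes the null path:
  #include "gtest/gtest.h"
  #include <cstdlib>
  // Hypothetical helper that may return null.
  static int *maybeNull() { return std::rand() % 2 ? new int(0) : nullptr; }
  TEST(ApiModeling, AssertionConstrainsPointer) {
    int *p = maybeNull();
    ASSERT_TRUE(p != nullptr);
    *p = 1;  // a null-dereference report used to fire here despite the assertion
  }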
This commit addresses the false positives by explicitly modeling the effects
of the two un-inlined constructors on the AssertionResult state.
I've added a new package, "apiModeling", for these kinds of checkers that
model APIs but don't emit any diagnostics. I envision all the checkers in
this package always being on by default.
This addresses the false positives reported in PR30936.
Differential Revision: https://reviews.llvm.org/D27773
rdar://problem/22705813
llvm-svn: 290143
Summary:
This lets you build with one CUDA installation but use ptxas from
another install.
This is useful e.g. if you want to avoid bugs in an old ptxas without
actually upgrading wholesale to a newer CUDA version.
Reviewers: tra
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D27788
llvm-svn: 289847
Summary:
This implements execute-only support for ARM code generation, which
prevents the compiler from generating data accesses to code sections.
The following changes are involved:
* Add the CodeGen option "-arm-execute-only" to the ARM code generator.
* Add the clang flag "-mexecute-only" as well as the GCC-compatible
alias "-mpure-code" to enable this option.
* When enabled, literal pools are replaced with MOVW/MOVT instructions,
with VMOV used in addition for floating-point literals. As the MOVT
instruction is required, execute-only support is only available in
Thumb mode for targets supporting ARMv8-M baseline or Thumb2.
* Jump tables are placed in data sections when in execute-only mode.
* The execute-only text section is assigned section ID 0, and is
marked as unreadable with the SHF_ARM_PURECODE flag with symbol 'y'.
This also overrides selection of ELF sections for globals.
Reviewers: t.p.northover, rengolin
Subscribers: llvm-commits, aemerson
Differential Revision: https://reviews.llvm.org/D27450
llvm-svn: 289786
Most of the PowerPC64 code generation already creates PIC access. This
changes to a full PIC default, similar to what GCC is doing.
Overall, a monolithic clang binary shrinks by 600KB (about 1%). This can
be a slight regression for TLS access and will use the TOC more
aggressively instead of synthesizing immediates. It is expected to be
performance neutral.
Differential Revision: https://reviews.llvm.org/D26564
llvm-svn: 289744
This change allows setting the default linker used by the Clang
driver when configuring the build.
Differential Revision: https://reviews.llvm.org/D25263
llvm-svn: 289668
Collect the necessary input PCH files.
Do not try to validate the AST before copying it out because if the
crash is in this path, we won't be able to collect it. Instead, only
check whether it's a file containing an AST.
rdar://problem/27913709
llvm-svn: 289460
Fix the gcc-config code to support multilib gcc installs properly. This
solves two problems: -mx32 using the 64-bit gcc directory (due to matching
installation triple), and -m32 not respecting gcc-config at all (due to
mismatched installation triple).
In order to fix the former issue, split the multilib scan out of
Generic_GCC::GCCInstallationDetector::ScanLibDirForGCCTriple() (the code
is otherwise unchanged), and call it for each installation found via
gcc-config.
In order to fix the latter issue, split the gcc-config processing out of
Generic_GCC::GCCInstallationDetector::init() and repeat it for all
triples, including extra and biarch triples. The only change
in the gcc-config code itself is adding the call to multilib scan.
Convert the gentoo_linux_gcc_multi_version_tree test input to a multilib
x86_64+32+x32 install, and add appropriate tests to linux-header-search
and linux-ld.
Differential Revision: https://reviews.llvm.org/D26887
llvm-svn: 289436
I made the wrong assumption that execution would continue after an error Diag,
which led to unnecessarily complex code.
This patch aligns with the better implementation of ToolChain::GetRuntimeLibType.
Differential Revision: https://reviews.llvm.org/D25669
llvm-svn: 289422
This allows us to negate a preceding --cuda-gpu-arch=X.
This comes in handy when a user needs to override default
flags set for them by the build system.
Differential Revision: https://reviews.llvm.org/D27631
llvm-svn: 289287
The most common workflow with module reproducers involves deleting the
module cache before running the script. This happens because leftovers
from the crash are present in the cache and could trigger unrelated and
confusing errors, distracting from the initial reproduction intent.
Change this to point to a clean path but leave the leftovers untouched.
rdar://problem/28655070
llvm-svn: 289176
When -fmodules is on, the reproducer invocation currently leaves paths
for include-like flags as is. If the path is relative, the reproducer
doesn't know how to access that file during reproduction time because
the VFS cannot reason about relative paths.
Expand relative paths to absolute ones when creating the reproducer
command line. This allows, for example, the reproducer to work for
crashes while building clang with modules; this wasn't possible before
because building clang requires using relative inc dir from within the
build directory.
rdar://problem/28655070
llvm-svn: 289174
Currently -fstack-protector is on by default when using -ffreestanding.
Change the default behavior to have it off when using -ffreestanding.
rdar://problem/14089363
llvm-svn: 289005
Summary:
The MSVC toolchain and Clang driver combination currently uses a fairly complex
sequence of steps to determine the MS compatibility version to pass to cc1.
There is some oddness in this sequence currently, with some code which inspects
flags in the toolchain, and some code which inspects the triple and local
environment in the driver code.
This change is an attempt to consolidate most of this logic so that
Win32-specific code lives in MSVCToolChain.cpp. I'm not 100% happy with the
split, so any suggestions are welcome.
There are a few things you might want to watch for specifically:
- On all platforms, if MSVC compatibility flags are provided (and valid), use
those.
- The fallback sequence should be the same as before, but is now consolidated
into MSVCToolChain::getMSVCVersion:
- Otherwise, try to use the Triple.
- Otherwise, on Windows, check the executable.
- Otherwise, on Windows or with -fms-extensions, default to 18.
- Otherwise, we can't determine the version.
- MSVCToolChain::ComputeEffectiveClangTriple no longer calls the base
ToolChain::ComputeEffectiveClangTriple. The only thing it would change for
Windows is the architecture, which we don't care about for the compatibility
version.
- I'm not sure whether this is philosophically correct (but it should
be easy to add back to MSVCToolChain::getMSVCVersionFromTriple if not).
- Previously, Tools.cpp just called getTriple() anyway, so it doesn't look
like the effective triple was consistently being used before.
Reviewers: hans, compnerd, llvm-commits, rnk
Subscribers: amccarth
Differential Revision: https://reviews.llvm.org/D27477
llvm-svn: 288998
As a first step toward removing Objective-C garbage collection from
Clang, remove support from the driver. I'm hoping this will flush out
any expected bots/configurations/whatever that might rely on it.
I've left the options behind temporarily in -cc1 to keep tests passing.
I'll kill them off entirely in a follow up when I've had a chance to
update/delete the rest of Clang.
llvm-svn: 288872
When integrating compilation database output into existing build
systems, two approaches have dominated so far: ad-hoc implementation of the
JSON output rules, or using compiler wrappers. This patch adds a new
option "-MJ foo.json" which gives a slightly cleaned up compilation
record. The output is a fragment, i.e. you still need to add the array
markers, but it allows multiple files to be easily merged.
This way the only change in a build system is adding the option with
potentially a per-target output file and merging the files with
something like
(echo '['; cat *.o.json; echo ']') > compilation_database.json
or some additional filtering to remove the trailing comma for strict
JSON compliance.
Differential Revision: https://reviews.llvm.org/D27140
llvm-svn: 288821
This is to match the behavior of non-LTO;
when -fsave-optimization-record is passed and PGO is available we enable
the generation of hotness information in the optimization records.
Differential Revision: https://reviews.llvm.org/D27332
llvm-svn: 288520
Summary:
This is an improvement on rL288448 where address sanitization was listed
as supported for the CudaToolChain. Since the intent is for the
CudaToolChain not to reject any flags supported by the host compiler,
this patch switches to forwarding the CudaToolChain sanitizer support to
the host toolchain rather than explicitly whitelisting address
sanitization.
Thanks to hfinkel for this suggestion.
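A minimal sketch of the forwarding described above, assuming HostTC is the
host toolchain reference the CUDA toolchain already holds (not the exact
patch):
  SanitizerMask CudaToolChain::getSupportedSanitizers() const {
    // Report whatever the host toolchain supports instead of keeping a
    // device-side whitelist; device compilation simply ignores what it
    // cannot honor.
    return HostTC.getSupportedSanitizers();
  }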
Reviewers: jlebar
Subscribers: hfinkel, cfe-commits
Differential Revision: https://reviews.llvm.org/D27351
llvm-svn: 288512
This fixes a bug that was introduced in rL287285. The bug made it
illegal to pass -fsanitize=address during CUDA compilation because the
CudaToolChain class was switched from deriving from the Linux toolchain
class to deriving directly from the ToolChain toolchain class. When
CudaToolChain derived from Linux, it used Linux's getSupportedSanitizers
method, and that method allowed ASAN, but when it switched to deriving
directly from ToolChain, it inherited a getSupportedSanitizers method
that didn't allow for ASAN.
This patch fixes that bug by creating a getSupportedSanitizers method
for CudaToolChain that supports ASAN.
This patch also fixes the test that checks that -fsanitize=address is
passed correctly for CUDA builds. That test previously didn't notice if an
error message was emitted, and that's why it didn't catch this bug when
it was first introduced. With the fix from this patch, that test will
now catch any similar bug in the future.
llvm-svn: 288448
Summary: This patch adds a check and an error message to
gnutools::Linker::ConstructJob in case the architecture is not supported.
For most other operating systems, the error message is created in
lib/Basic/Targets.cpp:AllocateTarget, but when constructing the linker
arguments for the gnutools linker, a supported architecture is required.
Reviewers: rafael, joerg, echristo
Subscribers: mehdi_amini, joerg, dschuff, cfe-commits
Differential Revision: https://reviews.llvm.org/D27066
llvm-svn: 288327
Fix recognizing newer OpenSUSE versions that combine the two version
components into 'VERSION = x.y'. The check was written against an older
version that kept those two split as VERSION and PATCHLEVEL.
Differential Revision: https://reviews.llvm.org/D26850
llvm-svn: 288061
Refactor the Distro enum along with helper functions into a full-fledged
Distro class, inspired by llvm::Triple, and make it a public API.
The new class wraps the enum with necessary comparison operators, adding
the convenience Is*() methods and a constructor performing
the detection. The public API is needed to run the unit tests (D25869).
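A rough sketch of the resulting shape (enumerators and helpers are
illustrative, not the exact list):
  class Distro {
  public:
    enum DistroType { UnknownDistro, DebianJessie, Fedora, OpenSUSE, UbuntuXenial };
    Distro() : DistroVal(UnknownDistro) {}
    Distro(DistroType D) : DistroVal(D) {}
    // The real detecting constructor reads the /etc/*-release files through
    // the virtual file system handed in by the driver.
    bool operator==(Distro Other) const { return DistroVal == Other.DistroVal; }
    bool operator!=(Distro Other) const { return DistroVal != Other.DistroVal; }
    bool IsOpenSUSE() const { return DistroVal == OpenSUSE; }
    bool IsUbuntu() const { return DistroVal == UbuntuXenial; }
  private:
    DistroType DistroVal;
  };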
Differential Revision: https://reviews.llvm.org/D25949
llvm-svn: 288060
https://reviews.llvm.org/D25932 made it so that clang always checks if
libLTO.dylib is present on disk, even if -flto is not being used. The
motivation for that change was that if a dependency happens to contain bitcode,
ld64 will try to load libLTO without -flto explicitly being enabled. However,
the change had the undesirable side effect of warning if libLTO.dylib doesn't
exist even if it isn't needed.
Change things so that -lto_library is always passed, independent of whether it
exists or not. ld64 only looks at this flag if it uses LTO. If the dylib
exists, all is well. If it doesn't, and LTO is not being used, all is well too.
If ld64 does end up using LTO and the dylib does not exist, ld64 will print
something like
ld: could not process llvm bitcode object file, because foo/libLTO.dylib could not be loaded file 'test.o' for architecture x86_64
https://reviews.llvm.org/D26984
llvm-svn: 287685
Summary:
Compiling CUDA device code requires us to know the host toolchain,
because CUDA device-side compiles pull in e.g. host headers.
When we only supported Linux compilation, this worked because
CudaToolChain, which is responsible for device-side CUDA compilation,
inherited from the Linux toolchain. But in order to support MacOS,
CudaToolChain needs to take a HostToolChain pointer.
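Roughly, the dependency looks like this (a sketch, not the exact
declaration):
  class CudaToolChain : public ToolChain {
  public:
    CudaToolChain(const Driver &D, const llvm::Triple &Triple,
                  const ToolChain &HostTC, const llvm::opt::ArgList &Args);
  private:
    const ToolChain &HostTC;  // consulted for host headers, sanitizers, etc.
  };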
Because a CUDA toolchain now requires a host TC, we no longer will
create a CUDA toolchain from Driver::getToolChain -- you have to go
through CreateOffloadingDeviceToolChains. I am *pretty* sure this is
correct, and that previously any attempt to create a CUDA toolchain
through getToolChain() would eventually have resulted in us throwing
"error: unsupported use of NVPTX for host compilation".
In any case hacking getToolChain to create a CUDA+host toolchain would
be wrong, because a Driver can be reused for multiple compilations,
potentially with different host TCs, and getToolChain will cache the
result, causing us to potentially use a stale host TC.
So that's the main change in this patch.
In addition, we have to pull CudaInstallationDetector out of Generic_GCC
and into a top-level class. It's now used by the Generic_GCC and MachO
toolchains.
Reviewers: tra
Subscribers: rryan, hfinkel, sfantao
Differential Revision: https://reviews.llvm.org/D26774
llvm-svn: 287285
In addition to the preprocessed sources file and reproducer script, also
point to the .crash diagnostic files on Darwin. Example:
PLEASE ATTACH THE FOLLOWING FILES TO THE BUG REPORT:
Preprocessed source(s) and associated run script(s) are located at:
clang-4.0: note: diagnostic msg: /var/folders/bk/1hj20g8j4xvdj5gd25ywhd3m0000gq/T/RegAllocGreedy-238f28.cpp
clang-4.0: note: diagnostic msg: /var/folders/bk/1hj20g8j4xvdj5gd25ywhd3m0000gq/T/RegAllocGreedy-238f28.cache
clang-4.0: note: diagnostic msg: /var/folders/bk/1hj20g8j4xvdj5gd25ywhd3m0000gq/T/RegAllocGreedy-238f28.sh
clang-4.0: note: diagnostic msg: /var/folders/bk/1hj20g8j4xvdj5gd25ywhd3m0000gq/T/RegAllocGreedy-238f28.crash
When no match is found for the .crash, point the user to a directory
where those can be found. Example:
clang-4.0: note: diagnostic msg: Crash backtrace is located in
clang-4.0: note: diagnostic msg: /Users/bruno/Library/Logs/DiagnosticReports/clang-4.0_<YYYY-MM-DD-HHMMSS>_<hostname>.crash
clang-4.0: note: diagnostic msg: (choose the .crash file that corresponds to your crash)
rdar://problem/27286266
llvm-svn: 287262
Summary:
-fembed-bitcode implies passing -bitcode_bundle to ld64, but it is not
correctly passed when using LTO. LTO is a special case of -fembed-bitcode
which doesn't require embedding the bitcode in a special section in the
object file, but it does require the linker to save it as part of the final
executable.
rdar://problem/29274226
Reviewers: mehdi_amini
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D26690
llvm-svn: 287084