This is needed so lld-link can find clang_rt.profile when self-hosting
on Windows with PGO. When clang-cl is used as the linker it knows to add the
library, but a self-host build that sets -DCMAKE_LINKER=<...>/lld-link.exe doesn't.
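For reference, a self-host PGO configure that runs into this might look like
(paths and the instrumentation knob are illustrative; LLVM_BUILD_INSTRUMENTED
is the usual CMake option for an instrumented stage):
  cmake -G Ninja ^
    -DCMAKE_C_COMPILER=<stage1>/clang-cl.exe ^
    -DCMAKE_CXX_COMPILER=<stage1>/clang-cl.exe ^
    -DCMAKE_LINKER=<stage1>/lld-link.exe ^
    -DLLVM_BUILD_INSTRUMENTED=IR ^
    ..\llvm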
Differential Revision: https://reviews.llvm.org/D61742
llvm-svn: 360674
Summary: This patch fixes an issue in which the __clang_cuda_cmath.h header was being included even when neither the cmath nor the math.h header was included.
Reviewers: jdoerfert, ABataev, hfinkel, caomhin, tra
Reviewed By: tra
Subscribers: tra, mgorny, guansong, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D61765
llvm-svn: 360626
Summary:
In this patch we propose a temporary solution for resolving math functions on the NVPTX toolchain, temporary until OpenMP `variant` is supported by Clang.
We intercept the inclusion of the math.h and cmath headers and, in the OpenMP-NVPTX case, reuse CUDA's math function resolution mechanism.
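A sketch of the kind of code this makes work (names are illustrative; the
offload triple is the usual NVPTX one):
  #include <cmath>
  void scale(double *x, int n) {
  #pragma omp target teams distribute parallel for map(tofrom: x[0:n])
    for (int i = 0; i < n; ++i)
      x[i] = std::sqrt(x[i]);
  }
built with something like:
  clang++ -O2 -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda scale.cpp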
Authors:
@gtbercea
@jdoerfert
Reviewers: hfinkel, caomhin, ABataev, tra
Reviewed By: hfinkel, ABataev, tra
Subscribers: JDevlieghere, mgorny, guansong, cfe-commits, jdoerfert
Tags: #clang
Differential Revision: https://reviews.llvm.org/D61399
llvm-svn: 360265
This commit appears to be breaking stage-2 builds on GreenDragon. The
OpenMP wrappers for cmath and math.h are copied into the root of the
resource directory and cause a cyclic dependency in module 'Darwin':
Darwin -> std -> Darwin. This blows up when CMake is testing for modules
support and breaks all stage 2 module builds, including the ThinLTO bot
and all LLDB bots.
CMake Error at cmake/modules/HandleLLVMOptions.cmake:497 (message):
LLVM_ENABLE_MODULES is not supported by this compiler
llvm-svn: 360192
Summary:
In this patch we propose a temporary solution for resolving math functions on the NVPTX toolchain, temporary until OpenMP `variant` is supported by Clang.
We intercept the inclusion of the math.h and cmath headers and, in the OpenMP-NVPTX case, reuse CUDA's math function resolution mechanism.
Authors:
@gtbercea
@jdoerfert
Reviewers: hfinkel, caomhin, ABataev, tra
Reviewed By: hfinkel, ABataev, tra
Subscribers: mgorny, guansong, cfe-commits, jdoerfert
Tags: #clang
Differential Revision: https://reviews.llvm.org/D61399
llvm-svn: 360063
Summary:
When -gsplit-dwarf is used together with other -g options, in most cases
the computed debug info level is decided by the last -g option, with one
special case (see below). This patch drops that special case and thus
makes the behavior easier to reason about:
// If a lower debug level -g comes after -gsplit-dwarf, in some cases
// -gsplit-dwarf is cancelled.
-gsplit-dwarf -g0 => 0
-gsplit-dwarf -gline-directives-only => DebugDirectivesOnly
-gsplit-dwarf -gmlt -fsplit-dwarf-inlining => 1
-gsplit-dwarf -gmlt -fno-split-dwarf-inlining => 1 + split
// If -gsplit-dwarf comes after -g options, with this patch, the net
// effect is 2 + split for all combinations
-g0 -gsplit-dwarf => 2 + split
-gline-directives-only -gsplit-dwarf => 2 + split
-gmlt -gsplit-dwarf -fsplit-dwarf-inlining => 2 + split
-gmlt -gsplit-dwarf -fno-split-dwarf-inlining => 1 + split (before) 2 + split (after)
The last case has been changed. In general, if the user intends to lower
debug info level, place that -g option after -gsplit-dwarf.
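For example, to get only line tables together with -gsplit-dwarf, put the
lower -g option last (a sketch; whether a .dwo is actually produced further
depends on -f[no-]split-dwarf-inlining, per the table above):
  clang -c -gsplit-dwarf -gmlt foo.c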
Some context:
In gcc, the last of -gsplit-dwarf -g0 -g1 -g2 -g3 -ggdb[0-3] -gdwarf-*
... decides the debug info level (-gsplit-dwarf and -gdwarf-* count as level 2).
It is a bit unfortunate that -gsplit-dwarf and -gdwarf-* participate in
the level computation, but that is the status quo.
Reviewers: dblaikie, echristo, probinson
Reviewed By: dblaikie, probinson
Subscribers: probinson, aprantl, jdoerfert, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D59923
llvm-svn: 358544
LLDB can't currently handle Clang's default (limited/no-standalone) DWARF,
so platforms that default to LLDB (Darwin), or anyone else manually
requesting LLDB tuning, should also get standalone DWARF.
That doesn't mean a user can't still override this regardless of the tuning.
They may explicitly enable standalone DWARF for other reasons, such as
building only half of their application with debug info enabled and half
without, or because they tune for GDB but want the result to remain usable
under LLDB too (this is the default on FreeBSD). They may also explicitly
disable it, for example when testing LLDB fixes/improvements that handle the
no-standalone mode, or when building C code, which doesn't have the
LLDB<>no-standalone conflict.
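Concretely (these flags already exist; only their interaction with the tuning
default is new here):
  clang -g -glldb foo.c                        # LLDB tuning: standalone DWARF by default
  clang -g -glldb -fno-standalone-debug foo.c  # explicitly opt back out
  clang -g -ggdb -fstandalone-debug foo.c      # GDB tuning, but keep it usable under LLDB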
llvm-svn: 358464
This change adds hierarchical "time trace" profiling blocks that can be visualized in Chrome, in a "flame chart" style. Each profiling block can have a "detail" string that for example indicates the file being processed, template name being instantiated, function being optimized etc.
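A minimal way to try it (assuming the flag introduced here is -ftime-trace and
the trace is written next to the output file):
  clang -ftime-trace -c foo.cpp   # produces foo.json
  # open the JSON in Chrome's chrome://tracing (or any Trace Event viewer)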
This is taken from GitHub PR: https://github.com/aras-p/llvm-project-20170507/pull/2
Patch by Aras Pranckevičius.
Differential Revision: https://reviews.llvm.org/D58675
llvm-svn: 357340
In gcc, -gsplit-dwarf is handled in gcc/gcc.c as a spec
(ASM_FINAL_SPEC): objcopy --extract-dwo + objcopy --strip-dwo. In
gcc/opts.c, -gsplit-dwarf has the same semantics as -g. Apart from the
availability of the external command 'objcopy', nothing precludes the
feature from working on other ELF OSes. llvm doesn't use objcopy, so it
doesn't have to exclude other OSes.
llvm-svn: 357150
The RISC-V assembler needs the target ABI because it defines a flag of the ELF
file, as described in [1].
Make clang (the driver) pass the target ABI to -cc1as in exactly the same
way it does for -cc1.
Currently -cc1as knows about -target-abi but is not handling it. Handle it and
pass it to the MC layer via MCTargetOptions.
[1] https://github.com/riscv/riscv-elf-psabi-doc/blob/master/riscv-elf.md#file-header
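For example (the ABI name is illustrative):
  clang --target=riscv64-unknown-elf -mabi=lp64 -c foo.s
should now result in -cc1as receiving '-target-abi lp64', which the MC layer
uses to set the ABI-related e_flags bits described in [1].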
Differential Revision: https://reviews.llvm.org/D59298
llvm-svn: 356981
-malign-double is currently only implemented in the -cc1 interface. But it's declared in Options.td, so it is a driver option too. Yet if you try to use it with the driver, you'll get a message about the option being unused.
This patch teaches the driver to pass the option through to cc1 so it won't be unused. Options.td says the option is x86-only, but I didn't see any x86-specific code in its implementation in cc1, so I'm not sure whether the documentation is wrong or whether I should only pass this option through the driver on x86 targets.
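Illustration of the change (the diagnostic wording is approximate):
  clang -malign-double -c foo.c
  # before: warning: argument unused during compilation: '-malign-double'
  # after:  the option is forwarded to -cc1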
Differential Revision: https://reviews.llvm.org/D59624
llvm-svn: 356706
Currently we have -Rpass for filtering the remarks that are displayed as
diagnostics, but when using -fsave-optimization-record, there is no way
to filter the remarks while generating them.
This adds support for filtering remarks by passes using a regex.
Ex: `clang -fsave-optimization-record -foptimization-record-passes=inline`
will only emit the remarks coming from the pass `inline`.
This adds:
* `-fsave-optimization-record` to the driver
* `-opt-record-passes` to cc1
* `-lto-pass-remarks-filter` to the LTOCodeGenerator
* `--opt-remarks-passes` to lld
* `-pass-remarks-filter` to llc, opt, llvm-lto, llvm-lto2
* `-opt-remarks-passes` to gold-plugin
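For example, to keep only inlining- and loop-unrolling-related remarks in the
saved record (the regex is illustrative):
  clang -c -O2 -fsave-optimization-record -foptimization-record-passes='inline|.*unroll' foo.c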
Differential Revision: https://reviews.llvm.org/D59268
Original llvm-svn: 355964
llvm-svn: 355984
Currently we have -Rpass for filtering the remarks that are displayed as
diagnostics, but when using -fsave-optimization-record, there is no way
to filter the remarks while generating them.
This adds support for filtering remarks by passes using a regex.
Ex: `clang -fsave-optimization-record -foptimization-record-passes=inline`
will only emit the remarks coming from the pass `inline`.
This adds:
* `-fsave-optimization-record` to the driver
* `-opt-record-passes` to cc1
* `-lto-pass-remarks-filter` to the LTOCodeGenerator
* `--opt-remarks-passes` to lld
* `-pass-remarks-filter` to llc, opt, llvm-lto, llvm-lto2
* `-opt-remarks-passes` to gold-plugin
Differential Revision: https://reviews.llvm.org/D59268
llvm-svn: 355964
When -forder-file-instrumentation is on, we pass the LLVM flag that enables the order file instrumentation pass.
https://reviews.llvm.org/D58751
llvm-svn: 355333
Part 1 of CSPGO change in Clang. This includes changes in clang options
and calls to llvm PassManager. Tests will be committed in part2.
This change needs the PassManager change in llvm.
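For reference, the intended workflow looks roughly like this (flag spellings
assumed from this patch; file names illustrative):
  clang -O2 -fprofile-generate=prof foo.c -o foo        # ordinary IR PGO instrumentation
  ./foo; llvm-profdata merge -o base.profdata prof
  clang -O2 -fprofile-use=base.profdata -fcs-profile-generate=csprof foo.c -o foo
  ./foo; llvm-profdata merge -o cs.profdata csprof base.profdata
  clang -O2 -fprofile-use=cs.profdata foo.c -o foo      # final build uses both profiles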
Differential Revision: https://reviews.llvm.org/D54176
llvm-svn: 355331
Summary:
In the clang UI, replaces -mthread-model posix with -matomics as the
source of truth on threading. In the backend, replaces
-thread-model=posix with the atomics target feature, which is now
collected on the WebAssemblyTargetMachine along with all other used
features. These collected features will also be used to emit the
target features section in the future.
The default configuration for the backend is thread-model=posix and no
atomics, which was previously an invalid configuration. This change
makes the default valid because the thread model is ignored.
A side effect of this change is that objects are never emitted with
passive segments. It will instead be up to the linker to decide
whether sections should be active or passive based on whether atomics
are used in the final link.
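Under the new scheme, an invocation like (a sketch; triple spelled out for
clarity):
  clang --target=wasm32-unknown-unknown -matomics -c foo.c
is what marks an object as using atomics; -mthread-model posix on its own no
longer changes codegen, and active vs. passive segments are decided by the
linker at the final link.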
Reviewers: aheejin, sbc100, dschuff
Subscribers: mehdi_amini, jgravelle-google, hiraditya, sunfish, steven_wu, dexonsmith, rupprecht, jfb, jdoerfert, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D58742
llvm-svn: 355112
A faster way to reduce the values in teams reductions was found; the
codegen is updated to use this faster algorithm and the new runtime functions.
llvm-svn: 354479
This adds ACLE-defined macros to test for code being compiled in the ROPI and
RWPI position-independence modes.
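Assuming the ACLE spellings __ARM_ROPI and __ARM_RWPI (the macro names are my
assumption; the patch only says "ACLE-defined macros"), code can test the
modes like so:
  #if defined(__ARM_ROPI)
  // read-only data is accessed position-independently
  #endif
  #if defined(__ARM_RWPI)
  // read-write data is accessed relative to the static base register
  #endif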
Differential revision: https://reviews.llvm.org/D23610
llvm-svn: 354265
Summary:
There have been three options related to threads and users had to set
all three of them separately to get the correct compilation results.
This makes sure the relationship between the options makes sense and fills
in the necessary options for users when only some of them are specified.
This does:
- Remove `-matomics`; this option alone does not enable anything, so it is
removed to avoid confusing users.
- `-mthread-model posix` sets `-target-feature +atomics`
- `-pthread` sets both `-target-feature +atomics` and
`-mthread-model posix`
Also errors out when explicitly given options conflict, such as when
`-pthread` is given with `-mthread-model single`.
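For example (a sketch):
  clang --target=wasm32-unknown-unknown -pthread -c foo.c
  # implies -mthread-model posix and -target-feature +atomics
  clang --target=wasm32-unknown-unknown -pthread -mthread-model single -c foo.c
  # rejected with an error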
Reviewers: dschuff, sbc100, tlively, sunfish
Subscribers: jgravelle-google, jfb, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D57874
llvm-svn: 353761
This is suggested by section 3.3.9 of the MSP430 EABI document.
We still allow the user to manually enable the frame pointer; the GCC toolchain behaves the same way.
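Re-enabling it presumably goes through the generic flag, e.g.:
  clang --target=msp430-elf -fno-omit-frame-pointer -c foo.c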
Patch by Dmitry Mikushev!
Differential Revision: https://reviews.llvm.org/D56925
llvm-svn: 353212
Summary:
This adds support for new-PM plugin loading to clang. The option
`-fpass-plugin=` may be used to specify a dynamic shared object file
that adheres to the PassPlugin API.
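A minimal sketch of such a plugin (an illustration under assumptions, not the
patch's test plugin; the entry point and PassPlugin types are LLVM's, but the
exact extension-point callback signature has changed across LLVM versions, so
treat the lambda below as indicative):
  // HelloPlugin.cpp -- hypothetical plugin; build as a shared object, e.g.
  //   clang++ -fPIC -shared HelloPlugin.cpp -o HelloPlugin.so $(llvm-config --cxxflags)
  #include "llvm/IR/PassManager.h"
  #include "llvm/Passes/OptimizationLevel.h"
  #include "llvm/Passes/PassBuilder.h"
  #include "llvm/Passes/PassPlugin.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;
  namespace {
  // Trivial function pass: print the name of every function it visits.
  struct HelloPass : PassInfoMixin<HelloPass> {
    PreservedAnalyses run(Function &F, FunctionAnalysisManager &) {
      errs() << "hello from " << F.getName() << "\n";
      return PreservedAnalyses::all();
    }
  };
  } // namespace
  // Entry point looked up when the shared object is loaded via -fpass-plugin.
  extern "C" PassPluginLibraryInfo LLVM_ATTRIBUTE_WEAK llvmGetPassPluginInfo() {
    return {LLVM_PLUGIN_API_VERSION, "HelloPlugin", "0.1", [](PassBuilder &PB) {
              // Register at the start of the default pipelines. Around the
              // time of this commit the callback took only a
              // ModulePassManager&; newer LLVM also passes the opt level.
              PB.registerPipelineStartEPCallback(
                  [](ModulePassManager &MPM, OptimizationLevel) {
                    MPM.addPass(createModuleToFunctionPassAdaptor(HelloPass()));
                  });
            }};
  }
It would then be loaded with something like:
  clang -O1 -fpass-plugin=./HelloPlugin.so -c foo.c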
Tested: created simple plugin that registers an EP callback; with optimization level > 0, the pass is run as expected.
Committed on behalf of Marco Elver
Differential Revision: https://reviews.llvm.org/D56935
llvm-svn: 352972
..and use it to control the parts of CUDA compilation
that depend on the specific version of the CUDA SDK.
This patch has a placeholder for 'new launch API' support,
which is in a separate patch. The list will be further
extended in the upcoming patch to support CUDA-10.1.
Differential Revision: https://reviews.llvm.org/D57487
llvm-svn: 352798
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
These two options enable/disable emission of R_{MICRO}MIPS_JALR fixups along
with PIC calls. The linker may then try to turn PIC calls into direct jumps.
By default, these fixups are emitted by the backend; use
'-mno-relax-pic-calls' to omit them.
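For instance (triple illustrative):
  clang --target=mips-linux-gnu -fPIC -mno-relax-pic-calls -c foo.c
omits the R_MIPS_JALR / R_MICROMIPS_JALR relocations that would otherwise let
the linker turn PIC calls into direct jumps.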
Differential revision: https://reviews.llvm.org/D56878
llvm-svn: 351579
This is an initial implementation of the MSP430 toolchain, including:
- -mmcu option support
- -mhwmult option support
- -integrated-as by default
The toolchain uses msp430-elf-as as a linker and supports msp430-gcc toolchain tree.
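A typical invocation against a GCC-provided msp430 tree might look like
(the MCU name, hwmult value, and paths are illustrative):
  clang --target=msp430-elf --gcc-toolchain=/opt/msp430-gcc -mmcu=msp430g2553 -mhwmult=auto main.c -o main.elf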
Patch by Kristina Bessonova!
Differential Revision: https://reviews.llvm.org/D56658
llvm-svn: 351228
Summary:
Adds a new -f[no]split-lto-unit flag that is disabled by default to
control module splitting during ThinLTO. It is automatically enabled
for -fsanitize=cfi and -fwhole-program-vtables.
The new EnableSplitLTOUnit codegen flag is passed down to llvm
via a new module flag of the same name.
Depends on D53890.
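For instance (a sketch):
  clang -flto=thin -fsplit-lto-unit -c foo.cpp                     # force splitting
  clang -flto=thin -fvisibility=hidden -fsanitize=cfi -c foo.cpp   # splitting enabled automatically
  clang -flto=thin -c foo.cpp                                      # default: no splitting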
Reviewers: pcc
Subscribers: ormris, mehdi_amini, inglorion, eraman, steven_wu, dexonsmith, cfe-commits, llvm-commits
Differential Revision: https://reviews.llvm.org/D53891
llvm-svn: 350949
Summary: Introduce a compiler flag for cases when the user knows that the collapsed loop counter can be safely represented using at most 32 bits. This will prevent the emission of expensive mathematical operations (such as the div operation) on the iteration variable using 64 bits where 32 bit operations are sufficient.
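The kind of loop this is about (a sketch; N, M, and body are illustrative):
  #pragma omp target teams distribute parallel for collapse(2)
  for (int i = 0; i < N; ++i)
    for (int j = 0; j < M; ++j)
      body(i, j);
The collapsed iteration space has N*M iterations; by default the combined
iteration variable and the div/mod that recover i and j are computed in 64
bits. When the user can guarantee N*M fits in 32 bits, the new flag lets the
compiler use the cheaper 32-bit operations on the device.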
Reviewers: ABataev, caomhin
Reviewed By: ABataev
Subscribers: hfinkel, kkwli0, guansong, cfe-commits
Differential Revision: https://reviews.llvm.org/D55928
llvm-svn: 350758
Gentoo supports combining clang toolchain with GNU binutils, and many
users actually do that. As -faddrsig is not supported by GNU strip,
this results in a lot of warnings. Disable it by default and let users
enable it explicitly if they want it, with the intent of reevaluating
when the underlying feature becomes standardized.
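Users who want the feature back can opt in per invocation:
  clang -faddrsig -c foo.c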
See also: https://bugs.gentoo.org/667854
Differential Revision: https://reviews.llvm.org/D56047
llvm-svn: 350028
If an -analyzer-config option is passed through -Xanalyzer, it is not found,
because the driver only looks for -Xclang.
Additionally, don't emit -analyzer-config-compatibility-mode for *every*
-analyzer-config flag we encounter; one is enough.
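Both spellings should now behave the same way (the config key is just an
example):
  clang --analyze -Xclang    -analyzer-config -Xclang    eagerly-assume=false foo.c
  clang --analyze -Xanalyzer -analyzer-config -Xanalyzer eagerly-assume=false foo.c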
https://reviews.llvm.org/D55823
rdar://problem/46504165
llvm-svn: 349866
Since r348038 we emit an error every time an -analyzer-config option is not
found. The driver, however, suppresses this error with another flag,
-analyzer-config-compatibility-mode, so backwards compatibility is maintained,
while analyzer developers still enjoy the new typo-free experience.
The backwards compatibility turns out to be still broken when the -analyze
action is not specified; it is still possible to specify -analyzer-config
in that case. This should be fixed now.
Patch by Kristóf Umann!
Differential Revision: https://reviews.llvm.org/D55823
rdar://problem/46504165
llvm-svn: 349824
Replace multiple comparisons of getOS() value with FreeBSD, NetBSD,
OpenBSD and DragonFly with matching isOS*BSD() methods. This should
improve the consistency of coding style without changing the behavior.
Direct getOS() comparisons were left wherever they are used in a switch or
switch-like context.
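The shape of the cleanup, sketched on a hypothetical check (the isOS*BSD()
helpers exist on llvm::Triple; the surrounding code is illustrative):
  // before
  if (T.getOS() == llvm::Triple::FreeBSD || T.getOS() == llvm::Triple::NetBSD ||
      T.getOS() == llvm::Triple::OpenBSD)
    addOSSpecificFlags(CmdArgs);   // hypothetical helper
  // after
  if (T.isOSFreeBSD() || T.isOSNetBSD() || T.isOSOpenBSD())
    addOSSpecificFlags(CmdArgs);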
Differential Revision: https://reviews.llvm.org/D55916
llvm-svn: 349752
Avoid passing -faddrsig by default on NetBSD. This platform is still
using old GNU binutils that crashes on executables containing those
sections.
Differential Revision: https://reviews.llvm.org/D55828
llvm-svn: 349647
NFC for targets other than PS4.
Respect -nostdlib and -nodefaultlibs when enabling asan or ubsan.
Differential Revision: https://reviews.llvm.org/D55712
llvm-svn: 349508
Summary:
Add an option to initialize automatic variables with either a pattern or with
zeroes. The default is still that automatic variables are uninitialized. Also
add attributes to request uninitialized on a per-variable basis, mainly to disable
initialization of large stack arrays when deemed too expensive.
This isn't meant to change the semantics of C and C++. Rather, it's meant to be
a last-resort when programmers inadvertently have some undefined behavior in
their code. This patch aims to make undefined behavior hurt less, which
security-minded people will be very happy about. Notably, this means that
there's no inadvertent information leak when:
- The compiler re-uses stack slots, and a value is used uninitialized.
- The compiler re-uses a register, and a value is used uninitialized.
- Stack structs / arrays / unions with padding are copied.
This patch only addresses stack and register information leaks. There are many
more infoleaks that we could address, and much more undefined behavior that
could be tamed. Let's keep this patch focused, and I'm happy to address related
issues elsewhere.
To keep the patch simple, only some `undef` is removed for now, see
`replaceUndef`. The padding-related infoleaks are therefore not all gone yet.
This will be addressed in a follow-up, mainly because addressing padding-related
leaks should be a stand-alone option which is implied by variable
initialization.
There are three options when it comes to automatic variable initialization:
0. Uninitialized
This is C and C++'s default. It's not changing. Depending on code
generation, a programmer who runs into undefined behavior by using an
uninitialized automatic variable may observe any previous value (including
program secrets), or any value which the compiler saw fit to materialize on
the stack or in a register (this could be to synthesize an immediate, to
refer to code or data locations, to generate cookies, etc).
1. Pattern initialization
This is the recommended initialization approach. Pattern initialization's
goal is to initialize automatic variables with values which will likely
transform logic bugs into crashes down the line, are easily recognizable in
a crash dump, without being values which programmers can rely on for useful
program semantics. At the same time, pattern initialization tries to
generate code which will optimize well. You'll find the following details in
`patternFor`:
- Integers are initialized with repeated 0xAA bytes (infinite scream).
- Vectors of integers are also initialized with infinite scream.
- Pointers are initialized with infinite scream on 64-bit platforms because
it's an unmappable pointer value on architectures I'm aware of. Pointers
are initialized to 0x000000AA (small scream) on 32-bit platforms because
32-bit platforms don't consistently offer unmappable pages. When they do
it's usually the zero page. As people try this out, I expect that we'll
want to allow different platforms to customize this, let's do so later.
- Vectors of pointers are initialized the same way pointers are.
- Floating point values and vectors are initialized with a negative quiet
NaN with repeated 0xFF payload (e.g. 0xffffffff and 0xffffffffffffffff).
NaNs are nice (here, anyway) because they propagate on arithmetic, making
it more likely that entire computations become NaN when a single
uninitialized value sneaks in.
- Arrays are initialized to their homogeneous elements' initialization
value, repeated. Stack-based Variable-Length Arrays (VLAs) are
runtime-initialized to the allocated size (no effort is made for negative
size, but zero-sized VLAs are untouched even if technically undefined).
- Structs are initialized to their heterogeneous element's initialization
values. Zero-size structs are initialized as 0xAA since they're allocated
a single byte.
- Unions are initialized using the initialization for the largest member of
the union.
Expect the values used for pattern initialization to change over time, as we
refine heuristics (both for performance and security). The goal is truly to
avoid injecting semantics into undefined behavior, and we should be
comfortable changing these values when there's a worthwhile point in doing
so.
Why so much infinite scream? Repeated byte patterns tend to be easy to
synthesize on most architectures, and otherwise memset is usually very
efficient. For values which aren't entirely repeated byte patterns, LLVM
will often generate code which does memset + a few stores.
2. Zero initialization
Zero initialize all values. This has the unfortunate side-effect of
providing semantics to otherwise undefined behavior; programs might
therefore start to rely on this behavior, and that's sad. However, some
programmers believe that pattern initialization is too expensive for them,
and data might show that they're right. The only way to make these
programmers wrong is to offer zero-initialization as an option, figure out
where they are right, and optimize the compiler into submission. Until the
compiler provides acceptable performance for all security-minded code, zero
initialization is a useful (if blunt) tool.
I've been asked for a fourth initialization option: user-provided byte value.
This might be useful, and can easily be added later.
Why is an out-of-band initialization mechanism desired? We could instead use
-Wuninitialized! Indeed we could, but then we're forcing the programmer to
provide semantics for something which doesn't actually have any (it's
uninitialized!). It's then unclear whether `int derp = 0;` lends meaning to `0`,
or whether it's just there to shut that warning up. It's also way easier to use
a compiler flag than it is to manually and intelligently initialize all values
in a program.
Why not just rely on static analysis? Because it cannot reason about all dynamic
code paths effectively, and it has false positives. It's a great tool, could get
even better, but it's simply incapable of catching all uses of uninitialized
values.
Why not just rely on memory sanitizer? Because it's not universally available,
has a 3x performance cost, and shouldn't be deployed in production. Again, it's
a great tool, it'll find the dynamic uses of uninitialized variables that your
test coverage hits, but it won't find the ones that you encounter in production.
What's the performance like? Not too bad! Previous publications [0] have cited
2.7 to 4.5% averages. We've committed a few patches over the last few months to
address specific regressions, both in code size and performance. In all cases,
the optimizations are generally useful, but variable initialization benefits
from them a lot more than regular code does. We've got a handful of other
optimizations in mind, but the code is in good enough shape and has found enough
latent issues that it's a good time to get the change reviewed, checked in, and
have others kick the tires. We'll continue reducing overheads as we try this out
on diverse codebases.
Is it a good idea? Security-minded folks think so, and apparently so does the
Microsoft Visual Studio team [1] who say "Between 2017 and mid 2018, this
feature would have killed 49 MSRC cases that involved uninitialized struct data
leaking across a trust boundary. It would have also mitigated a number of bugs
involving uninitialized struct data being used directly.". They seem to use pure
zero initialization, and claim to have taken the overheads down to within noise.
Don't just trust Microsoft though, here's another relevant person asking for
this [2]. It's been proposed for GCC [3] and LLVM [4] before.
What are the caveats? A few!
- Variables declared in unreachable code, and used later, aren't initialized.
Think goto, Duff's device, and other objectionable uses of switch. This should
instead be a hard-error in any serious codebase.
- Volatile stack variables are still weird. That's pre-existing, it's really
the language's fault and this patch keeps it weird. We should deprecate
volatile [5].
- As noted above, padding isn't fully handled yet.
I don't think these caveats make the patch untenable because they can be
addressed separately.
Should this be on by default? Maybe, in some circumstances. It's a conversation
we can have when we've tried it out sufficiently, and we're confident that we've
eliminated enough of the overheads that most codebases would want to opt-in.
Let's keep our precious undefined behavior until that point in time.
How do I use it:
1. On the command-line:
-ftrivial-auto-var-init=uninitialized (the default)
-ftrivial-auto-var-init=pattern
-ftrivial-auto-var-init=zero -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang
2. Using an attribute:
int dont_initialize_me __attribute((uninitialized));
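A sketch of what this means for a few locals (the concrete bit patterns come
from the description above; exact codegen may differ):
  void f(void) {
    int i;                                    // =pattern: 0xAAAAAAAA, =zero: 0
    float x;                                  // =pattern: negative quiet NaN with 0xFF payload
    char buf[16];                             // =pattern: every byte 0xAA
    int secret __attribute((uninitialized));  // explicitly opted out, stays undefined
  }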
[0]: https://users.elis.ugent.be/~jsartor/researchDocs/OOPSLA2011Zero-submit.pdf
[1]: https://twitter.com/JosephBialek/status/1062774315098112001
[2]: https://outflux.net/slides/2018/lss/danger.pdf
[3]: https://gcc.gnu.org/ml/gcc-patches/2014-06/msg00615.html
[4]: 776a0955ef
[5]: http://wg21.link/p1152
I've also posted an RFC to cfe-dev: http://lists.llvm.org/pipermail/cfe-dev/2018-November/060172.html
<rdar://problem/39131435>
Reviewers: pcc, kcc, rsmith
Subscribers: JDevlieghere, jkorous, dexonsmith, cfe-commits
Differential Revision: https://reviews.llvm.org/D54604
llvm-svn: 349442
is not specified
The -target option allows the user to specify the build target using LLVM
triple. The triple includes the arch, and so the -arch option is redundant.
This should work just as well without the -arch. However, the driver has a bug
in which it doesn't target the "Cyclone" CPU for darwin if -target is used
without -arch. This commit fixes this issue.
rdar://46743182
Differential Revision: https://reviews.llvm.org/D55731
llvm-svn: 349382
Implement options in clang to enable recording the driver command-line
in an ELF section.
Implement a new special named metadata, llvm.commandline, to support
frontends embedding their command-line options in IR/ASM/ELF.
This differs from the GCC implementation in some key ways:
* In GCC there is only one command-line possible per compilation-unit,
in LLVM it mirrors llvm.ident and multiple are allowed.
* In GCC individual options are separated by NULL bytes, in LLVM entire
command-lines are separated by NULL bytes. The advantage of the GCC
approach is to clearly delineate options in the face of embedded
spaces. The advantage of the LLVM approach is to support merging
multiple command-lines unambiguously, while handling embedded spaces
with escaping.
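A sketch of the resulting module-level metadata (the metadata name comes from
this patch; the clang flag spelling, -frecord-command-line, is my assumption):
  !llvm.commandline = !{!0, !1}
  !0 = !{!"clang -O2 -frecord-command-line -c a.c"}
  !1 = !{!"clang -O2 -frecord-command-line -c b.c"}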
Differential Revision: https://reviews.llvm.org/D54487
Clang Differential Revision: https://reviews.llvm.org/D54489
llvm-svn: 349155