midl invokes the compiler on .idl files with /E. Before this change, we
would treat unrecognized inputs as object files. Now we pre-process to
stdout as expected. I checked that MSVC defines __cplusplus when invoked
this way, so treating the input as C++ seems like the right thing to do.
After this change, I was able to run midl like this with clang-cl:
$ midl -cpp_cmd clang-cl.exe foo.idl
Things worked for the example IDL file in the Microsoft documentation,
but beyond that, I don't know if this will work well.
Fixes PR40140
llvm-svn: 350072
Gentoo supports combining clang toolchain with GNU binutils, and many
users actually do that. As -faddrsig is not supported by GNU strip,
this results in a lot of warnings. Disable it by default and let users
enable it explicitly if they want it, with the intent of reevaluating
when the underlying feature becomes standardized.
See also: https://bugs.gentoo.org/667854
Differential Revision: https://reviews.llvm.org/D56047
llvm-svn: 350028
Add support for distinguishing plain Gentoo distribution, and a unit
test for it. This is going to be used to introduce distro-specific
customizations in the driver code; most notably, it is going to be used
to disable -faddrsig.
Differential Revision: https://reviews.llvm.org/D56024
llvm-svn: 350027
If -analyzer-config is passed through -Xanalyzer, it is not found, because the
driver only looks for it among the arguments that follow -Xclang.
Additionally, don't emit -analyzer-config-compatibility-mode for *every*
-analyzer-config flag we encounter; one is enough.
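For reference, a minimal sketch of the two driver spellings that should now behave
the same (the analyzer-config key/value shown is just an illustrative example):
  // test.cpp
  int main() {}
  // clang --analyze -Xclang    -analyzer-config -Xclang    mode=shallow test.cpp
  // clang --analyze -Xanalyzer -analyzer-config -Xanalyzer mode=shallow test.cpp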
https://reviews.llvm.org/D55823
rdar://problem/46504165
llvm-svn: 349866
Since r348038 we emit an error every time an -analyzer-config option is not
found. The driver, however, suppresses this error with another flag,
-analyzer-config-compatibility-mode, so backwards compatibility is maintained,
while analyzer developers still enjoy the new typo-free experience.
The backwards compatibility turns out to be still broken when the -analyze
action is not specified; it is still possible to specify -analyzer-config
in that case. This should be fixed now.
Patch by Kristóf Umann!
Differential Revision: https://reviews.llvm.org/D55823
rdar://problem/46504165
llvm-svn: 349824
Replace multiple comparisons of getOS() value with FreeBSD, NetBSD,
OpenBSD and DragonFly with matching isOS*BSD() methods. This should
improve the consistency of coding style without changing the behavior.
Direct getOS() comparisons were left in place wherever they were used in a
switch or switch-like context.
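A minimal sketch of the shape of this cleanup (illustrative only, not actual
driver code; the helper function is made up, the predicate names are taken from
the description above):
  #include "llvm/ADT/Triple.h"
  #include <vector>
  static void addPthread(const llvm::Triple &T,
                         std::vector<const char *> &CmdArgs) {
    // Before: if (T.getOS() == llvm::Triple::FreeBSD ||
    //             T.getOS() == llvm::Triple::NetBSD)
    // After: the matching predicate helpers.
    if (T.isOSFreeBSD() || T.isOSNetBSD())
      CmdArgs.push_back("-lpthread");
  }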
Differential Revision: https://reviews.llvm.org/D55916
llvm-svn: 349752
NetBSD intends to support only reentrant interfaces in interceptors.
When -lpthread is used without _REENTRANT defined, things are
not guaranteed to work.
This is especially important for <stdio.h> and sanitization of
interfaces around FILE. Some APIs have alternative modes depending
on the _REENTRANT definition, and NetBSD intends to support sanitization
of the _REENTRANT ones.
Differential Revision: https://reviews.llvm.org/D55654
llvm-svn: 349650
Avoid passing -faddrsig by default on NetBSD. This platform is still
using old GNU binutils that crashes on executables containing those
sections.
Differential Revision: https://reviews.llvm.org/D55828
llvm-svn: 349647
NFC for targets other than PS4.
Respect -nostdlib and -nodefaultlibs when enabling asan or ubsan.
Differential Revision: https://reviews.llvm.org/D55712
llvm-svn: 349508
For targets where SEH exceptions are used by default (on MinGW,
only x86_64 so far), -munwind-tables is added automatically. If
SEH exceptions are available on a target but not enabled by
default yet (aarch64), we need to pass -munwind-tables when
-fseh-exceptions is specified.
Differential Revision: https://reviews.llvm.org/D55749
llvm-svn: 349452
Summary:
Add an option to initialize automatic variables with either a pattern or with
zeroes. The default is still that automatic variables are uninitialized. Also
add an attribute to request uninitialized storage on a per-variable basis, mainly to
disable initialization of large stack arrays when deemed too expensive.
This isn't meant to change the semantics of C and C++. Rather, it's meant to be
a last-resort when programmers inadvertently have some undefined behavior in
their code. This patch aims to make undefined behavior hurt less, which
security-minded people will be very happy about. Notably, this means that
there's no inadvertent information leak when:
- The compiler re-uses stack slots, and a value is used uninitialized.
- The compiler re-uses a register, and a value is used uninitialized.
- Stack structs / arrays / unions with padding are copied.
This patch only addresses stack and register information leaks. There are many
more infoleaks that we could address, and much more undefined behavior that
could be tamed. Let's keep this patch focused, and I'm happy to address related
issues elsewhere.
To keep the patch simple, only some `undef` is removed for now, see
`replaceUndef`. The padding-related infoleaks are therefore not all gone yet.
This will be addressed in a follow-up, mainly because addressing padding-related
leaks should be a stand-alone option which is implied by variable
initialization.
There are three options when it comes to automatic variable initialization (a
short conceptual sketch follows this list):
0. Uninitialized
This is C and C++'s default. It's not changing. Depending on code
generation, a programmer who runs into undefined behavior by using an
uninitialized automatic variable may observe any previous value (including
program secrets), or any value which the compiler saw fit to materialize on
the stack or in a register (this could be to synthesize an immediate, to
refer to code or data locations, to generate cookies, etc).
1. Pattern initialization
This is the recommended initialization approach. Pattern initialization's
goal is to initialize automatic variables with values which will likely
transform logic bugs into crashes down the line, are easily recognizable in
a crash dump, without being values which programmers can rely on for useful
program semantics. At the same time, pattern initialization tries to
generate code which will optimize well. You'll find the following details in
`patternFor`:
- Integers are initialized with repeated 0xAA bytes (infinite scream).
- Vectors of integers are also initialized with infinite scream.
- Pointers are initialized with infinite scream on 64-bit platforms because
it's an unmappable pointer value on architectures I'm aware of. Pointers
are initialized to 0x000000AA (small scream) on 32-bit platforms because
32-bit platforms don't consistently offer unmappable pages. When they do
it's usually the zero page. As people try this out, I expect that we'll
want to allow different platforms to customize this, let's do so later.
- Vectors of pointers are initialized the same way pointers are.
- Floating point values and vectors are initialized with a negative quiet
NaN with repeated 0xFF payload (e.g. 0xffffffff and 0xffffffffffffffff).
NaNs are nice (here, anyway) because they propagate on arithmetic, making
it more likely that entire computations become NaN when a single
uninitialized value sneaks in.
- Arrays are initialized to their homogeneous elements' initialization
value, repeated. Stack-based Variable-Length Arrays (VLAs) are
runtime-initialized to the allocated size (no effort is made for negative
size, but zero-sized VLAs are untouched even if technically undefined).
- Structs are initialized to their heterogeneous elements' initialization
values. Zero-size structs are initialized as 0xAA since they're allocated
a single byte.
- Unions are initialized using the initialization for the largest member of
the union.
Expect the values used for pattern initialization to change over time, as we
refine heuristics (both for performance and security). The goal is truly to
avoid injecting semantics into undefined behavior, and we should be
comfortable changing these values when there's a worthwhile point in doing
so.
Why so much infinite scream? Repeated byte patterns tend to be easy to
synthesize on most architectures, and otherwise memset is usually very
efficient. For values which aren't entirely repeated byte patterns, LLVM
will often generate code which does memset + a few stores.
2. Zero initialization
Zero initialize all values. This has the unfortunate side-effect of
providing semantics to otherwise undefined behavior; programs therefore
might start to rely on this behavior, and that's sad. However, some
programmers believe that pattern initialization is too expensive for them,
and data might show that they're right. The only way to make these
programmers wrong is to offer zero-initialization as an option, figure out
where they are right, and optimize the compiler into submission. Until the
compiler provides acceptable performance for all security-minded code, zero
initialization is a useful (if blunt) tool.
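A minimal conceptual sketch of the two new modes on plain locals (the values
shown are illustrative and may change; this is not literally how the compiler
lowers it):
  void use(int, void *);
  void example() {
    // -ftrivial-auto-var-init=pattern: roughly as if written as
    int i = 0xAAAAAAAA;                      // repeated 0xAA bytes ("infinite scream")
    void *p = (void *)0xAAAAAAAAAAAAAAAAull; // unmappable pointer on 64-bit targets
    // -ftrivial-auto-var-init=zero: both would instead start out as zero.
    // Default (uninitialized): neither variable gets any initialization.
    use(i, p);
  }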
I've been asked for a fourth initialization option: user-provided byte value.
This might be useful, and can easily be added later.
Why is an out-of-band initialization mechanism desired? We could instead use
-Wuninitialized! Indeed we could, but then we're forcing the programmer to
provide semantics for something which doesn't actually have any (it's
uninitialized!). It's then unclear whether `int derp = 0;` lends meaning to `0`,
or whether it's just there to shut that warning up. It's also way easier to use
a compiler flag than it is to manually and intelligently initialize all values
in a program.
Why not just rely on static analysis? Because it cannot reason about all dynamic
code paths effectively, and it has false positives. It's a great tool, could get
even better, but it's simply incapable of catching all uses of uninitialized
values.
Why not just rely on memory sanitizer? Because it's not universally available,
has a 3x performance cost, and shouldn't be deployed in production. Again, it's
a great tool, it'll find the dynamic uses of uninitialized variables that your
test coverage hits, but it won't find the ones that you encounter in production.
What's the performance like? Not too bad! Previous publications [0] have cited
2.7 to 4.5% averages. We've committed a few patches over the last few months to
address specific regressions, both in code size and performance. In all cases,
the optimizations are generally useful, but variable initialization benefits
from them a lot more than regular code does. We've got a handful of other
optimizations in mind, but the code is in good enough shape and has found enough
latent issues that it's a good time to get the change reviewed, checked in, and
have others kick the tires. We'll continue reducing overheads as we try this out
on diverse codebases.
Is it a good idea? Security-minded folks think so, and apparently so does the
Microsoft Visual Studio team [1] who say "Between 2017 and mid 2018, this
feature would have killed 49 MSRC cases that involved uninitialized struct data
leaking across a trust boundary. It would have also mitigated a number of bugs
involving uninitialized struct data being used directly.". They seem to use pure
zero initialization, and claim to have taken the overheads down to within noise.
Don't just trust Microsoft though, here's another relevant person asking for
this [2]. It's been proposed for GCC [3] and LLVM [4] before.
What are the caveats? A few!
- Variables declared in unreachable code, and used later, aren't initialized.
This includes goto, Duff's device, and other objectionable uses of switch. This should
instead be a hard-error in any serious codebase.
- Volatile stack variables are still weird. That's pre-existing, it's really
the language's fault and this patch keeps it weird. We should deprecate
volatile [5].
- As noted above, padding isn't fully handled yet.
I don't think these caveats make the patch untenable because they can be
addressed separately.
Should this be on by default? Maybe, in some circumstances. It's a conversation
we can have when we've tried it out sufficiently, and we're confident that we've
eliminated enough of the overheads that most codebases would want to opt-in.
Let's keep our precious undefined behavior until that point in time.
How do I use it:
1. On the command-line:
-ftrivial-auto-var-init=uninitialized (the default)
-ftrivial-auto-var-init=pattern
-ftrivial-auto-var-init=zero -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang
2. Using an attribute:
int dont_initialize_me __attribute((uninitialized));
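For example (a sketch; the helper names are made up, and it assumes building
with -ftrivial-auto-var-init=pattern):
  void consume(char *);
  void handler() {
    // Opt this large buffer out of the automatic initialization to avoid the cost:
    char scratch[4096] __attribute((uninitialized));
    consume(scratch);
  }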
[0]: https://users.elis.ugent.be/~jsartor/researchDocs/OOPSLA2011Zero-submit.pdf
[1]: https://twitter.com/JosephBialek/status/1062774315098112001
[2]: https://outflux.net/slides/2018/lss/danger.pdf
[3]: https://gcc.gnu.org/ml/gcc-patches/2014-06/msg00615.html
[4]: 776a0955ef
[5]: http://wg21.link/p1152
I've also posted an RFC to cfe-dev: http://lists.llvm.org/pipermail/cfe-dev/2018-November/060172.html
<rdar://problem/39131435>
Reviewers: pcc, kcc, rsmith
Subscribers: JDevlieghere, jkorous, dexonsmith, cfe-commits
Differential Revision: https://reviews.llvm.org/D54604
llvm-svn: 349442
Summary:
The MSVC exception specifier for noexcept function types has changed
from the prior default of "Z" to "_E" if the function cannot throw when
compiling with /std:c++17.
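A small sketch of the kind of function type affected when compiling with
clang-cl and /std:c++17 (the exact mangled strings are not reproduced here):
  void callee() noexcept;
  // The parameter's function-pointer type carries the noexcept specification,
  // which is now encoded as "_E" instead of "Z" in the MSVC mangling.
  void registerCallback(void (*cb)() noexcept);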
Patch by Zachary Henkel!
Reviewers: zturner, rnk
Reviewed By: rnk
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D55685
llvm-svn: 349414
is not specified
The -target option allows the user to specify the build target using an LLVM
triple. The triple includes the arch, so the -arch option is redundant.
This should work just as well without the -arch. However, the driver has a bug
in which it doesn't target the "Cyclone" CPU for darwin if -target is used
without -arch. This commit fixes this issue.
rdar://46743182
Differential Revision: https://reviews.llvm.org/D55731
llvm-svn: 349382
On Darwin, using '-arch x86_64h' would always override the option passed
through '-march'.
This patch allows users to use '-march' with x86_64h, while keeping the
default of 'core-avx2'.
Differential Revision: https://reviews.llvm.org/D55775
llvm-svn: 349381
pass in the -target-sdk-version to the compiler and backend
This commit adds support for reading the SDKSettings.json file in the Darwin
driver. This file is used by the driver to determine the SDK's version, and it
uses that information to pass it down to the compiler using the new
-target-sdk-version= option. This option is then used to set the appropriate
SDK Version module metadata introduced in r349119.
Note: I had to adjust the two AST tests, as the SDKROOT environment variable
on macOS caused the SDK version to be picked up for the compilation of the
source file but not the AST.
rdar://45774000
Differential Revision: https://reviews.llvm.org/D55673
llvm-svn: 349380
Implement options in clang to enable recording the driver command-line
in an ELF section.
Implement a new special named metadata, llvm.commandline, to support
frontends embedding their command-line options in IR/ASM/ELF.
This differs from the GCC implementation in some key ways:
* In GCC there is only one command-line possible per compilation-unit;
in LLVM it mirrors llvm.ident and multiple are allowed.
* In GCC individual options are separated by NULL bytes; in LLVM entire
command-lines are separated by NULL bytes. The advantage of the GCC
approach is to clearly delineate options in the face of embedded
spaces. The advantage of the LLVM approach is to support merging
multiple command-lines unambiguously, while handling embedded spaces
with escaping.
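A usage sketch; the driver spelling below is an assumption based on the Clang
side of this change (D54489), so double-check it against clang's documentation:
  // foo.cpp -- compiled with, e.g.:  clang++ -frecord-command-line -c foo.cpp
  // The full command line is recorded through llvm.commandline and ends up in
  // a dedicated section of foo.o (multiple command lines merge, like llvm.ident).
  int foo() { return 42; }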
Differential Revision: https://reviews.llvm.org/D54487
Clang Differential Revision: https://reviews.llvm.org/D54489
llvm-svn: 349155
Summary:
Added support for the -gline-directives-only option and fixed the logic of
the debug info for CUDA devices. If the optimization level is -O0, the
--[no-]cuda-noopt-device-debug options do not affect the debug info level.
If the optimization level is above -O0, debug info is requested, and
--no-cuda-noopt-device-debug is used (or no --cuda-noopt-device-debug is
given), the optimization level for the device code is kept and debug
directives are emitted.
If the optimization level is above -O0, debug info is requested, and the
--cuda-noopt-device-debug option is used, optimization is disabled for the
device code and the required debug info is emitted.
Reviewers: tra, echristo
Subscribers: aprantl, guansong, JDevlieghere, cfe-commits
Differential Revision: https://reviews.llvm.org/D51554
llvm-svn: 348930
It is faster to directly call the ObjC runtime for methods such as alloc/allocWithZone instead of sending a message for those selectors.
This patch adds support for converting messages to alloc/allocWithZone to their equivalent runtime calls.
Tests are included for the positive case of applying this transformation, negative tests ensuring that we only convert "alloc" to objc_alloc (not "alloc2"), and a driver test to ensure we enable this only for supported runtime versions.
Reviewed By: rjmccall
https://reviews.llvm.org/D55349
llvm-svn: 348687
The flag -fdebug-compilation-dir is useful to make generated .o files
independent of the path of the build directory, without making the compile
command-line dependent on the path of the build directory, like
-fdebug-prefix-map requires. This change makes it so that the driver can
forward the flag to -cc1as, like it already can for -cc1. We might want to
consider making -fdebug-compilation-dir a driver flag in a follow-up.
(Since -fdebug-compilation-dir defaults to PWD, it's already possible to get
this effect by setting PWD, but explicit compiler flags are better than env
vars, because e.g. ninja tracks command lines and reruns commands that change.)
Somewhat related to PR14625.
Differential Revision: https://reviews.llvm.org/D55377
llvm-svn: 348515
Summary:
The intention is to make the tools replaying compilations from 'compile_commands.json'
(clang-tidy, clangd, etc.) find the same standard library as the original compiler
specified in 'compile_commands.json'.
Previously, the library detection logic was in the frontend (InitHeaderSearch.cpp) and relied
on the value of resource dir as an approximation of the compiler install dir. The new logic
uses the actual compiler install dir and is performed in the driver. This is consistent with
the C++ standard library detection on other platforms and allows overriding the resource dir
in the tools using the compile_commands.json without altering the
standard library detection mechanism. The tools have to override the resource dir to make sure
they use a consistent version of the builtin headers.
There is still logic in InitHeaderSearch that attempts to add the absolute includes for
the C++ standard library, so we keep passing the -stdlib=libc++ from the driver to the frontend
via cc1 args to avoid breaking that. In the long run, we should move this logic to the driver too,
but it could potentially break the library detection on other systems, so we don't tackle it in this
patch to keep its scope manageable.
This is a second attempt to fix the issue; the first one was committed in r346652 and reverted in r346675.
The original fix relied on an ad-hoc propagation (bypassing the cc1 flags) of the install dir from the
driver to the frontend's HeaderSearchOptions. Unsurprisingly, the propagation was incomplete; it broke
the libc++ detection in clang itself, which caused LLDB tests to break.
The LLDB tests pass with the new fix.
Reviewers: JDevlieghere, arphaman, EricWF
Reviewed By: arphaman
Subscribers: mclow.lists, ldionne, dexonsmith, ioeric, christof, kadircet, cfe-commits
Differential Revision: https://reviews.llvm.org/D54630
llvm-svn: 348365
This is an updated version of the D54576, which was reverted.
The problem was that SplitDebugName calls InputInfo::getFilename,
which asserts if the InputInfo given is not of type Filename:
  const char *getFilename() const {
    assert(isFilename() && "Invalid accessor.");
    return Data.Filename;
  }
At that point, however, it can be of type Nothing, and
we need to use getBaseInput(), like the original code did.
Differential revision: https://reviews.llvm.org/D55006
llvm-svn: 348352
When debugging a Boost build with a modified
version of Clang, I discovered that the PTH implementation
stores TokenKind in 8 bits. However, we currently have 368
TokenKinds.
The result is that the value gets truncated and the wrong token
gets picked up when including PTH files. It seems that this will
go wrong every time someone uses a token that uses the 9th bit.
Upon asking on IRC, it was brought up that this was a highly
experimental feature that was considered a failure. I discovered
via googling that BoostBuild (mostly Boost.Math) is the only user of
this
feature, using the CC1 flag directly. I believe that this can be
transferred over to normal PCH with minimal effort:
https://github.com/boostorg/build/issues/367
Based on advice on IRC and research showing that this is a nearly
completely unused feature, this patch removes it entirely.
Note: I considered leaving the build-flags in place and making them
emit an error/warning, however since I've basically identified and
warned the only user, it seemed better to just remove them.
Differential Revision: https://reviews.llvm.org/D54547
Change-Id: If32744275ef1f585357bd6c1c813d96973c4d8d9
llvm-svn: 348266
When the global new and delete operators aren't declared, Clang
provides and implicit declaration, but this declaration currently
always uses the default visibility. This is a problem when the
C++ library itself is being built with non-default visibility because
the implicit declaration will force the new and delete operators to
have the default visibility unlike the rest of the library.
The existing workaround is to use assembly to enforce the visibility:
https://fuchsia.googlesource.com/zircon/+/master/system/ulib/zxcpp/new.cpp#108
but that solution is not always available, e.g. in the case of
libFuzzer, which uses an internal version of libc++ that's also built
with -fvisibility=hidden where the existing behavior is causing issues.
This change introduces a new option -fvisibility-global-new-delete-hidden
which makes the implicit declaration of the global new and delete
operators hidden.
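Roughly speaking, with the new flag the implicit declarations behave as if they
had been written with hidden visibility (a sketch, not the compiler's actual
internal representation):
  #include <cstddef>
  __attribute__((visibility("hidden"))) void *operator new(std::size_t);
  __attribute__((visibility("hidden"))) void operator delete(void *) noexcept;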
Differential Revision: https://reviews.llvm.org/D53787
llvm-svn: 348234
Make sure that symbols needed to implement runtime support for gcov are
exported when using an export list on Darwin.
Without the clang driver exporting these symbols, the linker hides them,
resulting in tapi verification failures.
rdar://45944768
Differential Revision: https://reviews.llvm.org/D55151
llvm-svn: 348187
Summary:
This patch passes the option '-z max-page-size=4096' to lld through the clang driver.
This is for Android on AArch64 targets.
The lld default max page size is too large for AArch64, which produces larger .so files and images for arm64 device targets.
In this patch we set the default page size to 4KB for Android AArch64 targets instead.
Reviewers: srhines, danalbert, ruiu, chh, peter.smith
Reviewed By: srhines
Subscribers: javed.absar, kristof.beyls, cfe-commits, george.burgess.iv, llozano
Differential Revision: https://reviews.llvm.org/D55029
llvm-svn: 347897
This adds Hurd toolchain support to Clang's driver, in addition
to translating the triple from the Hurd-compatible form to
the actual triple registered in LLVM.
(Phabricator was stripping the empty files from the patch so I
manually created them)
Patch by sthibaul (Samuel Thibault)
Differential Revision: https://reviews.llvm.org/D54379
llvm-svn: 347833
Summary:
The Linux toolchain accidentally added "-u__llvm_runtime_variable" when "-fprofile-arcs -ftest-coverage" was used; it is not added when the "--coverage" option is used.
Using "-u__llvm_runtime_variable" generates an empty default.profraw file while an application built with "-fprofile-arcs -ftest-coverage" is running.
Reviewers: calixte, marco-c, sylvestre.ledru
Reviewed By: marco-c
Subscribers: vsk, cfe-commits
Differential Revision: https://reviews.llvm.org/D54195
llvm-svn: 347677
This reverts commit r347035 as it introduced assertion failures under
certain conditions. More information can be found here:
https://reviews.llvm.org/rL347035
llvm-svn: 347676
Summary:
-mno-speculative-load-hardening isn't a cc1 option; therefore,
before this change:
clang -mno-speculative-load-hardening hello.cpp
would have the following error:
error: unknown argument: '-mno-speculative-load-hardening'
With this change, the driver only ever forwards -mspeculative-load-hardening
(which is a CC1 option), based on which flag was passed to clang.
Also added a test that uses this option that fails if an error like the
above is ever thrown.
Thank you ericwf for help debugging and fixing this error.
Reviewers: chandlerc, EricWF
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D54763
llvm-svn: 347582
This reverts commit r347413: older versions of ld.gold that are used
by Android don't support --push/pop-state which broke sanitizer bots.
llvm-svn: 347430
Sanitizer runtime link deps handling passes --no-as-needed because of
PR15823, but it never undoes it and this flag may affect other libraries
that come later on the link line. To avoid this, wrap Sanitizer link
deps in --push/pop-state.
Differential Revision: https://reviews.llvm.org/D54805
llvm-svn: 347413
Even though these deps weren't needed, this makes the Fuchsia driver
better match other drivers, and it may be necessary when trying to
use different C libraries on Fuchsia.
Differential Revision: https://reviews.llvm.org/D54741
llvm-svn: 347378
Because SCS relies on system-provided runtime support, we can use it
together with any other sanitizer simply by linking the runtime for
the other sanitizer.
Differential Revision: https://reviews.llvm.org/D54735
llvm-svn: 347282