Diagnose always_inline functions that require target features that the
caller function doesn't provide. This matches the existing backend
failure to inline functions that don't have matching target features,
and diagnoses earlier in the case of always_inline.
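A hedged sketch of code that is now rejected up front (the target
feature and function names are illustrative):

__attribute__((target("avx2"), always_inline))
static inline int callee(int x) {
  return x + 1;
}

int caller(int x) {   // compiled without the avx2 feature
  return callee(x);   // diagnosed: always_inline callee needs a target
}                     // feature the caller doesn't provide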
Fix up a few test cases that were, in fact, invalid if you tried to
generate code from the backend with the specified target features, and
add a couple of tests to illustrate what's going on.
This should fix PR25246.
llvm-svn: 252834
Summary: It breaks the build for the ASTMatchers
Subscribers: klimek, cfe-commits
Differential Revision: http://reviews.llvm.org/D13893
llvm-svn: 250827
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
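A minimal usage sketch, assuming the __builtin_ms_va_start and
__builtin_ms_va_end spellings that accompany the new list type (the
function itself is illustrative):

int __attribute__((ms_abi)) sum_ints(int count, ...) {
  __builtin_ms_va_list args;
  __builtin_ms_va_start(args, count);
  int total = 0;
  for (int i = 0; i < count; ++i)
    total += __builtin_va_arg(args, int);  // va_arg on the ms_abi list
  __builtin_ms_va_end(args);
  return total;
}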
Depends on D1622.
Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
llvm-svn: 247941
The type of a member pointer is incomplete if it has no inheritance
model. This lets us reuse more general logic already embedded in clang.
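A hedged illustration of the effect under the Microsoft ABI with
-fms-extensions (class names are made up):

struct __single_inheritance A;  // explicit model: representation known
int A::*pa;                     // fine, sizeof(int A::*) can be computed

struct B;                       // no definition and no model yet
// int B::*pb;                  // treated like an incomplete type here
struct B { int m; };            // defining B fixes the inheritance model
int B::*pb = &B::m;             // fine now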
llvm-svn: 247346
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
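Roughly, the new type is shaped like the following sketch (not the
exact in-tree definition; member and accessor names are illustrative):

#include "clang/AST/CharUnits.h"
#include "llvm/IR/Value.h"
#include <cassert>

// Pairs a pointer with the alignment that loads and stores through it
// may assume; zero alignments are rejected up front.
class Address {
  llvm::Value *Pointer;
  clang::CharUnits Alignment;

public:
  Address(llvm::Value *Ptr, clang::CharUnits Align)
      : Pointer(Ptr), Alignment(Align) {
    assert(!Align.isZero() && "Address must not be zero-aligned");
  }
  llvm::Value *getPointer() const { return Pointer; }
  clang::CharUnits getAlignment() const { return Alignment; }
};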
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
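As a rough arithmetic sketch of that last point (the helper below is a
standalone illustration, not the CharUnits API): a field at offset 8 in
an object known to be 16-byte aligned can be assumed 8-byte aligned,
which beats min'ing with the field type's own alignment (e.g. 4 for an
int).

#include <algorithm>
#include <cassert>
#include <cstdint>

// Largest power of two known to divide an address that is Align-aligned
// plus Offset bytes (Align is assumed to be a power of two).
static uint64_t alignmentAtOffset(uint64_t Align, uint64_t Offset) {
  return Offset ? std::min(Align, Offset & -Offset) : Align;
}

int main() {
  assert(alignmentAtOffset(16, 8) == 8);    // not min(16, 4) == 4
  assert(alignmentAtOffset(16, 4) == 4);
  assert(alignmentAtOffset(16, 48) == 16);  // offset is a multiple of 16
  return 0;
}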
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
This patch depends on r246688 (D12341).
The goal is to make LLVM generate different code for these functions for a target that
has cheap branches (see PR23827 for more details):
int foo();

int normal(int x, int y, int z) {
  if (x != 0 && y != 0) return foo();
  return 1;
}

int crazy(int x, int y) {
  if (__builtin_unpredictable(x != 0 && y != 0)) return foo();
  return 1;
}
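A hedged portability sketch for callers that do not want a hard
dependency on the builtin (the UNPREDICTABLE macro name is made up):

#ifndef __has_builtin
#define __has_builtin(x) 0  // for compilers without __has_builtin
#endif

#if __has_builtin(__builtin_unpredictable)
#define UNPREDICTABLE(c) __builtin_unpredictable(c)
#else
#define UNPREDICTABLE(c) (c)
#endif

int foo();
int crazy_guarded(int x, int y) {
  if (UNPREDICTABLE(x != 0 && y != 0)) return foo();
  return 1;
}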
Differential Revision: http://reviews.llvm.org/D12458
llvm-svn: 246699
The new EH instructions make it possible for LLVM to generate .xdata
tables that the MSVC personality routines will be happy about. Because
this is experimental, hide it behind a -cc1 flag (-fnew-ms-eh).
Differential Revision: http://reviews.llvm.org/D11405
llvm-svn: 243767
This reverts commit r241244, but restricts SEH support to Win64.
This way, Chromium builds will still fall back on TUs with SEH, and
Clang developers can work on this incrementally upstream while patching
this small predicate locally. It'll also make it easier to review small
fixes.
llvm-svn: 241533
This is needed to use clang's command line option "-ftrap-function" for LTO and
enable changing the trap function name on a per-call-site basis.
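A hedged usage sketch (the handler name my_trap_handler is made up):
building the code below with -ftrap-function=my_trap_handler turns the
trap into a call to that function, and recording the name per call site
lets that choice survive LTO.

void my_trap_handler(void);  // assumed to be provided elsewhere

int checked_div(int a, int b) {
  if (b == 0)
    __builtin_trap();  // emitted as a call to my_trap_handler
  return a / b;
}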
rdar://problem/21225723
Differential Revision: http://reviews.llvm.org/D10831
llvm-svn: 241306
This re-lands r236052 and adds support for __exception_code().
In 32-bit SEH, the exception code is not available in eax. It is only
available in the filter function, and now we arrange to load it and
store it into an escaped variable in the parent frame.
As a consequence, we have to disable the "catch i8* null" optimization
on 32-bit and always generate a filter function. We can re-enable the
optimization if we detect an __except block that doesn't use the
exception code, but this probably isn't worth optimizing.
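A hedged sketch of the construct this enables on x86 (the faulting
store is only for illustration):

#include <windows.h>
#include <stdio.h>

int main(void) {
  __try {
    volatile int *p = 0;
    *p = 42;                                      // access violation
  } __except (EXCEPTION_EXECUTE_HANDLER) {
    // __exception_code() / GetExceptionCode() is usable here on x86 now;
    // the filter stores the code into the parent frame for this read.
    printf("caught 0x%lx\n", GetExceptionCode());
  }
  return 0;
}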
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D10852
llvm-svn: 241171
This patch adds initial support for the -fsanitize=kernel-address flag to Clang.
Right now it's quite restricted: only out-of-line instrumentation is supported, globals are not instrumented, and some GCC kasan flags are not supported.
Using this patch I am able to build and boot the KASan tree with LLVMLinux patches from github.com/ramosian-glider/kasan/tree/kasan_llvmlinux.
To disable KASan instrumentation for a given function, __attribute__((no_sanitize("kernel-address"))) can be used.
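For example (the function name is made up):

__attribute__((no_sanitize("kernel-address")))
void early_init(void) {
  // runs before the KASan shadow memory is set up, so it must not be
  // instrumented
}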
llvm-svn: 240131
This patch adds the -fsanitize=safe-stack command line argument for clang,
which enables the Safe Stack protection (see http://reviews.llvm.org/D6094
for the detailed description of the Safe Stack).
This patch is our implementation of the safe stack on top of Clang. The
patches make the following changes:
- Add -fsanitize=safe-stack and -fno-sanitize=safe-stack options to clang
to control safe stack usage (the safe stack is disabled by default).
- Add __attribute__((no_sanitize("safe-stack"))) attribute to clang that can be
used to disable the safe stack for individual functions even when enabled
globally.
Original patch by Volodymyr Kuznetsov and others at the Dependable Systems
Lab at EPFL; updates and upstreaming by myself.
Differential Revision: http://reviews.llvm.org/D6095
llvm-svn: 239762
- added the -fcuda-include-gpubinary option to incorporate the results of
device-side compilation into the host-side one.
- generate code to register GPU binaries and associated kernels
with the CUDA runtime, and clean up on exit.
- added a test case for init/deinit code generation.
Differential Revision: http://reviews.llvm.org/D9507
llvm-svn: 236765
The fact that PGO has a say in how these branch weights are determined
isn't interesting to most of CodeGen, so it makes more sense for this
API to be accessible via CodeGenFunction rather than CodeGenPGO.
llvm-svn: 236380
This is just the clang-side of 32-bit SEH. LLVM still needs work, and it
will deterministically fail to compile until it's feature complete.
On x86, all outlined handlers have no parameters, but they do implicitly
take the EBP value passed in and use it to address locals of the parent
frame. We model this with llvm.frameaddress(1).
This works (mostly), but __finally block inlining can break it. For now,
we apply the 'noinline' attribute. If we really want to inline __finally
blocks on 32-bit x86, we should teach the inliner how to untangle
frameescape and framerecover.
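A hedged sketch of the construct in question (names are made up); on
x86 the __finally body is outlined into a helper that addresses
'released' in guarded()'s frame via the EBP value described above:

void do_work(void);

void guarded(void) {
  int released = 0;
  __try {
    do_work();     // may raise an SEH exception
  } __finally {
    released = 1;  // outlined on x86; reads/writes guarded()'s frame
  }
}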
Promote the error diagnostic from codegen to sema. It now rejects SEH on
non-Windows platforms. LLVM doesn't implement SEH on non-x86 Windows
platforms, but there's nothing preventing it.
llvm-svn: 236052
The RegionCounter type does a lot of legwork, but most of it is only
meaningful within the implementation of CodeGenPGO. The uses elsewhere
in CodeGen generally just want to increment or read counters, so do
that directly.
llvm-svn: 235664
This reverts commit r234700. It turns out that the lifetime markers
were not the cause of the Chromium failures; they merely enabled
optimizations that uncovered a pre-existing bug.
llvm-svn: 235553
Even though these symbols are in a comdat group, the Microsoft linker
really wants them to have internal linkage.
I'm planning to tweak the mangling in a follow-up change. This is a
straight revert with a 1-line fix.
llvm-svn: 234613
Now that TailRecursionElimination has been fixed with r222354, the
threshold on size for lifetime marker insertion can be removed. This
only affects named temporaries, though, as the patch for unnamed temporaries
is still in progress.
My previous commit (r222993) was not handling debuginfo correctly, but
this could only be seen with some asan tests. Basically, lifetime markers
are just instrumentation for the compiler's usage and should not affect
debug information; however, the cleanup infrastructure was assuming it
contained only destructors, i.e. actual code to be executed, and was
setting the breakpoint for the end of the function to the closing '}', and
not the return statement, in order to show some destructors have been
called when leaving the function. This is wrong when the cleanups are only
lifetime markers, and this is now fixed.
llvm-svn: 234581
WinEHPrepare was going to have to pattern match the control flow merge
and split that the old lowering used, and that wasn't really feasible.
Now we can teach WinEHPrepare to pattern match this, which is much
simpler:
%fp = call i8* @llvm.frameaddress(i32 0)
call void @func(iN [01], i8* %fp)
This prototype happens to match the prototype used by the Win64 SEH
personality function, so this is really simple.
llvm-svn: 234532
The driver currently accepts but ignores the -freciprocal-math flag.
This patch passes the flag through and enables 'arcp' fast-math-flag
generation in IR.
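A hedged illustration of what 'arcp' permits (function and variable
names are made up); subject to the backend work noted below, the
division can be rewritten as a multiply by a reciprocal hoisted out of
the loop:

void scale(float *v, int n, float d) {
  for (int i = 0; i != n; ++i)
    v[i] = v[i] / d;  // may become v[i] * (1.0f / d) under 'arcp'
}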
Note that this change does not actually enable the optimization for
any target. The reassociation optimization that this flag specifies
was implemented by http://reviews.llvm.org/D6334 :
http://llvm.org/viewvc/llvm-project?view=revision&revision=222510
Because the optimization is done in the backend rather than IR,
the backend must be modified to understand instruction-level
fast-math-flags or a new function-level attribute must be created.
Also note that -freciprocal-math is independent of any target-specific
usage of reciprocal estimate hardware instructions. That requires
its own flag ('-mrecip').
https://llvm.org/bugs/show_bug.cgi?id=20912
llvm-svn: 234493
The test should be fixed. It was failing in NDEBUG builds due to a
missing '*' character in a regex. In asserts builds, the pattern matched
a single digit value, which became a double digit value in NDEBUG
builds. Go figure.
This reverts commit r234261.
llvm-svn: 234447
While capturing filters aren't very common, we'd like to outline
__finally blocks in the frontend to simplify -O0 EH preparation and
reduce code size. Finally blocks usually have captures, and this is
the first step towards that.
Currently we don't support capturing 'this' or VLAs.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D8825
llvm-svn: 234261
There are no widely deployed standard libraries providing sized
deallocation functions, so we have to punt and ask the user if they want
us to use sized deallocation. In the future, when such libraries are
deployed, we can teach the driver to detect them and enable this
feature.
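For reference, a hedged sketch of the sized-deallocation interface this
flag concerns; a real size-aware allocator would use the size argument
to pick a bin, while this sketch just forwards everything to
malloc/free:

#include <cstddef>
#include <cstdlib>
#include <new>

void *operator new(std::size_t n) {
  if (void *p = std::malloc(n ? n : 1))
    return p;
  throw std::bad_alloc();
}
void operator delete(void *p) noexcept { std::free(p); }
void operator delete(void *p, std::size_t /*size*/) noexcept {
  std::free(p);  // the size parameter is what enables the optimization
}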
N3536 claimed that a weak thunk from sized to unsized deallocation could
be emitted to avoid breaking backwards compatibility with standard
libraries not providing sized deallocation. However, this approach and
other variations don't work in practice.
With the weak function approach, the thunk has to have default
visibility in order to ensure that it is overridden by other DSOs
providing sized deallocation. Weak, default visibility symbols are
particularly expensive on MachO, so John McCall was considering
disabling this feature by default on Darwin. It also changes ELF
linking behavior, causing certain otherwise unreferenced object
files from an archive to be pulled into the link.
Our second approach was to use an extern_weak function declaration and
do an inline conditional branch at the deletion call site. This doesn't
work because extern_weak only works on MachO if you have some archive
providing the default value of the extern_weak symbol. Arranging to
provide such an archive has the same challenges as providing the symbol
in the standard library. Not to mention that extern_weak doesn't really
work on COFF.
Reviewers: rsmith, rjmccall
Differential Revision: http://reviews.llvm.org/D8467
llvm-svn: 232788