Create a new TargetCodeGenInfo for Windows on ARM so that functions can be
annotated with stack-probe-size (for /Gs and -mstack-probe-support). The
backend uses this annotation when lowering the frame to emit the stack probe
that Windows targets require.
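A minimal C sketch (my illustration, not part of the patch) of the kind of
frame that needs a probe: locals larger than the probe size (4096 bytes by
default on Windows) force the backend to touch the guard page before moving
the stack pointer.
// C sketch: a frame larger than the default 4096-byte probe size.
void touch_big_frame(void) {
  volatile char scratch[8192]; /* > stack-probe-size, so a probe is emitted */
  scratch[0] = 1;
  scratch[sizeof(scratch) - 1] = 2;
}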
llvm-svn: 227641
On targets which use the MSVCRT, setjmp is a macro which expands to
_setjmp or _setjmpex.
_setjmp and _setjmpex have a secret, hidden argument which is not listed
in the function prototype on X64 and WoA. This hidden argument always
seems to be the frame pointer.
_setjmpex isn't used on X86; there, _setjmp is magically replaced with a call
to _setjmp3. The second argument is zero for 'normal' setjmp/longjmp
pairs, otherwise it is a count of additional variadic arguments. This
is used when setjmp appears inside of a try or __try.
It is not safe to use a pointer to setjmp because _setjmp, _setjmpex and
_setjmp3 are not compatible with setjmp.
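For context, a hedged sketch of the source pattern this affects; on MSVCRT
targets the setjmp below is a macro, so the call actually reaches
_setjmp/_setjmpex (X64/WoA, with the hidden frame-pointer argument) or
_setjmp3 (X86):
// C sketch (Windows/MSVCRT):
#include <setjmp.h>
static jmp_buf env;
int guarded(void (*risky)(void)) {
  if (setjmp(env) != 0)   /* expands to _setjmp/_setjmpex, or _setjmp3 on X86 */
    return 0;             /* reached via longjmp */
  risky();
  return 1;
}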
llvm-svn: 227426
Summary:
It was used for interoperability with PNaCl's calling conventions, but
it's no longer needed.
Also remove NaCl*ABIInfo, which existed only to delegate to either the portable
or native ABIInfo, and remove checkCallingConvention, which is now a no-op
override.
Reviewers: jvoung
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D7206
llvm-svn: 227362
The backend won't run LowerExpect at -O0. In a debug LTO build, this leaves llvm.expect intrinsics in the LTO IR, and nothing downstream knows how to optimize them.
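As a hedged illustration (mine, not from the patch), this is the sort of
source that plants llvm.expect in the IR:
// C sketch:
int mostly_positive(int x) {
  if (__builtin_expect(x < 0, 0))   /* "unlikely" hint -> llvm.expect */
    return -x;
  return x;
}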
Thanks to Chandler for the suggestion and review.
Differential revision: http://reviews.llvm.org/D7183
llvm-svn: 227135
The lowering looks a lot like normal EH lowering, with the exception
that the exceptions are caught by executing filter expression code
instead of matching typeinfo globals. The filter expressions are
outlined into functions which are used in landingpad clauses where
typeinfo would normally go.
Major aspects that still need work:
- Non-call exceptions in __try bodies won't work yet. The plan is to
outline the __try block in the frontend to keep things simple.
- Filter expressions cannot use local variables until capturing is
implemented.
- __finally blocks will not run after exceptions. Fixing this requires
work in the LLVM SEH preparation pass.
The IR lowering looks like this:
// C code:
bool safe_div(int n, int d, int *r) {
  __try {
    *r = normal_div(n, d);
  } __except(_exception_code() == EXCEPTION_INT_DIVIDE_BY_ZERO) {
    return false;
  }
  return true;
}
; LLVM IR:
define i32 @filter(i8* %e, i8* %fp) {
  %ehptrs = bitcast i8* %e to i32**
  %ehrec = load i32** %ehptrs
  %code = load i32* %ehrec
  %matches = icmp eq i32 %code, u0xC0000094
  %matches.i32 = zext i1 %matches to i32
  ret i32 %matches.i32
}

define zeroext i1 @safe_div(i32 %n, i32 %d, i32* %r) {
  %rr = invoke i32 @normal_div(i32 %n, i32 %d)
      to label %normal unwind label %lpad
normal:
  store i32 %rr, i32* %r
  ret i1 true
lpad:
  %ehvals = landingpad {i8*, i32} personality i32 (...)* @__C_specific_handler
      catch i8* bitcast (i32 (i8*, i8*)* @filter to i8*)
  %ehptr = extractvalue {i8*, i32} %ehvals, 0
  %sel = extractvalue {i8*, i32} %ehvals, 1
  %filter_sel = call i32 @llvm.eh.seh.typeid.for(i8* bitcast (i32 (i8*, i8*)* @filter to i8*))
  %matches = icmp eq i32 %sel, %filter_sel
  br i1 %matches, label %eh.except, label %eh.resume
eh.except:
  ret i1 false
eh.resume:
  resume {i8*, i32} %ehvals
}
Reviewers: rjmccall, rsmith, majnemer
Differential Revision: http://reviews.llvm.org/D5607
llvm-svn: 226760
It fails on Windows due to another temporary being emitted first, so the
LLVM internal renaming scheme gives out the name
__block_descriptor_tmp1.
llvm-svn: 226757
Currently we emit DeferredDeclsToEmit in reverse order. This patch changes that.
The advantages of the change are that
* The output order is a bit closer to the source order. The change to
test/CodeGenCXX/pod-member-memcpys.cpp is a good example.
* If we decide to defer more, it will not cause as large changes in the
testcases as it would without this patch.
llvm-svn: 226751
Analogous to AVX2, these need to be implemented as macros to properly
propagate the immediate index operand.
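A hedged sketch of the macro pattern, using __builtin_shufflevector rather
than the actual intrinsics touched here (rotate_lanes and __v4si_sketch are
made-up names): the underlying builtin insists on integer constant
expressions, so a wrapper function taking the index as a runtime parameter
would not compile, while a macro forwards the literal intact.
// C sketch:
typedef int __v4si_sketch __attribute__((vector_size(16)));
#define rotate_lanes(v, n) \
  __builtin_shufflevector((v), (v), (n) & 3, ((n) + 1) & 3, ((n) + 2) & 3, ((n) + 3) & 3)
static __v4si_sketch demo(__v4si_sketch v) { return rotate_lanes(v, 1); }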
Part of <rdar://problem/17688758>
llvm-svn: 226496
Summary:
This fixes MultiSource/Applications/lemon on big-endian N32 by correcting the
handling of the argument to wait(). glibc defines it as a transparent union of
void* and int*. Such unions are passed according to the rules of the first
member so the argument must be passed as if it were a void* (sign extended from
i32 to i64) and not as a union (shifted to the upper bits of an i64).
wait() already behaves correctly on big-endian O32 and N64 since the union is
already the same size as an argument slot.
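A hedged sketch (the names are mine, not glibc's) of the declaration style in
question; because the union is transparent, the argument must travel exactly
as its first member, a plain void*:
// C sketch:
typedef union {
  void *uptr;   /* first member: the argument is passed by its rules */
  int *iptr;
} wait_arg_t __attribute__((__transparent_union__));
extern int my_wait(wait_arg_t status);   /* stand-in for glibc's wait() */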
Reviewers: atanasyan
Reviewed By: atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6963
llvm-svn: 225981
These are implemented with __builtin_shufflevector just like AVX.
We have some tests on the LLVM side to assert that these shufflevectors do
indeed generate the corresponding unpck instruction.
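As a hedged sketch of the pattern (an AVX-sized vector, not the exact new
intrinsics; the names are made up): an interleaving __builtin_shufflevector
that the backend is expected to match to an unpcklpd-style instruction.
// C sketch:
typedef double __v4df_sketch __attribute__((vector_size(32)));
static inline __v4df_sketch unpacklo_sketch(__v4df_sketch a, __v4df_sketch b) {
  return __builtin_shufflevector(a, b, 0, 4, 2, 6);
}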
Part of <rdar://problem/17688758>
llvm-svn: 225922
Summary:
The Mips ABIs treat pointers in the same way as integers. They are
sign-extended to 32-bit for O32, and 64-bit for N32/N64. This doesn't matter
for O32 and N64 where pointers are already the correct width but it does matter
for big-endian N32, where pointers are 32-bit and need promoting.
The caller side is already passing pointers correctly. This patch corrects the
callee.
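As a hedged, minimal illustration (my example): any callee with a pointer
parameter is affected, since on big-endian N32 the incoming 32-bit pointer
must be treated as sign-extended to 64 bits.
// C sketch:
int deref(const int *p) { return *p; }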
Reviewers: vmedic, atanasyan
Reviewed By: atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6812
llvm-svn: 225782
Introduce the following -fsanitize-recover flags:
- -fsanitize-recover=<list>: Enable recovery for selected checks or
group of checks. It is forbidden to explicitly list unrecoverable
sanitizers here (that is, "address", "unreachable", "return").
- -fno-sanitize-recover=<list>: Disable recovery for selected checks or
group of checks.
- -f(no-)?sanitize-recover is now a synonym for
-f(no-)?sanitize-recover=undefined,integer and will soon be deprecated.
These flags are parsed left to right, and the mask of "recoverable"
sanitizers is updated accordingly, much like what we do for -fsanitize= flags.
-fsanitize= and -fsanitize-recover= flag families are independent.
CodeGen change: if a single UBSan handler function implements multiple checks
with different recoverability settings, we now emit two handler calls instead
of one: the first for the set of "unrecoverable" checks and the second for the
set of "recoverable" checks. If all the checks implemented by a handler have
the same recoverability setting, the generated code is unchanged.
llvm-svn: 225719
The llvm IR until recently had no support for comdats. This was a problem when
targeting C++ on ELF/COFF, as just using weak linkage would leave quite a lot of
dead code and data in the executable (unless -ffunction-sections,
-fdata-sections and --gc-sections were used).
To fix the problem, llvm's codegen will just assume that any weak or linkonce
that is not in an explicit comdat should be output in one with the same name as
the global.
This unfortunately breaks cases like pr19848 where a weak symbol is not
expected to be part of any comdat.
Now that we have explicit comdats in the IR, we can finally get both cases
right.
This first patch just makes clang give explicit comdats to GlobalValues where
it is allowed to.
A followup patch to llvm will then stop implicitly producing comdats.
llvm-svn: 225705
Between this behavior and that fixed by r225083/r225000, I'll take the
latter over the former for now, but I'm immediately working on
understanding/addressing this behavior too.
(the fact that the code change in r225083 caused this change in behavior
is a bit troubling anyway - given that it looks & claims to be just a
performance thing)
llvm-svn: 225086
These still lower to the same intrinsics as before.
This is preparation for bounds checking the immediate on the AVX version of the builtin so we don't pass illegal immediates into the backend. Since SSE uses a smaller immediate, it's not possible to bounds check when using a shared builtin. Rather than creating a clang-specific builtin for the different immediate, I decided (after consulting with Chandler) that it was better to match gcc.
llvm-svn: 224879
The lit.cfg files only add .cpp to suffixes, so these tests used to never run,
oops. (Also tweak two of these tests in minor ways to make them actually pass.)
llvm-svn: 224718
Fixed an assertion in type checking of arguments and parameters on a function call when the arguments are pointers to VLAs.
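As a hedged illustration of the construct that used to trip the assertion
(the function names are made up):
// C sketch: arguments and parameters that are pointers to VLAs.
void consume(int n, int (*mat)[n]);   /* declaration only */
void driver(int n) {
  int (*rows)[n] = 0;
  consume(n, rows);
}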
Differential Revision: http://reviews.llvm.org/D6655
llvm-svn: 224504
use clang -cc1 matching the front end and backend. Fix up a couple
of tests that were testing aapcs for arm-linux-gnu.
The tests that remove the aapcs abi calling convention remove it
because the default triple matches what the backend uses
for the calling convention there and so it doesn't need to be
explicitly stated - see the code in TargetInfo.cpp.
llvm-svn: 224491
For MSVC compatibility, add the `__emit' builtin. This is used in the Windows
SDK headers, and must therefore be implemented as a builtin rather than an
intrinsic.
The `__emit' builtin provides a mechanism to emit a 16-bit opcode instruction
into the instruction stream. The value must be a compile-time constant expression. No
guarantees are made about the CPU and memory states after the execution of the
instruction.
Due to the unchecked nature of the builtin, only support this on Windows on ARM.
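A hedged usage sketch (Windows on ARM, Thumb mode; the function name is
mine): 0xDEFE encodes a Thumb 'udf' (permanently undefined) instruction, so
executing it traps.
// C sketch:
void force_trap(void) {
  __emit(0xDEFE);   /* operand must be a compile-time constant */
}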
llvm-svn: 224438
Summary:
Because GCC doesn't use $1 for code generation, inline assembly code can use $1 without having to add it to the clobbers list.
LLVM, on the other hand, does not shy away from using $1, and this can cause conflicts with inline assembly which assumes GCC-like code generation.
A solution to this problem is to make Clang automatically clobber $1 for all MIPS inline assembly.
This is not the optimal solution, but it seems like a necessary compromise, for now.
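As a hedged sketch of the GCC-era pattern being protected (the asm body is
illustrative, not from a real project): the code writes $1 without listing it
as a clobber, assuming the compiler never uses that register itself.
// C sketch (MIPS inline assembly):
int read_and_bump(int x) {
  int r;
  __asm__ volatile(".set noat\n\t"
                   "addiu $1, %1, 1\n\t"
                   "move %0, $1\n\t"
                   ".set at"
                   : "=r"(r)
                   : "r"(x));
  return r;
}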
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6638
llvm-svn: 224428
Currently clang hits assertions on x86-64 for any atomic operation with long double operands. This patch fixes codegen for such operations.
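A hedged sketch (my example) of the operations that used to assert on x86-64:
// C11 sketch:
#include <stdatomic.h>
static _Atomic long double acc;
void add_sample(long double v) {
  acc += v;                /* atomic read-modify-write on long double */
  atomic_store(&acc, v);   /* plain atomic store */
}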
Differential Revision: http://reviews.llvm.org/D6499
llvm-svn: 224230
having OptimizeNone remove them again, just don't add them in the
first place if the function already has OptimizeNone.
Note that MinSize can still appear due to attributes on different
declarations; a future patch will address that.
llvm-svn: 224047
Summary:
When -fsanitize-address-field-padding=1 is present
don't emit memcpy for copy constructor.
Thanks Nico for the extra test case.
Test Plan: regression tests
Reviewers: thakis, rsmith
Reviewed By: rsmith
Subscribers: rsmith, cfe-commits
Differential Revision: http://reviews.llvm.org/D6515
llvm-svn: 223563
is for each machine. Fix up darwin tests that were testing for
aapcs on armv7-ios when the actual ABI is apcs.
Should be no user visible change without -cc1.
llvm-svn: 223429
The ARM ABI specifies that all libcalls use the soft-FP calling convention
(even in hard-FP binaries). These days clang emits __mulsc3 / __muldc3
calls with the default (C) calling convention, which would be translated
into the AAPCS_VFP LLVM calling convention, and thus the result of complex
multiplication would be bogus.
Introduce a way for a target to explicitly specify the calling
convention for libcalls. Right now this is a temporary correctness
fix. Ultimately, we'll end up with an intrinsic for complex
multiplication, and all calling convention decisions for libcalls
will be moved into the backend.
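As a hedged sketch (the helper name __mulsc3 is compiler-rt's; the function
below is mine): the plain C complex multiplication that ends up going through
the soft-FP libcall.
// C sketch:
float _Complex cmul(float _Complex a, float _Complex b) {
  return a * b;   /* lowered via the __mulsc3 runtime helper */
}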
llvm-svn: 223123
Now that LLVM can count the registers needed to implement AAPCS rules, we don't
need to duplicate that logic here. This means we can drop the explicit padding
and also use more natural types in many cases (e.g. "struct { float arr[3]; }"
used to end up as "[2 x double]" to avoid holes on the stack).
The one wrinkle is that AAPCS va_arg was also using the register counting
machinery. But the local replacement isn't too bad.
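A hedged sketch of the "struct { float arr[3]; }" example mentioned above:
this argument can now be described with its natural type instead of the
padded [2 x double] shape.
// C sketch:
struct vec3 { float arr[3]; };
float sum3(struct vec3 v) { return v.arr[0] + v.arr[1] + v.arr[2]; }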
llvm-svn: 222904
Cygwin and MinGW fail to conform to the underlying system's structure passing
ABI. Make the check more precise to ensure that we correctly generate code for
the itanium environment.
llvm-svn: 222626
"global-init", "global-init-src" and "global-init-type" were originally
used to blacklist entities in ASan init-order checker. However, they
were never documented, and later were replaced by "=init" category.
Old blacklist entries should be converted as follows:
* global-init:foo -> global:foo=init
* global-init-src:bar -> src:bar=init
* global-init-type:baz -> type:baz=init
llvm-svn: 222401
This reverts commit r222144. Commit r222142 is being reverted due to
a spec2006/gcc execution-time regression.
Update mips-varargs test as well.
llvm-svn: 222397
Summary:
With this patch, passing a va_list to another function and reading 10 ints from
it works correctly on a big-endian target.
Based on a pair of patches by David Chisnall, one of which I've reworked
for the current trunk.
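As a hedged sketch of the pattern the patch fixes (my example, not the
original test):
// C sketch: a va_list handed to a helper and walked with va_arg.
#include <stdarg.h>
static int sum_ints(int count, va_list ap) {
  int total = 0;
  for (int i = 0; i < count; ++i)
    total += va_arg(ap, int);
  return total;
}
int sum(int count, ...) {
  va_list ap;
  va_start(ap, count);
  int total = sum_ints(count, ap);
  va_end(ap);
  return total;
}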
Reviewers: theraven, atanasyan
Reviewed By: theraven, atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6248
llvm-svn: 222339
Summary:
This distinguishes between -fpic and -fPIC now, with the additions in LLVM for
PIC level support.
Test Plan: No regressions
Reviewers: echristo, rafael
Reviewed By: rafael
Subscribers: rnk, emaste, llvm-commits
Differential Revision: http://reviews.llvm.org/D5400
llvm-svn: 222227
used inside blocks. It fixes a crash in naming code
for __func__ etc. when used in a block declared globally.
It also brings back the old naming convention for
predefined expressions, which had been broken. rdar://18961148
llvm-svn: 222065
Summary:
Ok, here is a small addition to D6217 aiming to preserve the old Darwin behavior wrt typedef'ed types. The actual change to SemaChecking turned out to be pretty gross, in particular:
1. We need to extract the typedef'ed type for proper diagnostics
2. We need to walk over paren expressions as well
Reviewers: chandlerc, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6256
llvm-svn: 222044
VSX makes the "vector long long" and "vector double" types available.
This patch enables the vec_perm interface for these types. The same
builtin is generated regardless of the specified type, so no
additional work or testing is needed in the back end. Tests are added
to ensure this builtin is generated by the front end.
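A hedged usage sketch (requires VSX, e.g. -mvsx; the function name is mine)
of one of the newly enabled overloads:
// C sketch:
#include <altivec.h>
vector double reorder(vector double a, vector double b,
                      vector unsigned char pattern) {
  return vec_perm(a, b, pattern);
}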
llvm-svn: 221988
This patch adds builtin support for xvdivdp and xvdivsp, along with a
new test case. The builtins are accessed using vec_div in altivec.h.
Builtins are listed (mostly) alphabetically there, so inserting these
changed the line numbers for deprecation warnings tested in
test/Headers/altivec-intrin.c.
There is a companion patch for LLVM.
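As a hedged usage sketch (again assuming -mvsx; the function name is mine):
// C sketch:
#include <altivec.h>
vector double ratio(vector double num, vector double den) {
  return vec_div(num, den);   /* maps onto xvdivdp for double elements */
}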
llvm-svn: 221984
Summary:
Consider the following nifty one-liner: (0 ? csqrtl(2.0f) : sqrtl(2.0f)). One can easily obtain such code from e.g. tgmath. Right now it triggers an assertion because we fail to do the promotion real => _Complex real.
The case was properly handled previously (by the old handleOtherComplexFloatConversion routine), but was forgotten in the current version. This seems to be fallout from r219557.
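A hedged sketch (my wording) of the tgmath-flavoured source that exercises
the missing real => _Complex real promotion in the conditional operator:
// C sketch:
#include <complex.h>
#include <math.h>
long double _Complex pick(int use_complex) {
  return use_complex ? csqrtl(2.0L) : sqrtl(2.0L);
}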
Reviewers: chandlerc, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6217
llvm-svn: 221821
This patch enables the vec_vsx_ld and vec_vsx_st intrinsics for
PowerPC, which provide programmer access to the lxvd2x, lxvw4x,
stxvd2x, and stxvw4x instructions.
New code in altivec.h defines these in terms of new builtins, which
are themselves defined in BuiltinsPPC.def. The builtins are converted
to LLVM intrinsics in CGBuiltin.cpp. Additional code is added to
builtins-ppc-vsx.c to verify the correct generation of the intrinsics.
Note that I moved the other VSX builtins so all VSX builtins will be
alphabetical in their own section in BuiltinsPPC.def.
There is a companion patch for LLVM.
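A hedged usage sketch of the new intrinsics (requires -mvsx; the function
name is mine):
// C sketch:
#include <altivec.h>
void copy2(const double *src, double *dst) {
  vector double v = vec_vsx_ld(0, src);   /* lxvd2x */
  vec_vsx_st(v, 0, dst);                  /* stxvd2x */
}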
llvm-svn: 221768
Summary: If we've added poisoned paddings to a type, do not emit memcpy for operator=.
Test Plan: regression tests.
Reviewers: majnemer, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6160
llvm-svn: 221739
Summary:
This change makes CodeGenFunction::EmitCheck() take several
conditions that need to be checked (all of them need to be true),
together with the sanitizer kinds these checks are for. This allows
splitting a single call into the UBSan runtime into several calls in
case the different sanitizer kinds have different recoverability
settings.
Tests should be fixed accordingly, I'm working on it.
Test Plan: regression test suite.
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6219
llvm-svn: 221716
Homogeneous aggregates on AAPCS_VFP ARM need to be passed *without* being
flattened (e.g. [2 x float] rather than "float, float") for various weird ABI
reasons. However, this isn't the case for anything else; further, we know at
the ABIArgInfo::getDirect callsites whether this flattening is allowed.
So, we can get more unified ARM code, with a simpler Clang, by just using that
knowledge directly.
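A hedged sketch of a homogeneous aggregate that must stay unflattened under
AAPCS_VFP (my example):
// C sketch:
struct pair_f { float x, y; };    /* HFA: passed as [2 x float], not "float, float" */
float norm1(struct pair_f p) { return p.x + p.y; }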
llvm-svn: 221559