This patch adds Hardware Transactional Memory (HTM) support as defined by ISA
2.07 (POWER8). The intrinsic support is based on the GCC one [1], with both
'PowerPC HTM Low Level Built-in Functions' and 'PowerPC HTM High Level Inline
Functions' implemented.
Along with the builtins, a new driver switch is added to enable/disable HTM
instruction support (-mhtm), plus a header with common definitions (mostly to
parse the TFHAR register value). The HTM switch also defines a preprocessor
builtin, HTM.
HTM usage requires a recent kernel with PPC HTM enabled. Tested on
powerpc64 and powerpc64le.
This is sent along with an LLVM patch to enable the builtins and option switch.
[1]
https://gcc.gnu.org/onlinedocs/gcc/PowerPC-Hardware-Transactional-Memory-Built-in-Functions.html
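A minimal usage sketch of the low-level builtins, following the pattern in the
GCC documentation above (compile with -mhtm; the fallback path and names are
illustrative, not part of this patch):

  #include <htmintrin.h>

  long counter;

  void increment(void)
  {
    if (__builtin_tbegin(0)) {
      /* Transaction successfully started. */
      ++counter;
      __builtin_tend(0);  /* commit */
    } else {
      /* Transaction failed or aborted: non-transactional fallback. */
      __sync_fetch_and_add(&counter, 1);
    }
  }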
Phabricator Review: http://reviews.llvm.org/D8248
llvm-svn: 233205
I'm about to commit a patch for:
http://reviews.llvm.org/D8567
That patch will break this one existing test case in Clang.
I'm not sure if this file is intending to create a Clang
dependency on the LLVM IR optimizer, but that's the
consequence of specifying -O3 on this test file.
My hope is to avoid buildbot rage by removing this check,
committing the LLVM patch, and then fixing this check.
I don't know how to make a simultaneous commit to Clang
and LLVM.
I will commit the correct CHECK line fix for this test
shortly.
llvm-svn: 233109
The PS4 target recognizes the #pragma comment() syntax as in -fms-extensions, but
only handles the case of #pragma comment(lib). This patch adds a warning if any
other arguments are encountered.
This patch also refactors the code in ParsePragma.cpp a little bit to make it
more obvious that some code is shared between -fms-extensions and PS4.
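For example (illustrative):

  #pragma comment(lib, "m")        // handled on PS4, as with -fms-extensions
  #pragma comment(linker, "/foo")  // now warns on PS4: only 'lib' is handled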
llvm-svn: 233015
On AArch64, the -fallow-half-args-and-returns option is the default.
With it, the half type is considered legal (rather than the i16 used
normally for __fp16), but no operations are, except conversions,
loads/stores, and the like.
The previous behavior was tantamount to saying LangOpts.NativeHalfType
was implied by LangOpts.HalfArgsAndReturns, which isn't true.
Instead, teach the various parts of CodeGen that already know about
half (using the intrinsics or not) about this weird in-between case,
where the "half" type is legal, but operations on it aren't.
This is a smaller intermediate step to the end-goal of removing the
intrinsic, always using "half", and letting the backend legalize.
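For instance, a function like this sketch (names illustrative) now keeps "half"
for the argument and return value, while the arithmetic is still promoted:

  __fp16 square(__fp16 x) {
    /* loads a half, converts to float, multiplies, truncates back to half */
    return x * x;
  }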
Builds on r232968.
rdar://20045970, rdar://17468714
Differential Revision: http://reviews.llvm.org/D8367
llvm-svn: 232971
Fix the CodeGen so that for types bigger than float, instead of
converting to fp16 via the sequence "InTy -> float -> fp16", we
perform conversions in just one step. This avoids the double
rounding which potentially changes results from a natural
IEEE-754 operation.
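For example, for a conversion like the following (illustrative), we now emit a
single double->half truncation instead of double->float->half:

  __fp16 narrow(double d) {
    return (__fp16)d;  /* one truncation, avoiding double rounding */
  }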
rdar://17594379, rdar://17468714
Differential Revision: http://reviews.llvm.org/D4602
Part of: http://reviews.llvm.org/D8367
llvm-svn: 232968
the target-cpu, if different from the triple's cpu, and
target-features as they're written that are passed down from the
driver.
Together with LLVM r232885 this should allow the LTO'ing of binaries
that contain modules compiled with different code generation options
on a subset of architectures with full backend support (x86, powerpc,
aarch64).
llvm-svn: 232888
Somehow, we never managed to implement this fully. We could constant
fold it like crazy, including constant folding complex arguments, etc.
But if you actually needed to generate code for it, error.
I've implemented it using the somewhat obvious lowering. Happy for
suggestions on a more clever way to lower this.
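The semantics in plain C, as a sketch of the obvious lowering (not the exact
IR we emit):

  int isinf_sign(double x) {
    /* returns -1 for -infinity, +1 for +infinity, 0 otherwise */
    return __builtin_isinf(x) ? (__builtin_signbit(x) ? -1 : 1) : 0;
  }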
Now, what you might ask does this have to do with modules? Fun story. So
it turns out that libstdc++ actually uses __builtin_isinf_sign to
implement std::isinf when in C++98 mode, but only inside of a template.
So if we're lucky, and we never instantiate that, everything is good.
But once we try to instantiate that template function, we need this
builtin. All of my customers at least are using C++11 and so they never
hit this code path.
But what does that have to do with modules? Fun story. So it turns out
that with modules we actually observe a bunch of bugs in libstdc++ where
their <cmath> header clobbers things exposed by <math.h>. To fix these,
we have to provide global function definitions to replace the macros
that C99 would have used. And it turns out that ::isinf needs to be
implemented using the exact semantics used by the C++98 variant of
std::isinf. And so I started to fix this bug in libstdc++ and ceased to
be able to compile libstdc++ with Clang.
The yaks are legion.
llvm-svn: 232778
location data is available. If pragma handling wants to look up the
position, it finds the LLVM buffer and wants to compare it with the
special built-in buffer, failing badly. Extend the special handling
of the built-in buffer to also check for the inline asm buffer. Expect
only a single asm buffer. Sort it between the built-in buffers and the
normal file buffers.
Fixes the assert part of PR 22576.
llvm-svn: 232389
In preparation for recommit of revision 232190, change tests so that they
are resilient to operands being commuted by the reassociate pass.
llvm-svn: 232206
This is nearly identical to the v*f128_si256 parts of r231792 and r232052.
AVX2 introduced proper integer variants of the hacked integer insert/extract
C intrinsics that were created for this same functionality with AVX1.
This should complete the front end fixes for insert/extract128 intrinsics.
Corresponding LLVM patch to follow.
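For reference, the AVX2 variants in question look like this in use
(illustrative; requires -mavx2):

  #include <immintrin.h>

  __m128i upper_lane(__m256i v) {
    return _mm256_extracti128_si256(v, 1);  /* proper integer extract */
  }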
llvm-svn: 232109
This is very much like D8088 (checked in at r231792).
Now that we've replaced the vinsertf128 intrinsics,
do the same for their extract twins.
Differential Revision: http://reviews.llvm.org/D8275
llvm-svn: 232052
Support for the QPX vector instruction set, used on the IBM BG/Q supercomputer,
has recently been added to the LLVM PowerPC backend. This vector instruction
set requires some ABI modifications because the ABI on the BG/Q expects
<4 x double> vectors to be provided with 32-byte stack alignment, and to be
handled as native vector types (similar to how Altivec vectors are handled on
mainline PPC systems). I've named this ABI variant elfv1-qpx, have made this
the default ABI when QPX is supported, and have updated the ABI handling code
to provide QPX vectors with the correct stack alignment and associated
register-assignment logic.
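A sketch of the affected type, using a GCC-style vector type (names
illustrative):

  typedef double v4d __attribute__((vector_size(32)));

  /* Under elfv1-qpx, a v4d argument is passed as a native vector with
     32-byte stack alignment rather than being split up. */
  v4d scale(v4d v) { return v + v; }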
llvm-svn: 231960
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.
This is the sibling patch for the LLVM half of this change:
http://reviews.llvm.org/D8086
Differential Revision: http://reviews.llvm.org/D8088
llvm-svn: 231792
This is a recommit of r231150, reverted in r231409. Turns out
that the -fsanitize=shift-base check implementation only works if the
shift exponent is valid; otherwise the check itself contains undefined
behavior.
Make sure we check that exponent is valid before we proceed to
check the base. Make sure that we actually report invalid values
of base or exponent if -fsanitize=shift-base or
-fsanitize=shift-exponent is specified, respectively.
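Concretely, for a shift like this sketch, the exponent check is verified
first, and the base check only runs for exponents already known to be valid:

  int shl(int x, int e) {
    /* -fsanitize=shift-exponent: first checks 0 <= e < bit-width;
       -fsanitize=shift-base: only then checks x for a valid base */
    return x << e;
  }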
llvm-svn: 231711
When passing a type with large alignment byval, we were specifying the type's
alignment rather than the alignment that the backend is actually capable of
producing (ABIAlign).
This would be OK (if odd) assuming the backend dealt with it properly;
unfortunately it doesn't, and trying to pass types with "byval align 16" can
cause it to set fp incorrectly and trash the stack during the prologue. I'll be
fixing that in a separate patch, but Clang should still be emitting IR that's
as close to its intent as possible.
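An illustrative case, assuming an ABI whose maximum guaranteed stack
alignment is below 16:

  struct __attribute__((aligned(16))) OverAligned { int x; };

  /* Previously emitted as "byval align 16"; now the byval alignment is
     capped at what the backend can actually produce (ABIAlign). */
  void consume(struct OverAligned s);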
rdar://20059039
llvm-svn: 231706
It's not that easy. If we're only checking -fsanitize=shift-base we
still need to verify that the exponent has a sane value, otherwise
UBSan-inserted checks for base will contain undefined behavior
themselves.
llvm-svn: 231409
Opt in Win64 to supporting sjlj lowering. We have the backend lowering,
so I think this was just an oversight because WinX86_64TargetCodeGenInfo
doesn't inherit from X86_64TargetCodeGenInfo.
llvm-svn: 231280
This test doesn't provide any value (it just checks that the frontend
produces exactly one compile unit), and it certainly isn't doing what
the comment says. Noticed via IRC review of my update to it in r231083.
llvm-svn: 231152
-fsanitize=shift is now a group that includes both these checks, so
existing users should not be affected.
This change introduces two new UBSan kinds that sanitize only the left-hand
side and the right-hand side of a shift operation. In practice, an invalid
exponent value (negative or too large) tends to cause more portability
problems, including inconsistencies between different compilers, crashes,
and inadequate results on non-x86 architectures, etc. That is,
-fsanitize=shift-exponent failures should generally be addressed first.
As a bonus, this change simplifies CodeGen implementation for emitting left
shift (separate checks for base and exponent are now merged by the
existing generic logic in EmitCheck()), and LLVM IR for these checks
(the number of basic blocks is reduced).
llvm-svn: 231150
Originally we were using the same GCC builtins to lower this AVX2 vector
intrinsic. Instead we will now lower it directly to a vector shuffle.
This will not only allow LLVM to generate better code, but it will also allow us
to remove the GCC intrinsics.
Reviewed by Andrea
This is related to rdar://problem/18742778.
llvm-svn: 231081
For global reg lvalue - use regular store through global register.
For simple lvalue - use simple atomic store.
For bitfields, vector elements, and extended vector elements: the original value of the whole storage (for vector elements) or of some aligned containing value (for bitfields) is atomically read, the part of this value corresponding to the given lvalue is modified, and an atomic compare-and-exchange operation then tries to write the modified value back (retrying if the storage was changed in the meantime).
Also, changes in this patch fix the bug for '#pragma omp atomic read' applied to extended vector elements.
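A hand-written C11 equivalent of the compare-and-exchange pattern for the
bitfield case (a sketch; the field layout and names are illustrative):

  #include <stdatomic.h>
  #include <stdint.h>

  /* storage holds a bitfield in its low 8 bits */
  void atomic_bitfield_write(_Atomic uint32_t *storage, uint32_t newval) {
    uint32_t old = atomic_load(storage);
    uint32_t desired;
    do {
      /* splice the new bitfield value into the previously read storage */
      desired = (old & ~0xFFu) | (newval & 0xFFu);
    } while (!atomic_compare_exchange_weak(storage, &old, desired));
  }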
Differential Revision: http://reviews.llvm.org/D7369
llvm-svn: 230736
The __finally emission block tries to be clever by removing unused continuation
edges if there's an unconditional jump out of the __finally block. With
exception edges, the EH continuation edge isn't always unused though and we'd
crash in a few places.
Just don't be clever. That makes the IR for __finally blocks a bit longer in
some cases (hence small and behavior-preserving changes to existing tests), but
it makes no difference in general and it fixes the last crash from PR22553.
http://reviews.llvm.org/D7918
llvm-svn: 230697
Currently, the NaN values emitted for MIPS architectures do not cover
the non-IEEE754-2008-compliant case. This change fixes the issue.
Patch by Vladimir Radosavljevic.
Differential Revision: http://reviews.llvm.org/D7882
llvm-svn: 230653
Original CL description:
Produce less broken basic block sequences for __finally blocks.
The way cleanups (such as PerformSEHFinally) get emitted is that codegen
generates some initialization code, then calls the cleanup's Emit() with the
insertion point set to a good place, then the cleanup is supposed to emit its
stuff, and then codegen might tack in a jump or similar to where the insertion
point is after the cleanup.
The PerformSEHFinally cleanup tries to just stash away the block it's supposed
to codegen into, and then does codegen later, into that stashed block. However,
after codegen'ing the __finally block, it used to set the insertion point to
the finally's continuation block (where the __finally cleanup goes when its body
is completed after regular, non-exceptional control flow). That's not correct,
as that block can (and generally does) already end in a jump. Instead,
remember the insertion point that was current before the __finally got emitted,
and restore that.
Fixes two of the crashes in PR22553.
llvm-svn: 230503
The way cleanups (such as PerformSEHFinally) get emitted is that codegen
generates some initialization code, then calls the cleanup's Emit() with the
insertion point set to a good place, then the cleanup is supposed to emit its
stuff, and then codegen might tack in a jump or similar to where the insertion
point is after the cleanup.
The PerformSEHFinally cleanup tries to just stash away the block it's supposed
to codegen into, and then does codegen later, into that stashed block. However,
after codegen'ing the __finally block, it used to set the insertion point to
the finally's continuation block (where the __finally cleanup goes when its body
is completed after regular, non-exceptional control flow). That's not correct,
as that block can (and generally does) already end in a jump. Instead,
remember the insertion point that was current before the __finally got emitted,
and restore that.
Fixes two of the crashes in PR22553.
llvm-svn: 230460
This is a necessary prerequisite for debugging with modules.
The .pcm files become containers that hold the serialized AST which allows
us to store debug information in the module file that can be shared by all
object files that were built importing the module.
This reapplies r230044 with a fixed configure+make build and updated
dependencies and testcase requirements. Over the last iteration this
version adds
- missing target requirements for testcases that specify an x86 triple,
- a missing clangCodeGen.a dependency to libClang.a in the make build.
rdar://problem/19104245
llvm-svn: 230423
The backend should now be able to handle all AAPCS rules based on argument
type, which means Clang no longer has to duplicate the register-counting logic
and the CodeGen can be significantly simplified.
llvm-svn: 230349
MSVC does not support C99 _Complex.
ICC, however, does support it on Windows x86_64 and, for purposes of parameter passing, treats it as equivalent to a struct containing two fields (for the real and imaginary parts).
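That is, for parameter passing a function like the following behaves as if the
argument were the struct below (a sketch; the struct name is illustrative):

  double _Complex twice(double _Complex z) { return 2.0 * z; }

  /* passed as if it were: */
  struct complex_double_equiv { double re, im; };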
Differential Revision: http://reviews.llvm.org/D7825
llvm-svn: 230315
Only lower __builtin_setjmp / __builtin_longjmp to
llvm.eh.sjlj.setjmp / llvm.eh.sjlj.longjmp, if the backend is known to
support them outside the Exception Handling context. The default
handling in LLVM codegen doesn't work and will create incorrect code.
The ARM backend on the other hand will assert if the intrinsics are
used.
llvm-svn: 230255
For now -funique-section-names is the default, so no change in default behavior.
The total .o size in a build of llvm and clang goes from 241687775 to 230649031
bytes if -fno-unique-section-names is used.
llvm-svn: 230031
Summary:
The definition for _mm256_insert_epi64 was taking an int, which would get
truncated before being inserted in the vector.
Original patch by Joshua Magee!
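An illustrative reproducer (assumes AVX):

  #include <immintrin.h>

  __m256i set_lane0(__m256i v) {
    /* with the int-typed prototype, the upper 32 bits of this constant
       were silently truncated before the insert */
    return _mm256_insert_epi64(v, 0x123456789ABCDEF0LL, 0);
  }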
Reviewers: bruno, craig.topper
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D7179
llvm-svn: 229811
Not all targets generate 'store atomic' instructions for
'_Atomic(_Complex int)'. Some targets use the __atomic_store builtin instead.
This commit makes the test accept either one.
llvm-svn: 229676
This is a patch for PR22563 ( http://llvm.org/bugs/show_bug.cgi?id=22563 ).
We were not correctly unwrapping a single 256-bit AVX vector that was defined as an array of 1 inside a struct.
We would generate a <4 x float> param/return value instead of <8 x float> and lose half of the vector.
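The problematic shape, reconstructed from the description (names illustrative):

  typedef float v8f __attribute__((vector_size(32)));  /* one 256-bit vector */

  struct Wrapper { v8f data[1]; };  /* array of 1 inside a struct */

  /* Previously passed/returned as <4 x float>, losing the upper half;
     now correctly unwrapped to <8 x float>. */
  struct Wrapper passthrough(struct Wrapper w) { return w; }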
Differential Revision: http://reviews.llvm.org/D7614
llvm-svn: 229408
For #pragma comment(linker, ...) MSVC expects the comment string to be quoted, but for #pragma comment(lib, ...) the compiler itself quotes the library name.
Since this distinction disappears by the time the directive reaches the backend, move quoting for the "lib" version to the frontend.
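For example (illustrative):

  /* the user provides any quoting needed in the linker form: */
  #pragma comment(linker, "/include:\"_sym\"")
  /* in the lib form, the compiler now quotes the name itself: */
  #pragma comment(lib, "kernel32")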
Differential Revision: http://reviews.llvm.org/D7653
llvm-svn: 229376
Bools are a little tricky: they are i8 in memory and must be coerced
back to i1 before further operations can be performed on them.
This fixes PR22577.
llvm-svn: 229204
The first change won't touch GEPOperators such as these, but the update
script only identifies them by the leading '(' after getelementptr or
'getelementptr inbounds', so update this test to at least have those
features to allow auto-migrating.
llvm-svn: 229198
The /volatile:ms semantics turn volatile loads and stores into atomic
acquire and release operations. This distinction is important because
volatile memory operations do not form a happens-before relationship
with non-atomic memory. This means that a volatile store is not
sufficient for implementing a mutex unlock routine.
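A sketch of what this means in practice (illustrative):

  volatile int lock_word;

  void unlock(void) {
    /* under /volatile:ms this store is emitted as an atomic release store,
       so it can legitimately publish preceding non-atomic writes */
    lock_word = 0;
  }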
Differential Revision: http://reviews.llvm.org/D7580
llvm-svn: 229082
Summary:
This patch installs an InlineAsmDiagnosticsHandler to avoid the crash
report when the input is bitcode and the bitcode contains invalid inline
assembly. The handler will simply print the same error message that will
print from the backend.
Add CHECK in test-case
Reviewers: echristo, rafael
Reviewed By: rafael
Subscribers: rafael, cfe-commits
Differential Revision: http://reviews.llvm.org/D7568
llvm-svn: 228898
a non-uniqueable temporary node that is only turned into a permanent
unique or distinct node after it is finished.
Otherwise an intermediate node may get accidentally uniqued with another
node as illustrated by the testcase.
Paired commit with LLVM.
llvm-svn: 228855
Also removed unused builtins.
Original patch by Andrea Di Biagio!
Reviewers: craig.topper, nadav
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D7199
llvm-svn: 228481
modifiers on them. If we have a matching output constraint with
an early clobber make sure we don't propagate that to the input
constraint.
llvm-svn: 228422
After r228258, Clang started emitting C++ EH IR that LLVM wasn't ready
to deal with, even when exceptions were disabled with /EHs-. This time,
make /EHs- turn off -fexceptions while still emitting exceptional
constructs in functions using __try. Since Sema rejects C++ exception
handling constructs before CodeGen, landingpads should only appear in
such functions as the result of a __try.
llvm-svn: 228329
Previously we would simply double-emit the body of the __finally block,
but that doesn't work when it contains any kind of Decl, which we can't
double emit.
This fixes that by emitting the block once and branching into a shared
code region and then branching back out.
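An illustrative case that used to break under double emission (the function
names are made up):

  void cleanup(void);

  void func(void) {
    __try {
      /* ... */
    } __finally {
      int released = 1;  /* a Decl: cannot be emitted twice */
      if (released)
        cleanup();
    }
  }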
llvm-svn: 228222
Summary:
Named registers with the constraint "=&r" currently lose the early clobber flag
and turn into "=r" when converted to LLVM-IR. This patch correctly passes it on.
Reviewers: atanasyan
Reviewed By: atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D7346
llvm-svn: 228143
Create a new TargetCodeGenInfo for Windows on ARM to permit annotating the
functions with stack-probe-size (for /Gs and -mstack-probe-support) for
generating the stack probe necessary for Windows targets. This will be used by
the backend when lowering the frame to generate the stack probe appropriately.
llvm-svn: 227641
On targets which use the MSVCRT, setjmp is a macro which expands to
_setjmp or _setjmpex.
_setjmp and _setjmpex have a secret, hidden argument which is not listed
in the function prototype on X64 and WoA. This hidden argument always
seems to be the frame pointer.
_setjmpex isn't used on X86; there, _setjmp is magically replaced with a call
to _setjmp3. The second argument is zero for 'normal' setjmp/longjmp
pairs, otherwise it is a count of additional variadic arguments. This
is used when setjmp appears inside of a try or __try.
It is not safe to use a pointer to setjmp because _setjmp, _setjmpex and
_setjmp3 are not compatible with setjmp.
llvm-svn: 227426
Summary:
It was used for interoperability with PNaCl's calling conventions, but
it's no longer needed.
Also remove NaCl*ABIInfo which just existed to delegate to either the portable
or native ABIInfo, and remove checkCallingConvention which was now a no-op
override.
Reviewers: jvoung
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D7206
llvm-svn: 227362
The backend won't run LowerExpect at -O0. In a debug LTO build, this results in llvm.expect intrinsics being left in the LTO IR, and the LTO stage doesn't know how to optimize them.
Thanks to Chandler for the suggestion and review.
Differential revision: http://reviews.llvm.org/D7183
llvm-svn: 227135
The lowering looks a lot like normal EH lowering, with the exception
that the exceptions are caught by executing filter expression code
instead of matching typeinfo globals. The filter expressions are
outlined into functions which are used in landingpad clauses where
typeinfo would normally go.
Major aspects that still need work:
- Non-call exceptions in __try bodies won't work yet. The plan is to
outline the __try block in the frontend to keep things simple.
- Filter expressions cannot use local variables until capturing is
implemented.
- __finally blocks will not run after exceptions. Fixing this requires
work in the LLVM SEH preparation pass.
The IR lowering looks like this:
// C code:
bool safe_div(int n, int d, int *r) {
  __try {
    *r = normal_div(n, d);
  } __except(_exception_code() == EXCEPTION_INT_DIVIDE_BY_ZERO) {
    return false;
  }
  return true;
}

; LLVM IR:
define i32 @filter(i8* %e, i8* %fp) {
  %ehptrs = bitcast i8* %e to i32**
  %ehrec = load i32** %ehptrs
  %code = load i32* %ehrec
  %matches = icmp eq i32 %code, u0xC0000094
  %matches.i32 = zext i1 %matches to i32
  ret i32 %matches.i32
}

define i1 zeroext @safe_div(i32 %n, i32 %d, i32* %r) {
  %rr = invoke i32 @normal_div(i32 %n, i32 %d)
      to label %normal unwind to label %lpad

normal:
  store i32 %rr, i32* %r
  ret i1 1

lpad:
  %ehvals = landingpad {i8*, i32} personality i32 (...)* @__C_specific_handler
      catch i8* bitcast (i32 (i8*, i8*)* @filter to i8*)
  %ehptr = extractvalue {i8*, i32} %ehvals, i32 0
  %sel = extractvalue {i8*, i32} %ehvals, i32 1
  %filter_sel = call i32 @llvm.eh.seh.typeid.for(i8* bitcast (i32 (i8*, i8*)* @filter to i8*))
  %matches = icmp eq i32 %sel, %filter_sel
  br i1 %matches, label %eh.except, label %eh.resume

eh.except:
  ret i1 false

eh.resume:
  resume
}
Reviewers: rjmccall, rsmith, majnemer
Differential Revision: http://reviews.llvm.org/D5607
llvm-svn: 226760
It fails on Windows due to another temporary being emitted first, so the
LLVM internal renaming scheme gives out the name
__block_descriptor_tmp1.
llvm-svn: 226757
Currently we emit DeferredDeclsToEmit in reverse order. This patch changes that.
The advantages of the change are that
* The output order is a bit closer to the source order. The change to
test/CodeGenCXX/pod-member-memcpys.cpp is a good example.
* If we decide to defer more, it will not cause as large changes in the
testcases as it would without this patch.
llvm-svn: 226751
Analogous to AVX2, these need to be implemented as macros to properly
propagate the immediate index operand.
Part of <rdar://problem/17688758>
llvm-svn: 226496
Summary:
This fixes MultiSource/Applications/lemon on big-endian N32 by correcting the
handling of the argument to wait(). glibc defines it as a transparent union of
void* and int*. Such unions are passed according to the rules of the first
member so the argument must be passed as if it were a void* (sign extended from
i32 to i64) and not as a union (shifted to the upper bits of an i64).
wait() already behaves correctly on big-endian O32 and N64 since the union is
already the same size as an argument slot.
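A sketch of the glibc-style declaration involved (simplified, names
illustrative):

  /* simplified version of glibc's wait() argument type */
  typedef union {
    void *ptr;
    int  *iptr;
  } wait_arg __attribute__((__transparent_union__));

  /* per the rules of the first member, the argument is passed as a void*
     (sign-extended from i32 to i64 on N32), not as a union */
  extern int my_wait(wait_arg arg);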
Reviewers: atanasyan
Reviewed By: atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6963
llvm-svn: 225981
These are implemented with __builtin_shufflevector just like AVX.
We have some tests on the LLVM side to assert that these shufflevectors do
indeed generate the corresponding unpck instruction.
Part of <rdar://problem/17688758>
llvm-svn: 225922
Summary:
The Mips ABIs treat pointers in the same way as integers. They are
sign-extended to 32-bit for O32, and 64-bit for N32/N64. This doesn't matter
for O32 and N64 where pointers are already the correct width but it does matter
for big-endian N32, where pointers are 32-bit and need promoting.
The caller side is already passing pointers correctly. This patch corrects the
callee.
Reviewers: vmedic, atanasyan
Reviewed By: atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6812
llvm-svn: 225782
Introduce the following -fsanitize-recover flags:
- -fsanitize-recover=<list>: Enable recovery for selected checks or
group of checks. It is forbidden to explicitly list unrecoverable
sanitizers here (that is, "address", "unreachable", "return").
- -fno-sanitize-recover=<list>: Disable recovery for selected checks or
group of checks.
- -f(no-)?sanitize-recover is now a synonym for
-f(no-)?sanitize-recover=undefined,integer and will soon be deprecated.
These flags are parsed left to right, and the mask of "recoverable"
sanitizers is updated accordingly, much like what we do for -fsanitize= flags.
-fsanitize= and -fsanitize-recover= flag families are independent.
CodeGen change: if there is a single UBSan handler function responsible
for implementing multiple checks which have different recoverability settings,
then we emit two handler calls instead of one:
the first one for the set of "unrecoverable" checks, and another one for the
set of "recoverable" checks. If all checks implemented by a handler have the
same recoverability setting, then the generated code will be the same.
llvm-svn: 225719
The LLVM IR until recently had no support for comdats. This was a problem when
targeting C++ on ELF/COFF, as just using weak linkage would leave quite a few
dead bits in the executable (unless -ffunction-sections,
-fdata-sections and --gc-sections were used).
To fix the problem, llvm's codegen will just assume that any weak or linkonce
that is not in an explicit comdat should be output in one with the same name as
the global.
This unfortunately breaks cases like pr19848 where a weak symbol is not
expected to be part of any comdat.
Now that we have explicit comdats in the IR, we can finally get both cases
right.
This first patch just makes clang give explicit comdats to GlobalValues where
it is allowed to.
A followup patch to llvm will then stop implicitly producing comdats.
llvm-svn: 225705
Between this behavior and that fixed by r225083/r225000, I'll take the
latter over the former for now, but I'm immediately working on
understanding/addressing this behavior too.
(the fact that the code change in r225083 caused this change in behavior
is a bit troubling anyway - given that it looks & claims to be just a
performance thing)
llvm-svn: 225086
This still lowers to the same intrinsics as before.
This is preparation for bounds checking the immediate on the AVX version of the builtin so we don't pass illegal immediates into the backend. Since SSE uses a smaller immediate, it's not possible to bounds check when using a shared builtin. Rather than creating a clang-specific builtin for the different immediate, I decided (after consulting with Chandler) that it was better to match gcc.
llvm-svn: 224879
The lit.cfg files only add .cpp to suffixes, so these tests used to never run,
oops. (Also tweak two of these tests in minor ways to make them actually pass.)
llvm-svn: 224718
Fixed an assertion in type checking of arguments against parameters in a function call when the arguments are pointers to VLAs.
Differential Revision: http://reviews.llvm.org/D6655
llvm-svn: 224504
use clang -cc1 matching the front end and backend. Fix up a couple
of tests that were testing aapcs for arm-linux-gnu.
The test that removes the aapcs abi calling convention removes
them because the default triple matches what the backend uses
for the calling convention there and so it doesn't need to be
explicitly stated - see the code in TargetInfo.cpp.
llvm-svn: 224491
For MSVC compatibility, add the `__emit' builtin. This is used in the Windows
SDK headers, and must therefore be implemented as a builtin rather than an
intrinsic.
The `__emit' builtin provides a mechanism to emit a 16-bit opcode instruction
into the stream. The value must be a compile time constant expression. No
guarantees are made about the CPU and memory states after the execution of the
instruction.
Due to the unchecked nature of the builtin, only support this on Windows on ARM.
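A usage sketch (the specific opcode value is illustrative):

  void emit_breakpoint(void) {
    __emit(0xDEFE);  /* a 16-bit opcode; must be a compile-time constant */
  }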
llvm-svn: 224438
Summary:
Because GCC doesn't use $1 for code generation, inline assembly code can use $1 without having to add it to the clobbers list.
LLVM, on the other hand, does not shy away from using $1, and this can cause conflicts with inline assembly which assumes GCC-like code generation.
A solution to this problem is to make Clang automatically clobber $1 for all MIPS inline assembly.
This is not the optimal solution, but it seems like a necessary compromise, for now.
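An illustrative example of GCC-targeted inline assembly that relies on this
(the assembly body is made up):

  int add_one(int x) {
    int r;
    /* uses $1 ($at) without listing it in the clobbers; Clang now
       clobbers $1 implicitly for MIPS inline assembly */
    __asm__(".set noat\n\t"
            "li   $1, 1\n\t"
            "addu %0, %1, $1\n\t"
            ".set at"
            : "=r"(r)
            : "r"(x));
    return r;
  }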
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6638
llvm-svn: 224428
Currently clang hits assertions on x86-64 for any atomic operation with long double operands. This patch fixes codegen for such operations.
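An illustrative reproducer:

  _Atomic long double total;

  void add(long double v) {
    total += v;  /* any atomic op on long double used to assert on x86-64 */
  }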
Differential Revision: http://reviews.llvm.org/D6499
llvm-svn: 224230
having OptimizeNone remove them again, just don't add them in the
first place if the function already has OptimizeNone.
Note that MinSize can still appear due to attributes on different
declarations; a future patch will address that.
llvm-svn: 224047
Summary:
When -fsanitize-address-field-padding=1 is present
don't emit memcpy for copy constructor.
Thanks Nico for the extra test case.
Test Plan: regression tests
Reviewers: thakis, rsmith
Reviewed By: rsmith
Subscribers: rsmith, cfe-commits
Differential Revision: http://reviews.llvm.org/D6515
llvm-svn: 223563
is for each machine. Fix up darwin tests that were testing for
aapcs on armv7-ios when the actual ABI is apcs.
Should be no user visible change without -cc1.
llvm-svn: 223429
The ARM ABI specifies that all libcalls use the soft-FP calling convention
(even in hard-FP binaries). These days clang emits _mulsc3 / _muldc3
calls with the default (C) calling convention, which would be translated
into the AAPCS_VFP LLVM calling convention, and thus the result of complex
multiplication will be bogus.
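The affected pattern (illustrative):

  _Complex float mul(_Complex float a, _Complex float b) {
    return a * b;  /* lowers to the _mulsc3 libcall mentioned above, which
                      must use the soft-FP convention even in hard-FP builds */
  }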
Introduce a way for a target to specify explicitly calling
convention for libcalls. Right now this is temporary correctness
fix. Ultimately, we'll end with intrinsic for complex
multiplication and all calling convention decisions for libcalls
will be put into backend.
llvm-svn: 223123
Now that LLVM can count the registers needed to implement AAPCS rules, we don't
need to duplicate that logic here. This means we can drop the explicit padding
and also use more natural types in many cases (e.g. "struct { float arr[3]; }"
used to end up as "[2 x double]" to avoid holes on the stack).
The one wrinkle is that AAPCS va_arg was also using the register counting
machinery. But the local replacement isn't too bad.
llvm-svn: 222904
Cygwin and MinGW fail to conform to the underlying system's structure passing
ABI. Make the check more precise to ensure that we correctly generate code for
the itanium environment.
llvm-svn: 222626
"global-init", "global-init-src" and "global-init-type" were originally
used to blacklist entities in ASan init-order checker. However, they
were never documented, and later were replaced by "=init" category.
Old blacklist entries should be converted as follows:
* global-init:foo -> global:foo=init
* global-init-src:bar -> src:bar=init
* global-init-type:baz -> type:baz=init
llvm-svn: 222401
This reverts commit r222144. Commit r222142 is being reverted due to
a spec2006/gcc execution-time regression.
Update mips-varargs test as well.
llvm-svn: 222397
Summary:
With this patch, passing a va_list to another function and reading 10 ints from
it works correctly on a big-endian target.
Based on a pair of patches by David Chisnall, one of which I've reworked
for the current trunk.
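The kind of code this fixes looks roughly like this sketch (names
illustrative):

  #include <stdarg.h>

  int sum10(va_list ap) {
    int total = 0;
    for (int i = 0; i < 10; ++i)
      total += va_arg(ap, int);  /* must read the correct half of each
                                    promoted slot on big-endian N32 */
    return total;
  }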
Reviewers: theraven, atanasyan
Reviewed By: theraven, atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6248
llvm-svn: 222339
Summary:
This distinguishes between -fpic and -fPIC now, with the additions in LLVM for
PIC level support.
Test Plan: No regressions
Reviewers: echristo, rafael
Reviewed By: rafael
Subscribers: rnk, emaste, llvm-commits
Differential Revision: http://reviews.llvm.org/D5400
llvm-svn: 222227
used inside blocks. It fixes a crash in naming code
for __func__ etc. when used in a block declared globally.
It also brings back the old naming convention for
predefined expressions, which had been broken. rdar://18961148
llvm-svn: 222065
Summary:
Ok, here is a somewhat related addition to D6217, aiming to preserve the old darwin behavior wrt the typedef'ed types. The actual change to SemaChecking turned out to be pretty gross, in particular:
1. We need to extract the typedef'ed type for proper diagnostics
2. We need to walk over paren expressions as well
Reviewers: chandlerc, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6256
llvm-svn: 222044
VSX makes the "vector long long" and "vector double" types available.
This patch enables the vec_perm interface for these types. The same
builtin is generated regardless of the specified type, so no
additional work or testing is needed in the back end. Tests are added
to ensure this builtin is generated by the front end.
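For example (assumes VSX is enabled):

  #include <altivec.h>

  vector double permute(vector double a, vector double b,
                        vector unsigned char pat) {
    return vec_perm(a, b, pat);  /* now accepted for vector double */
  }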
llvm-svn: 221988
This patch adds builtin support for xvdivdp and xvdivsp, along with a
new test case. The builtins are accessed using vec_div in altivec.h.
Builtins are listed (mostly) alphabetically there, so inserting these
changed the line numbers for deprecation warnings tested in
test/Headers/altivec-intrin.c.
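A usage sketch (assumes VSX is enabled):

  #include <altivec.h>

  vector double divide(vector double a, vector double b) {
    return vec_div(a, b);  /* generates xvdivdp */
  }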
There is a companion patch for LLVM.
llvm-svn: 221984
Summary:
Consider the following nifty one-liner: (0 ? csqrtl(2.0f) : sqrtl(2.0f)). One can easily obtain such code from e.g. tgmath. Right now it produces an assertion failure because we fail to do the promotion real => _Complex real.
The case was properly handled previously (by the old handleOtherComplexFloatConversion routine), but was forgotten in the current version. This seems to be fallout from r219557.
Reviewers: chandlerc, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6217
llvm-svn: 221821
This patch enables the vec_vsx_ld and vec_vsx_st intrinsics for
PowerPC, which provide programmer access to the lxvd2x, lxvw4x,
stxvd2x, and stxvw4x instructions.
New code in altivec.h defines these in terms of new builtins, which
are themselves defined in BuiltinsPPC.def. The builtins are converted
to LLVM intrinsics in CGBuiltin.cpp. Additional code is added to
builtins-ppc-vsx.c to verify the correct generation of the intrinsics.
Note that I moved the other VSX builtins so all VSX builtins will be
alphabetical in their own section in BuiltinsPPC.def.
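A usage sketch (assumes VSX is enabled):

  #include <altivec.h>

  vector double load_vsx(const double *p) {
    return vec_vsx_ld(0, p);  /* lxvd2x */
  }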
There is a companion patch for LLVM.
llvm-svn: 221768
Summary: If we've added poisoned paddings to a type, do not emit memcpy for operator=.
Test Plan: regression tests.
Reviewers: majnemer, rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6160
llvm-svn: 221739
Summary:
This change makes CodeGenFunction::EmitCheck() take several
conditions that need to be checked (all of them need to be true),
together with the sanitizer kinds these checks are for. This allows
splitting one call into the UBSan runtime into several calls in case
different sanitizer kinds have different recoverability
settings.
Tests should be fixed accordingly; I'm working on it.
Test Plan: regression test suite.
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6219
llvm-svn: 221716
Homogeneous aggregates on AAPCS_VFP ARM need to be passed *without* being
flattened (e.g. [2 x float] rather than "float, float") for various weird ABI
reasons. However, this isn't the case for anything else; further, we know at
the ABIArgInfo::getDirect callsites whether this flattening is allowed.
So, we can get more unified ARM code, with a simpler Clang, by just using that
knowledge directly.
llvm-svn: 221559
mingw64's headers implement fabs by calling __builtin_fabs, so using the
library call results in an infinite loop. If the backend legalizes
@llvm.fabs as a call to fabs later, things should work out, as the crt
provides a definition.
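The cycle, in simplified form (a sketch of the mingw64 header, not its exact
contents):

  /* simplified mingw64 math.h */
  static __inline double fabs(double x) { return __builtin_fabs(x); }

  /* If clang lowered __builtin_fabs to a call to fabs(), this would
     recurse forever; emitting @llvm.fabs breaks the cycle. */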
llvm-svn: 221206
It turns out that MinGW never dllimports or exports inline functions.
This means that code compiled with Clang would fail to link with
MinGW-compiled libraries since we might try to import functions that
are not imported.
To fix this, make Clang never dllimport inline functions when targeting
MinGW.
llvm-svn: 221154
The most complex aspect of the convention is the handling of homogeneous
vector and floating point aggregates. Reuse the homogeneous aggregate
classification code that we use on PPC64 and ARM for this.
This convention also has a C mangling, and we apparently implement that
in both Clang and LLVM.
Reviewed By: majnemer
Differential Revision: http://reviews.llvm.org/D6063
llvm-svn: 221006