Support for the QPX vector instruction set, used on the IBM BG/Q supercomputer,
has recently been added to the LLVM PowerPC backend. This vector instruction
set requires some ABI modifications because the ABI on the BG/Q expects
<4 x double> vectors to be provided with 32-byte stack alignment, and to be
handled as native vector types (similar to how Altivec vectors are handled on
mainline PPC systems). I've named this ABI variant elfv1-qpx, have made this
the default ABI when QPX is supported, and have updated the ABI handling code
to provide QPX vectors with the correct stack alignment and associated
register-assignment logic.
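As a minimal sketch using Clang's generic vector extension (the typedef and function names are hypothetical, and this assumes the target gives the 32-byte vector its natural alignment):

  typedef double v4df __attribute__((vector_size(32)));

  // Under elfv1-qpx, a <4 x double> argument is treated as a native vector
  // type and, when spilled, receives 32-byte stack alignment.
  double first_lane(v4df v) { return v[0]; }

  static_assert(alignof(v4df) == 32, "QPX vectors expect 32-byte alignment");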
llvm-svn: 231960
This adds support for copy-constructor closures. These are generated
when the C++ runtime has to call a copy-constructor with a particular
calling convention or with default arguments substituted in to the call.
Because the runtime has no mechanism to call the function with a
different calling convention, nor any way to evaluate the default
arguments at run-time, we create a thunk which does all the
appropriate work and packages it in a way the runtime can use.
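As a rough illustration (the class and the closure's name are hypothetical; the real closure is compiler-synthesized with the convention the runtime expects):

  #include <new>

  struct S {
    int tag;
    S(const S &other, int t = 42) : tag(other.tag + t) {}
  };

  // The closure evaluates the default argument itself and then runs the
  // real copy constructor into runtime-provided storage.
  void copy_ctor_closure(S *dst, const S *src) {
    new (dst) S(*src, /*t=*/42);
  }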
Differential Revision: http://reviews.llvm.org/D8225
llvm-svn: 231952
This patch allows the use of ExprWithCleanups expressions and other complex expressions in the 'omp atomic' construct.
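For example (illustrative only), a temporary on the right-hand side whose destructor must run wraps the whole expression in an ExprWithCleanups:

  #include <string>

  int counter = 0;

  void bump() {
  #pragma omp atomic
    counter += (int)std::string("abc").size();  // temporary's cleanup => ExprWithCleanups
  }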
Differential Revision: http://reviews.llvm.org/D8200
llvm-svn: 231905
Because the catchable type has a reference to its name, mangle the
location to ensure that two catchable types with different locations are
distinct.
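A hypothetical case where the descriptors would otherwise collide:

  // Each function defines a distinct local class named 'E'; mangling the
  // source location into the catchable type keeps the two descriptors
  // distinct even though the class names match.
  void f() { struct E {}; throw E(); }
  void g() { struct E {}; throw E(); }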
llvm-svn: 231819
The task region is emitted in several steps (a pseudocode sketch of the full sequence follows the list):
1. Emit a call to kmp_task_t *__kmpc_omp_task_alloc(ident_t *, kmp_int32 gtid, kmp_int32 flags, size_t sizeof_kmp_task_t, size_t sizeof_shareds, kmp_routine_entry_t *task_entry). Here task_entry is a pointer to the function:
   kmp_int32 .omp_task_entry.(kmp_int32 gtid, kmp_task_t *tt) {
     TaskFunction(gtid, tt->part_id, tt->shareds);
     return 0;
   }
2. Copy the list of shared variables (if any) into the shareds field of the kmp_task_t structure returned by the previous call.
3. Copy a pointer to the destructors function into the destructions field of the resulting kmp_task_t structure.
4. Emit a call to kmp_int32 __kmpc_omp_task(ident_t *, kmp_int32 gtid, kmp_task_t *new_task), where new_task is the structure produced by the previous steps.
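Putting the steps together, the emitted sequence looks roughly like this pseudocode (loc, flags, the shareds copy, and the destructor name are placeholders):

  kmp_task_t *t = __kmpc_omp_task_alloc(&loc, gtid, flags,
                                        sizeof_kmp_task_t, sizeof_shareds,
                                        &omp_task_entry);        // step 1
  memcpy(t->shareds, captured_shareds, sizeof_shareds);          // step 2
  t->destructions = &omp_task_destructor;                        // step 3
  __kmpc_omp_task(&loc, gtid, t);                                // step 4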
Differential Revision: http://reviews.llvm.org/D7560
llvm-svn: 231762
This patch adds proper generation of debug info for all OpenMP regions. Also, all OpenMP regions are generated within a termination scope, because the standard does not allow throwing exceptions out of the structured blocks associated with OpenMP regions.
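For instance (illustrative; work() is a hypothetical function that may throw):

  void work();

  void run() {
  #pragma omp parallel
    {
      // If work() throws out of this structured block, the region's
      // termination scope calls std::terminate() rather than letting the
      // exception escape.
      work();
    }
  }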
Differential Revision: http://reviews.llvm.org/D7935
llvm-svn: 231757
This reverts commit r231752.
It was failing to link with cmake:
lib64/libclangCodeGen.a(CGOpenMPRuntime.cpp.o):/home/espindola/llvm/llvm/tools/clang/lib/CodeGen/CGOpenMPRuntime.cpp:function clang::CodeGen::InlinedOpenMPRegionRAII::~InlinedOpenMPRegionRAII(): error: undefined reference to 'clang::CodeGen::EHScopeStack::popTerminate()'
clang-3.7: error: linker command failed with exit code 1 (use -v to see invocation)
llvm-svn: 231754
This patch adds proper generation of debug info for all OpenMP regions. Also, all OpenMP regions are generated within a termination scope, because the standard does not allow throwing exceptions out of the structured blocks associated with OpenMP regions.
Differential Revision: http://reviews.llvm.org/D7935
llvm-svn: 231752
This is a recommit of r231150, reverted in r231409. It turns out
that the -fsanitize=shift-base check implementation only works if the
shift exponent is valid; otherwise the check itself contains undefined
behavior.
Make sure we check that the exponent is valid before we proceed to
check the base. Make sure that we actually report invalid values
of the base or exponent if -fsanitize=shift-base or
-fsanitize=shift-exponent is specified, respectively.
llvm-svn: 231711
When passing a type with large alignment byval, we were specifying the type's
alignment rather than the alignment that the backend is actually capable of
producing (ABIAlign).
This would be OK (if odd) assuming the backend dealt with it properly;
unfortunately it doesn't, and trying to pass types with "byval align 16" can
cause it to set the frame pointer incorrectly and trash the stack during the
prologue. I'll be
fixing that in a separate patch, but Clang should still be emitting IR that's
as close to its intent as possible.
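A minimal sketch of a trigger (hypothetical type; the exact IR alignment depends on the target's ABIAlign):

  struct alignas(32) OverAligned {
    double d[4];
  };

  // Passed byval: Clang now emits the alignment the backend can actually
  // realize for the argument rather than the type's declared 32-byte
  // alignment.
  void consume(OverAligned o);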
rdar://20059039
llvm-svn: 231706
I disabled putting the new global into the same COMDAT as the function for now.
There's a fundamental problem when we inline references to the global but still
have the global in a COMDAT linked to the inlined function. Since this is only
an optimization, there may be other versions of the COMDAT around that are
missing the new global, and hell breaks loose at link time.
I hope the Chromium build doesn't break this time :)
llvm-svn: 231564
This broke the Chromium build. Links were failing with messages like:
obj/dbus/libdbus_test_support.a(obj/dbus/dbus_test_support.mock_object_proxy.o):../../dbus/mock_object_proxy.cc:function dbus::MockObjectProxy::Detach(): warning: relocation refers to discarded section
/usr/local/google/work/chromium/src/third_party/binutils/Linux_x64/Release/bin/ld.gold: error: treating warnings as errors
llvm-svn: 231541
of extern "C" declarations. This is simpler and vastly more efficient for
modules builds (we no longer need to load *all* extern "C" declarations to
determine if we have a redeclaration).
No functionality change intended.
llvm-svn: 231538
Instead of creating a copy on the stack just stash them in a private
constant global. This saves both the copying overhead and the stack
space, and gives the optimizer more room to constant fold.
This tries to make array temporaries more similar to regular arrays;
they can't use exactly the same logic because a temporary has no VarDecl
to be bound to, so we roll our own version here.
The original use case for this optimization was code like
for (int i : {1, 2, 3, 4, 5, 6, 7, 8, 10})
foo(i);
where without this patch (assuming that the loop is not unrolled) we
would alloca an array on the stack, copy the values over, and
iterate on that. With this patch we put the array in .text and use it
directly. Apart from that case this helps on virtually any passing of
a constant std::initializer_list as a function argument.
Differential Revision: http://reviews.llvm.org/D8034
llvm-svn: 231508
Find all unambiguous public classes of the exception object's class type
and reference all of their copy constructors. Yes, this is not
conforming, but it is necessary in order to implement their ABI. This is
because the copy constructor is actually referenced by the metadata
describing which catch handlers are eligible to handle the exception
object.
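A small illustration with hypothetical classes:

  struct Base {
    Base(const Base &);
  };

  struct Derived : Base {
    Derived(const Derived &);
  };

  void raise(const Derived &d) {
    // The exception object may later be caught as Derived or as Base, so
    // the catchable-type metadata references both copy constructors.
    throw d;
  }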
N.B. This doesn't yet handle the copy-constructor closure case;
that work is ongoing.
Differential Revision: http://reviews.llvm.org/D8101
llvm-svn: 231499
It's not that easy. If we're only checking -fsanitize=shift-base we
still need to verify that the exponent has a sane value; otherwise the
UBSan-inserted checks for the base will contain undefined behavior
themselves.
llvm-svn: 231409
Throwing a C++ exception, under the MS ABI, is implemented using three
components:
- a ThrowInfo structure, which contains information like CV qualifiers,
  which destructor to call, and a pointer to the CatchableTypeArray (a
  rough sketch of these structures appears below).
- In a significant departure from the Itanium ABI, copying by-value
occurs in the runtime and not at the catch site. This means we need
to enumerate all possible types that this exception could be caught as
and encode the necessary information to convert from the exception
object's type to the catch handler's type. This includes complicated
derived to base conversions and the execution of copy-constructors.
N.B. This implementation doesn't support the execution of a
copy-constructor from within the runtime for now. Adding support for
that functionality is quite difficult due to things like default
argument expressions which may evaluate arbitrary code hiding in the
copy-constructor's parameters.
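A rough sketch of the metadata shape (field names are illustrative, not the exact MS ABI layout):

  struct CatchableType;  // per-type conversion info: offsets, copy ctor, ...

  struct CatchableTypeArray {
    int nCatchableTypes;
    const CatchableType *arrayOfCatchableTypes[1];  // variable length
  };

  struct ThrowInfo {
    unsigned attributes;                            // CV qualifiers, etc.
    void (*pmfnUnwind)(void *);                     // destructor to call
    const CatchableTypeArray *pCatchableTypeArray;  // all catchable types
  };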
Differential Revision: http://reviews.llvm.org/D8066
llvm-svn: 231328
Opt in Win64 to supporting sjlj lowering. We have the backend lowering,
so I think this was just an oversight because WinX86_64TargetCodeGenInfo
doesn't inherit from X86_64TargetCodeGenInfo.
llvm-svn: 231280
-fsanitize=shift is now a group that includes both of these checks, so
existing users should not be affected.
This change introduces two new UBSan kinds that sanitize only the left-hand
side or the right-hand side of a shift operation. In practice, an invalid
exponent value (negative or too large) tends to cause more portability
problems, including inconsistencies between different compilers, crashes,
and inadequate results on non-x86 architectures, etc. That is,
-fsanitize=shift-exponent failures should generally be addressed first.
As a bonus, this change simplifies CodeGen implementation for emitting left
shift (separate checks for base and exponent are now merged by the
existing generic logic in EmitCheck()), and LLVM IR for these checks
(the number of basic blocks is reduced).
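An illustrative example of what each check flags, assuming 32-bit int:

  int shift(int base, int exp) {
    // -fsanitize=shift-exponent flags exp < 0 or exp >= 32 here;
    // -fsanitize=shift-base flags a negative base (or a base whose
    // shifted value would overflow the type).
    return base << exp;
  }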
llvm-svn: 231150
Originally we were using GCC builtins to lower this AVX2 vector
intrinsic. Instead we now lower it directly to a vector shuffle.
This will not only allow LLVM to generate better code, but it will also allow us
to remove the GCC intrinsics.
Reviewed by Andrea
This is related to rdar://problem/18742778.
llvm-svn: 231081
isSingleElementStruct was a bit too tight in its definition of a struct,
so we got a mismatch between classify() and the actual code generation.
To make matters worse, the code in GetByteVectorType still defaulted to
<2 x double> if it encountered a type it didn't know, making this a
silent miscompilation (PR22753).
Completely remove the "preferred type" stuff from GetByteVectorType and
make it fail an assertion if someone tries to use it with a type not
suitable for a vector register.
llvm-svn: 230971
When generating debug info for a static inline member which is initialized for
the DLLExport storage class, hoist the definition into a non-composite type
context. Otherwise, we would trigger an assertion when generating the DIE for
the associated global value, as the debug context has a type association. This
addresses PR22669.
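A sketch of the kind of declaration involved (hypothetical, MSVC mode):

  struct __declspec(dllexport) Exported {
    // The initialized static member's definition is emitted for the
    // DLL export; its debug info is now hoisted out of the composite
    // type's context.
    static const int kValue = 42;
  };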
Thanks to David Blaikie for help in coming up with a solution to this!
llvm-svn: 230816
For a global register lvalue, use a regular store through the global register.
For a simple lvalue, use a simple atomic store.
For bitfields, vector elements, and extended vector elements: the original value of the whole storage (for vector elements) or of some aligned value (for bitfields) is read atomically, the part of this value corresponding to the given lvalue is modified, and an atomic compare-and-exchange operation is then used to try to write the modified value back (succeeding only if the storage was not modified in the meantime).
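A minimal C++ sketch of the bitfield path (Offset and Width are placeholders; assumes Width < 32):

  #include <atomic>

  // Atomically store 'val' into bits [Offset, Offset + Width) of 'storage',
  // leaving the surrounding bits untouched.
  template <unsigned Offset, unsigned Width>
  void atomic_bitfield_store(std::atomic<unsigned> &storage, unsigned val) {
    const unsigned mask = ((1u << Width) - 1u) << Offset;
    unsigned expected = storage.load();
    unsigned desired;
    do {
      desired = (expected & ~mask) | ((val << Offset) & mask);
    } while (!storage.compare_exchange_weak(expected, desired));
  }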
Also, the changes in this patch fix a bug with '#pragma omp atomic read' applied to extended vector elements.
Differential Revision: http://reviews.llvm.org/D7369
llvm-svn: 230736
The __finally emission code tries to be clever by removing unused continuation
edges when there's an unconditional jump out of the __finally block. With
exception edges, the EH continuation edge isn't always unused, though, and we'd
crash in a few places.
Just don't be clever. That makes the IR for __finally blocks a bit longer in
some cases (hence the small, behavior-preserving changes to existing tests), but
it makes no difference in general, and it fixes the last crash from PR22553.
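The problematic shape, roughly (may_fault() is hypothetical; requires MS extensions):

  void may_fault();

  void demo() {
    __try {
      may_fault();
    } __finally {
      return;  // unconditional jump out of the __finally: the normal
               // continuation edge is unused, but the EH edge may not be
    }
  }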
http://reviews.llvm.org/D7918
llvm-svn: 230697
Use the newly minted `DIImportedEntity` default constructor (r230609)
rather than explicitly specifying `nullptr`. The latter will become
ambiguous when the new debug info hierarchy is committed, since we'll
have both of the following:
  explicit DIImportedEntity(const MDNode *);
  DIImportedEntity(const MDImportedEntity *);
(Currently we just have the former.)
A default constructor is just as clear.
llvm-svn: 230610
Do not declare sized deallocation functions depending on whether they are found in the global scope. Instead, enforce the branching in the emitted code by (1) declaring the functions extern_weak and (2) emitting sized delete expressions as a branch between both forms of delete.
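A hand-written analogue of the emitted branching (illustrative; in the real emission the null test is on the extern_weak symbol's address, which standard C++ would never consider null):

  #include <cstddef>
  #include <new>

  void delete_with_size(void *p, std::size_t n) {
    // With the sized operator delete declared extern_weak, its address is
    // null when no definition exists at link time.
    void (*sized)(void *, std::size_t) = &::operator delete;
    if (sized)
      sized(p, n);           // sized form is available: use it
    else
      ::operator delete(p);  // fall back to the unsized form
  }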
llvm-svn: 230580
Original CL description:
Produce less broken basic block sequences for __finally blocks.
The way cleanups (such as PerformSEHFinally) get emitted is that codegen
generates some initialization code, then calls the cleanup's Emit() with the
insertion point set to a good place, then the cleanup is supposed to emit its
stuff, and then codegen might tack in a jump or similar to where the insertion
point is after the cleanup.
The PerformSEHFinally cleanup tries to just stash away the block it's supposed
to codegen into, and then does codegen later, into that stashed block. However,
after codegen'ing the __finally block, it used to set the insertion point to
the finally's continuation block (where the __finally cleanup goes when its body
is completed after regular, non-exceptional control flow). That's not correct,
as that block can (and generally does) already end in a jump. Instead,
remember the insertion point that was current before the __finally got emitted,
and restore that.
Fixes two of the crashes in PR22553.
llvm-svn: 230503
The way cleanups (such as PerformSEHFinally) get emitted is that codegen
generates some initialization code, then calls the cleanup's Emit() with the
insertion point set to a good place, then the cleanup is supposed to emit its
stuff, and then codegen might tack in a jump or similar to where the insertion
point is after the cleanup.
The PerformSEHFinally cleanup tries to just stash away the block it's supposed
to codegen into, and then does codegen later, into that stashed block. However,
after codegen'ing the __finally block, it used to set the insertion point to
the finally's continuation block (where the __finally cleanup goes when its body
is completed after regular, non-exceptional control flow). That's not correct,
as that block can (and generally does) already end in a jump. Instead,
remember the insertion point that was current before the __finally got emitted,
and restore that.
Fixes two of the crashes in PR22553.
llvm-svn: 230460