simd`.
Added host codegen, plus device codegen using the default scheme, for
the `#pragma omp target teams distribute parallel for simd` directive.
llvm-svn: 322515
This alignment can be less than 4 on certain embedded targets, which may
not even be able to deal with 4-byte alignment on the stack.
Patch by Jacob Young!
llvm-svn: 322406
getAssociatedStmt() returns the outermost captured statement for the
OpenMP directive. It may return an incorrect region in the case of
combined constructs. Reworked the code to reduce the number of calls to
getAssociatedStmt(), using the getInnermostCapturedStmt() and
getCapturedStmt() functions instead.
In the case of firstprivate variables, this could lead to the
generation of extra allocas for private copies even if the variable is
passed by value into the outlined function and could be used directly
as the private copy.
llvm-svn: 322393
GCC's attribute 'target', in addition to being an optimization hint,
also allows function multiversioning. We currently have the former
implemented; this patch implements the latter.
This works by allowing functions with the same name/signature to
coexist, so that they can all be emitted. Multiversion state is stored
in the FunctionDecl itself, and SemaDecl manages the definitions.
Note that this requires permitting what would otherwise be
redefinitions of a function, and that all versions of the function must
be emitted, so this patch manages that as well.
Note that this includes some additional rules that GCC does not have:
defining something as a multiversion function after a use has already
been made is illegal. The only 'history rewriting' that happens is when
a function is emitted before it has been converted to a multiversion
function, at which point its name needs to be changed.
Function templates and virtual functions are NOT yet supported (not supported
in GCC either).
Additionally, constructors/destructors are disallowed, but the former is
planned.
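For illustration, the GCC-style usage this enables looks roughly like
this (a sketch, not code from the patch):
__attribute__((target("default"))) int dispatch() { return 0; }
__attribute__((target("sse4.2"))) int dispatch() { return 1; }
__attribute__((target("avx2")))   int dispatch() { return 2; }
// All versions coexist and are emitted; a caller simply writes:
int call() { return dispatch(); }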
llvm-svn: 322028
only.
Added support for the -fopenmp-simd option, which allows compilation of
simd-based constructs without emitting OpenMP runtime calls.
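A minimal sketch of the intended use, built with -fopenmp-simd (the
function is illustrative):
// The simd pragma is still honored, but no calls into the OpenMP
// runtime library are emitted.
void saxpy(int n, float a, const float *x, float *y) {
#pragma omp simd
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];
}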
llvm-svn: 321560
...when such an operation is done on an object during con-/destruction.
This is the cfe part of a patch covering both cfe and compiler-rt.
Differential Revision: https://reviews.llvm.org/D40295
llvm-svn: 321519
Diagnose 'unreachable' UB when a noreturn function returns.
1. Insert a check at the end of functions marked noreturn.
2. A decl may be marked noreturn in the caller TU, but not marked in
the TU where it's defined. To diagnose this scenario, strip away the
noreturn attribute on the callee and insert a check after calls to it.
Both scenarios are sketched below.
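Code sketch (names are hypothetical):
#include <cstdlib>
// (1) A check is inserted at the end of this noreturn function's body;
// falling off the end triggers the diagnostic.
[[noreturn]] void buggy(int x) {
  if (x)
    std::abort();
}
// (2) The declaration here is noreturn, but the definition in another
// TU may not be; the attribute is stripped and a check follows the call.
[[noreturn]] void external_halt();
void caller() { external_halt(); }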
Testing: check-clang, check-ubsan, check-ubsan-minimal, D40700
rdar://33660464
Differential Revision: https://reviews.llvm.org/D40698
llvm-svn: 321231
Summary:
The -fxray-always-emit-customevents flag instructs clang to always emit
the LLVM IR for calls to the `__xray_customevent(...)` built-in
function. The default behaviour currently respects whether the function
has an `[[clang::xray_never_instrument]]` attribute, and thus does not
lower the appropriate IR code for the custom event built-in.
This change allows users calling through to the
`__xray_customevent(...)` built-in to always see those calls lowered to
the corresponding LLVM IR to lay down instrumentation points for these
custom event calls.
Using this flag lets us emit just the user-provided custom events while
never instrumenting the start/end of the function where they appear.
This is useful for "phase markers" implemented with
__xray_customevent(...): such functions can consist of very few
instructions and must never be instrumented on entry/exit.
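For illustration, a hypothetical phase marker that benefits from the
new flag:
// Built with -fxray-always-emit-customevents: the custom event call is
// still lowered even though the function itself is never instrumented.
[[clang::xray_never_instrument]] void phase_marker() {
  __xray_customevent("phase", 5);
}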
Reviewers: rnk, dblaikie, kpw
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D40601
llvm-svn: 319388
This updates -mcount to use the new attribute names (LLVM r318195), and
switches over -finstrument-functions to also use these attributes rather
than inserting instrumentation in the frontend.
It also adds a new flag, -finstrument-functions-after-inlining, which
makes the cygprofile instrumentation get inserted after inlining rather
than before.
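For reference, a sketch of the standard cygprofile hooks the
instrumentation calls:
// With -finstrument-functions, every function entry/exit calls these;
// with -finstrument-functions-after-inlining, fully inlined callees no
// longer produce their own enter/exit calls.
extern "C" void __cyg_profile_func_enter(void *fn, void *call_site);
extern "C" void __cyg_profile_func_exit(void *fn, void *call_site);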
Differential Revision: https://reviews.llvm.org/D39331
llvm-svn: 318199
Summary:
We don't want the cleanup destination slot to be saved into the coroutine
frame (as some of the cleanup code may access it after the coroutine frame
is destroyed).
This is an alternative to https://reviews.llvm.org/D37093
It is possible to do this for all functions, but a cursory check showed
that at -O0 we get slightly longer functions (by 1-3 instructions), so we
limit cleanup.dest.slot elimination to coroutines.
Reviewers: rjmccall, hfinkel, eric_niebler
Reviewed By: eric_niebler
Subscribers: EricWF, cfe-commits
Differential Revision: https://reviews.llvm.org/D39768
llvm-svn: 317981
This patch fixes various places in clang to propagate may-alias
TBAA access descriptors during construction of lvalues, thus
eliminating the need for the LValueBaseInfo::MayAlias flag.
This is part of D38126 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D39008
llvm-svn: 316988
This patch addresses the rest of the cases where we pass lvalue
base info, but do not provide corresponding TBAA info.
This patch should not bring in any functional changes.
This is part of D38126 reworked to be a separate patch to make
reviewing easier.
Differential Revision: https://reviews.llvm.org/D38945
llvm-svn: 315986
In OpenCL, kernel functions and non-kernel functions have different calling
conventions. For certain targets they have different argument ABIs. Kernels
also carry special function attributes and metadata that the runtime uses
to launch them.
Blocks passed to enqueue_kernel are supposed to be executed as kernels. As
such, the block invoke function should be emitted as a kernel with the
proper calling convention and argument ABI.
This patch emits enqueued blocks as kernels. If a block is both called
directly and passed to enqueue_kernel, separate functions are generated.
Differential Revision: https://reviews.llvm.org/D38134
llvm-svn: 315804
This patch enables explicit generation of TBAA information in all
cases where LValue base info is propagated or constructed in
non-trivial ways. Eventually, we will consider each of these
cases to make sure the TBAA information is correct and not too
conservative. For now, we just fall back to generating TBAA info
from the access type.
This patch should not bring in any functional changes.
This is part of D38126 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38733
llvm-svn: 315575
Besides the obvious code simplification, avoiding explicit creation
of LValueBaseInfo objects makes it easier to make TBAA information
part of such objects.
This is part of D38126 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38695
llvm-svn: 315289
The Cpu Init functionality is required for the target
attribute, so this patch simply splits it out into its own
function, exactly like CpuIs and CpuSupports.
llvm-svn: 315075
This patch is an attempt to clarify and simplify generation and
propagation of TBAA information. The idea is to pack all values
that describe a memory access, namely, base type, access type and
offset, into a single structure. This is supposed to make further
changes, such as adding support for unions and array members,
easier to prepare and review.
DecorateInstructionWithTBAA() is no longer responsible for
converting types to tags. These implicit conversions not only
complicated reading the code, but also suggested assigning scalar
access tags where we generally prefer full-size struct-path tags.
TBAAPathTag is replaced with TBAAAccessInfo; the latter is now
the type of the keys of the cache map that translates access
descriptors to metadata nodes.
Fixed a bug where we wrote to the wrong map in
getTBAABaseTypeMetadata() (formerly getTBAAStructTypeInfo()).
We now check for valid base access types every time we dereference
a field; the original code only checked the top-level base type.
See the isValidBaseType() / isTBAAPathStruct() calls.
Some entities have been renamed to be clearer and less
confusing/misleading in the presence of path-aware TBAA information.
We no longer look up the same cache entry twice in
getAccessTagInfo().
Refined relevant comments and descriptions.
Differential Revision: https://reviews.llvm.org/D37826
llvm-svn: 315048
code size.
Currently clang expands a call to __builtin_os_log_format into a long
sequence of instructions at the call site, causing code size to
increase in some cases.
This commit attempts to reduce code size by emitting a helper function
that can be shared by calls to __builtin_os_log_format with similar
formats and arguments. The helper function has linkonce_odr linkage to
enable the linker to merge identical functions across translation units.
Attribute 'noinline' is attached to the helper function at -Oz so that
the inliner doesn't inline functions that can potentially be merged.
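For illustration, the usual usage pattern of the builtins (a sketch;
the helper emission itself happens inside IRGen):
void log_pair(int x, const char *s) {
  // Size the buffer for this format/argument layout, then fill it.
  // Calls elsewhere with a matching layout can share one linkonce_odr
  // helper function.
  char buf[__builtin_os_log_format_buffer_size("%d: %s", x, s)];
  __builtin_os_log_format(buf, "%d: %s", x, s);
}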
This commit also fixes a bug where the generated IR writes past the end
of the buffer when "%m" is the last specifier appearing in the format
string passed to __builtin_os_log_format.
Original patch by Duncan Exon Smith.
rdar://problem/34065973
rdar://problem/34196543
Differential Revision: https://reviews.llvm.org/D38606
llvm-svn: 315045
This patch makes it possible to produce access tags in a uniform
manner regardless of whether the resulting tag will be a scalar or a
struct-path one. getAccessTagInfo() now takes care of the actual
translation of access descriptors to tags and can handle all
kinds of accesses. Facilities specific to scalar accesses
are eliminated.
Some more details:
* DecorateInstructionWithTBAA() is no longer responsible for converting
types to access tags. Instead, it takes an access descriptor
(TBAAAccessInfo) and generates the corresponding access tag from it.
* getTBAAInfoForVTablePtr() is reworked into
getTBAAVTablePtrAccessInfo(), which now returns the virtual-pointer
access descriptor rather than the virtual-pointer type metadata.
* Added function getTBAAMayAliasAccessInfo() that returns the
descriptor for may-alias accesses.
* getTBAAStructTagInfo() is renamed to getTBAAAccessTagInfo(), as it is
now the only way to generate an access tag from a given access
descriptor. It is capable of producing both scalar and struct-path
tags, depending on options and the availability of the base access
type. getTBAAScalarTagInfo() and its cache ScalarTagMetadataCache
are eliminated.
* Now that we do not need to care about whether the resulting
access tag should be a scalar or struct-path one,
getTBAAStructTypeInfo() is renamed to getBaseTypeInfo().
* Added function getTBAAAccessInfo() that constructs an access
descriptor from a given QualType access type.
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38503
llvm-svn: 314977
With this patch we implement a concept of TBAA access descriptors
that are capable of representing both scalar and struct-path
accesses in a generic way.
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38456
llvm-svn: 314780
This patch fixes misleading names of entities related to getting,
setting and generation of TBAA access type descriptors.
This is effectively an attempt to provide a review for D37826 by
breaking it into smaller pieces.
Differential Revision: https://reviews.llvm.org/D38404
llvm-svn: 314657
directives.
If a variable is used in a target-based region but does not appear in
any private or mapping clause, generate an implicit firstprivate or map
clause for it.
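A sketch of the effect (variable names are illustrative):
void f() {
  int scalar = 1;
  int arr[4] = {0, 0, 0, 0};
#pragma omp target // no explicit data clauses
  {
    // 'scalar' gets an implicit firstprivate clause and 'arr' an
    // implicit map clause, as if they had been written explicitly.
    int t = scalar + arr[0];
    (void)t;
  }
}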
llvm-svn: 314205
body of global block invoke functions.
This commit fixes an infinite loop in IRGen that occurs when compiling
the following code:
void FUNC2() {
  static void (^const block1)(int) = ^(int a) {
    if (a--)
      block1(a);
  };
}
This is how IRGen gets stuck in the infinite loop:
1. GenerateBlockFunction is called to emit the body of "block1".
2. GetAddrOfGlobalBlock is called to get the address of "block1". The
function calls getAddrOfGlobalBlockIfEmitted to check whether the
global block has been emitted. If it hasn't been emitted, it then
tries to emit the body of the block function by calling
GenerateBlockFunction, which goes back to step 1.
This commit prevents the infinite loop by building the global block in
GenerateBlockFunction before emitting the body of the block function.
rdar://problem/34541684
Differential Revision: https://reviews.llvm.org/D38118
llvm-svn: 314029
If the captured variable has a re-declaration, we may end up in a
situation where the captured variable is the re-declaration while the
referenced variable is the canonical declaration (or vice versa). In
this case we may generate wrong code. This patch fixes that situation.
llvm-svn: 313995
This change will make it possible to use -fsanitize=function on Darwin and
possibly on other platforms. It fixes an issue with the way RTTI is stored into
function prologue data.
On Darwin, addresses stored in prologue data can't require run-time fixups and
must be PC-relative. Run-time fixups are undesirable because they necessitate
writable text segments, which can lead to security issues. And absolute
addresses are undesirable because they break PIE mode.
The fix is to create a private global which points to the RTTI, and then to
encode a PC-relative reference to the global into prologue data.
Differential Revision: https://reviews.llvm.org/D37597
llvm-svn: 313096
Summary:
1. Updated annotations in
include/clang/StaticAnalyzer/Core/PathSensitive/Store.h that belonged
to an old version of clang.
2. Deleted annotations for CodeGenFunction::getEvaluationKind() in
clang/lib/CodeGen/CodeGenFunction.h that belonged to an old version
of clang.
Reviewers: bkramer, krasimir, klimek
Reviewed By: bkramer
Subscribers: MTC
Differential Revision: https://reviews.llvm.org/D36330
Contributed by @MTC!
llvm-svn: 312790
Summary:
As attributed statements are considered simple statements, no
stoppoint was generated before emitting an attributed
do/while/for/range statement. This led to faulty debug locations.
Reviewers: echristo, aaron.ballman, dblaikie
Reviewed By: dblaikie
Subscribers: bjope, aprantl, cfe-commits
Differential Revision: https://reviews.llvm.org/D37428
llvm-svn: 312623
"target" implementation
A small set of refactors that'll make it easier for me to implement 'target'
support.
First, it extracts the CPUSupports functionality into its own function.
This has the advantage of not wasting time in this builtin dealing with
arguments.
Second, it pulls both the CPUSupports and CPUIs implementations into
member functions, so that they can be called from the resolver
generation that I'm working on.
Third, it creates an overload that takes simply the feature/cpu name
(rather than extracting it from a CallExpr), since that info isn't
available later.
Note that despite how the 'diff' looks, the EmitX86CPUSupports function
simply takes the implementation out of the 'switch'.
llvm-svn: 312355
expressions
C++ allows us to reference static variables through member expressions. Prior to
this commit, non-integer static variables that were referenced using a member
expression were always emitted using lvalue loads. The old behaviour introduced
an inconsistency between regular uses of static variables and member
expression uses. For example, the following program compiled and linked
successfully:
struct Foo {
  constexpr static const char *name = "foo";
};
int main() {
  return Foo::name[0] == 'f';
}
but this program failed to link because "Foo::name" wasn't found:
struct Foo {
  constexpr static const char *name = "foo";
};
int main() {
  Foo f;
  return f.name[0] == 'f';
}
This commit ensures that constant static variables referenced through member
expressions are emitted in the same way as ordinary static variable references.
rdar://33942261
Differential Revision: https://reviews.llvm.org/D36876
llvm-svn: 311772
of class fails to map class static variable.
If a global variable is captured and has several redeclarations, it can
sometimes lead to a compiler crash. The patch fixes this by working
only with canonical declarations.
llvm-svn: 311479
If a worksharing construct has at least one linear item, an implicit
synchronization point must be emitted to avoid possible conflicts with
loading/storing values to the original variables. Added an implicit
barrier before the actual start of the worksharing construct when a
linear item is found.
llvm-svn: 311013
We don't need special handling in CodeGenFunction::GenerateCode for
lambda block pointer conversion operators anymore. The conversion
operator emission code immediately calls back to the generic
EmitFunctionBody.
Rename EmitLambdaStaticInvokeFunction to EmitLambdaStaticInvokeBody for
better consistency with the other Emit*Body methods.
I'm preparing to do something about PR28299, which touches this code.
llvm-svn: 310145
On some targets, passing zero to the clz() or ctz() builtins has undefined
behavior. I ran into this issue while debugging UB in __hash_table from libcxx:
the bug I was seeing manifested itself differently under -O0 vs -Os, due to a
UB call to clz() (see: libcxx/r304617).
This patch introduces a check which can detect UB calls to builtins.
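For example (a sketch):
int trailing_zeros(unsigned x) {
  // Passing zero here is undefined behavior on some targets; the new
  // check reports it at runtime.
  return __builtin_ctz(x);
}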
llvm.org/PR26979
Differential Revision: https://reviews.llvm.org/D34590
llvm-svn: 309459
When an omp for loop is canceled, the constructed objects are
destructed twice.
It looks like the desired code is:
{
  Obj o;
  if (cancelled) branch-through-cleanups to cancel.exit.
}
[cleanups]
cancel.exit:
  __kmpc_for_static_fini
  br cancel.cont (*)
cancel.cont:
  __kmpc_barrier
  return
The problem is that the branch to cancel.cont currently also goes
through the cleanups, calling them again. This change does a direct
branch instead.
Patch By: michael.p.rice@intel.com
Differential Revision: https://reviews.llvm.org/D35854
llvm-svn: 309288
The initializer for a static local variable cannot be hot, because it runs at
most once per program. That's not quite the same thing as having a low branch
probability, but under the assumption that the function is invoked many times,
modeling this as a branch probability seems reasonable.
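For illustration ('expensive_init' is hypothetical):
int expensive_init();
int &instance() {
  // The guard branch around this initialization is now weighted as
  // unlikely: the init side runs at most once per program.
  static int value = expensive_init();
  return value;
}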
For TLS variables, the situation is less clear, since the initialization side
of the branch can run multiple times in a program execution, but we still
expect initialization to be rare relative to non-initialization uses. It would
seem worthwhile to add a PGO counter along this path to make this estimation
more accurate in future.
For globals with guarded initialization, we don't yet apply any branch weights.
Due to our use of COMDATs, the guard will be reached exactly once per DSO, but
we have no idea how many DSOs will define the variable.
llvm-svn: 309195
The pointer overflow check gives false negatives when dealing with
expressions in which an unsigned value is subtracted from a pointer.
This is summarized in PR33430 [1]: ubsan permits the result of the
subtraction to be greater than "p", but it should not.
To fix the issue, we should track whether or not the pointer expression
is a subtraction. If it is, and the indices are unsigned, we know to
expect "p - <unsigned> <= p".
I've tested this by running check-{llvm,clang} with a stage2
ubsan-enabled build. I've also added some tests to compiler-rt, which
are in D34122.
[1] https://bugs.llvm.org/show_bug.cgi?id=33430
Differential Revision: https://reviews.llvm.org/D34121
llvm-svn: 307955
devirtualized.
The code to detect devirtualized calls is already in IRGen, so move the
code to lib/AST and make it a shared utility between Sema and IRGen.
This commit fixes a linkage error I was seeing when compiling the
following code:
$ cat test1.cpp
struct Base {
  virtual void operator()() {}
};
template<class T>
struct Derived final : Base {
  void operator()() override {}
};
Derived<int> *d;
int main() {
  if (d)
    (*d)();
  return 0;
}
rdar://problem/33195657
Differential Revision: https://reviews.llvm.org/D34301
llvm-svn: 307883
Certain targets (e.g. amdgcn) require global variables to stay in the
global or constant address space. In C and C++, global variables are
emitted in the default (generic) address space. This patch introduces
the virtual functions TargetCodeGenInfo::getGlobalVarAddressSpace and
TargetInfo::getConstantAddressSpace to handle this in a general way.
It only affects the IR generated for the amdgcn target.
Differential Revision: https://reviews.llvm.org/D33842
llvm-svn: 307470
This patch makes ubsan's nonnull return value diagnostics more precise,
which makes the diagnostics more useful when there are multiple return
statements in a function. Example:
1 | __attribute__((returns_nonnull)) char *foo() {
2 |   if (...) {
3 |     return expr_which_might_evaluate_to_null();
4 |   } else {
5 |     return another_expr_which_might_evaluate_to_null();
6 |   }
7 | } // <- The current diagnostic always points here!
runtime error: Null returned from Line 7, Column 2!
With this patch, the diagnostic would point to either Line 3, Column 5
or Line 5, Column 5.
This is done by emitting source location metadata for each return
statement in a sanitized function. The runtime is passed a pointer to
the appropriate metadata so that it can prepare and deduplicate reports.
Compiler-rt patch (with more tests): https://reviews.llvm.org/D34298
Differential Revision: https://reviews.llvm.org/D34299
llvm-svn: 306163
In C++ all variables are in the default address space. A previous
change cast automatic variables to the default address space, but that
is not sufficient, since all temporary variables need to be cast to the
default address space as well.
This patch casts all temporary variables to the default address space,
except those used for passing indirect arguments, since those are only
used for loads/stores.
This patch only affects targets with a non-zero alloca address space.
Differential Revision: https://reviews.llvm.org/D33706
llvm-svn: 305711
Summary:
The title says it all.
Reviewers: GorNishanov, rsmith
Reviewed By: GorNishanov
Subscribers: rjmccall, cfe-commits
Differential Revision: https://reviews.llvm.org/D34194
llvm-svn: 305496
Adding an unsigned offset to a base pointer has undefined behavior if
the result of the expression would precede the base. An example from
@regehr:
int foo(char *p, unsigned offset) {
  return p + offset >= p; // This may be optimized to '1'.
}
foo(p, -1); // UB.
This patch extends the pointer overflow check in ubsan to detect invalid
unsigned pointer index expressions. It changes the instrumentation to
only permit non-negative offsets in pointer index expressions when all
of the GEP indices are unsigned.
Testing: check-llvm, check-clang run on a stage2, ubsan-instrumented
build.
Differential Revision: https://reviews.llvm.org/D33910
llvm-svn: 305216
Check pointer arithmetic for overflow.
For some more background on this check, see:
https://wdtz.org/catching-pointer-overflow-bugs.html
https://reviews.llvm.org/D20322
Patch by Will Dietz and John Regehr!
This version of the patch is different from the original in a few ways:
- It introduces the EmitCheckedInBoundsGEP utility which inserts
checks when the pointer overflow check is enabled.
- It does some constant-folding to reduce instrumentation overhead.
- It does not check some GEPs in CGExprCXX. I'm not sure that
inserting checks here, or in CGClass, would catch many bugs.
Possible future directions for this check:
- Introduce CGF.EmitCheckedStructGEP, to detect overflows when
accessing structures.
Testing: Apart from the added lit test, I ran check-llvm and check-clang
with a stage2, ubsan-instrumented clang. Will and John have also done
extensive testing on numerous open source projects.
Differential Revision: https://reviews.llvm.org/D33305
llvm-svn: 304459
The functions creating LValues propagated information about alignment
source. Extend the propagated data to also include information about
possible unrestricted aliasing. A new class LValueBaseInfo will
contain both AlignmentSource and MayAlias info.
This patch should not introduce any functional changes.
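A rough sketch of the new class's shape (not the exact declaration from
the patch):
enum class AlignmentSource { Decl, AttributedType, Type };

class LValueBaseInfo {
  AlignmentSource AlignSource;
  bool MayAlias;

public:
  explicit LValueBaseInfo(AlignmentSource Source = AlignmentSource::Type,
                          bool Alias = false)
      : AlignSource(Source), MayAlias(Alias) {}
  AlignmentSource getAlignmentSource() const { return AlignSource; }
  bool getMayAlias() const { return MayAlias; }
};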
Differential Revision: https://reviews.llvm.org/D33284
llvm-svn: 303358
creation that are const-qualified.
When a block captures an ObjC object pointer, clang retains the pointer
to prevent prematurely destroying the object the pointer points to
before the block is called or copied.
When the captured object pointer is const-qualified, we can avoid
emitting the retain/release pair since the pointer variable cannot be
modified in the scope in which the block literal is introduced.
For example:
void test(const id x) {
  callee(^{ (void)x; });
}
This patch implements that optimization.
rdar://problem/28894510
Differential Revision: https://reviews.llvm.org/D32601
llvm-svn: 301667
[OpenMP] Initial implementation of code generation for pragma 'distribute parallel for' on host
https://reviews.llvm.org/D29508
This patch makes the following additions:
1. It abstracts away loop bound generation code from procedures associated with pragma 'for' and loops in general, in such a way that the same procedures can be used for 'distribute parallel for' without the need for a full re-implementation.
2. It implements code generation for 'distribute parallel for' and adds regression tests, including tests for clauses.
It is important to notice that most of the clauses are implemented as part of existing procedures. For instance, firstprivate is already implemented for 'distribute' and 'for' as separate pragmas. As the implementation of 'distribute parallel for' is based on the same procedures, we automatically obtain an implementation of such clauses without the need to add new code. However, this requires regression tests that verify the correctness of the produced code.
llvm-svn: 301340
https://reviews.llvm.org/D29508
This patch makes the following additions:
1. It abstracts away loop bound generation code from procedures associated with pragma 'for' and loops in general, in such a way that the same procedures can be used for 'distribute parallel for' without the need for a full re-implementation.
2. It implements code generation for 'distribute parallel for' and adds regression tests, including tests for clauses.
It is important to notice that most of the clauses are implemented as part of existing procedures. For instance, firstprivate is already implemented for 'distribute' and 'for' as separate pragmas. As the implementation of 'distribute parallel for' is based on the same procedures, we automatically obtain an implementation of such clauses without the need to add new code. However, this requires regression tests that verify the correctness of the produced code.
Looking forward to comments.
llvm-svn: 301223
This patch teaches ubsan to insert an alignment check for the 'this'
pointer at the start of each method/lambda. This allows clang to emit
significantly fewer alignment checks overall, because if 'this' is
aligned, so are its fields.
This is essentially the same thing r295515 does, but for the alignment
check instead of the null check. One difference is that we keep the
alignment checks on member expressions where the base is a DeclRefExpr.
There's an opportunity to diagnose unaligned accesses in this situation
(as pointed out by Eli, see PR32630).
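For illustration (a hypothetical type):
struct Vec {
  double x, y;
  // A single alignment check on 'this' at function entry covers the
  // member loads of 'x' and 'y'; no per-member checks are emitted.
  double norm2() { return x * x + y * y; }
};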
Testing: check-clang, check-ubsan, and a stage2 ubsan build.
Along with the patch from D30285, this roughly halves the amount of
alignment checks we emit when compiling X86FastISel.cpp. Here are the
numbers from patched/unpatched clangs based on r298160.
------------------------------------------
| Setup | # of alignment checks |
------------------------------------------
| unpatched, -O0 | 24326 |
| patched, -O0 | 12717 | (-47.7%)
------------------------------------------
Differential Revision: https://reviews.llvm.org/D30283
llvm-svn: 300370
Previously __cfi_check was created in the LTO optimization pipeline, which
means LLD has no way of knowing about the existence of this symbol
without rescanning the LTO output object. As a result, LLD fails to
export __cfi_check, even when given --export-dynamic-symbol flag.
llvm-svn: 299806
GCC has the alloc_align attribute, which is similar to assume_aligned, except that the attribute's parameter is the index of the integer parameter whose value the result is aligned to.
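A usage sketch ('my_aligned_alloc' is hypothetical):
// Parameter 2 holds the alignment of the returned pointer.
void *my_aligned_alloc(unsigned size, unsigned align)
    __attribute__((alloc_align(2)));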
Differential Revision: https://reviews.llvm.org/D29599
llvm-svn: 299117
Details:
Emit the suspend expression, which roughly looks like:
auto &&x = CommonExpr();
if (!x.await_ready()) {
  llvm_coro_save();
  x.await_suspend(...);  (*)
  llvm_coro_suspend();   (**)
}
x.await_resume();
where the result of the entire expression is the result of x.await_resume()
(*) If x.await_suspend's return type is bool, it can veto the suspend:
if (x.await_suspend(...))
  llvm_coro_suspend();
(**) llvm_coro_suspend() encodes three possible continuations as a switch instruction:
%where-to = call i8 @llvm.coro.suspend(...)
switch i8 %where-to, label %coro.ret [ ; jump to epilogue to suspend
  i8 0, label %yield.ready   ; go here when resumed
  i8 1, label %yield.cleanup ; go here when destroyed
]
llvm-svn: 298784
This is a follow-up to r297700 (Add a nullability sanitizer).
It addresses some FIXME's re: using nullability-specific diagnostic
handlers from compiler-rt, now that the necessary handlers exist.
check-ubsan test updates to follow.
llvm-svn: 297750
Teach UBSan to detect when a value with the _Nonnull type annotation
assumes a null value. Call expressions, initializers, assignments, and
return statements are all checked.
Because _Nonnull does not affect IRGen, the new checks are disabled by
default. The new driver flags are:
-fsanitize=nullability-arg (_Nonnull violation in call)
-fsanitize=nullability-assign (_Nonnull violation in assignment)
-fsanitize=nullability-return (_Nonnull violation in return stmt)
-fsanitize=nullability (all of the above)
This patch builds on top of UBSan's existing support for detecting
violations of the nonnull attributes ('nonnull' and 'returns_nonnull'),
and relies on the compiler-rt support for those checks. Eventually we
will need to update the diagnostic messages in compiler-rt (there are
FIXME's for this, which will be addressed in a follow-up).
One point of note is that the nullability-return check is only allowed
to kick in if all arguments to the function satisfy their nullability
preconditions. This makes it necessary to emit some null checks in the
function body itself.
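A few illustrative triggers (a sketch):
int *_Nonnull global;
// -fsanitize=nullability-assign fires if p is null here.
void assign(int *p) { global = p; }
// -fsanitize=nullability-arg checks the argument at call sites;
// -fsanitize=nullability-return checks the returned value.
int *_Nonnull pass(int *_Nonnull q) { return q; }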
Testing: check-clang and check-ubsan. I also built some Apple ObjC
frameworks with an asserts-enabled compiler, and verified that we get
valid reports.
Differential Revision: https://reviews.llvm.org/D30762
llvm-svn: 297700
It's possible to load out-of-range values from bitfields backed by a
boolean or an enum. Check for UB loads from bitfields.
This is the motivating example:
struct S {
  BOOL b : 1; // Signed ObjC BOOL.
};
S s;
s.b = 1;      // This is actually stored as -1.
if (s.b == 1) // Evaluates to false, -1 != 1.
  ...
Changes since the original commit:
- Single-bit bools are a special case (see CGF::EmitFromMemory), and we
can't avoid dealing with them when loading from a bitfield. Don't try to
insert a check in this case.
Differential Revision: https://reviews.llvm.org/D30423
llvm-svn: 297389
It's possible to load out-of-range values from bitfields backed by a
boolean or an enum. Check for UB loads from bitfields.
This is the motivating example:
struct S {
  BOOL b : 1; // Signed ObjC BOOL.
};
S s;
s.b = 1;      // This is actually stored as -1.
if (s.b == 1) // Evaluates to false, -1 != 1.
  ...
Differential Revision: https://reviews.llvm.org/D30423
llvm-svn: 297298
Summary:
Because of the existence of branches out of GNU statement expressions,
it is possible that emitting cleanups for a full expression may cause
the new insertion point not to be dominated by the result of the inner
expression. Consider this example:
struct Foo { Foo(); ~Foo(); int x; };
int g(Foo, int);
int f(bool cond) {
  int n = g(Foo(), ({ if (cond) return 0; 42; }));
  return n;
}
Before this change, the result of the call to 'g' did not dominate its
use in the store to 'n'. The early return exit from the statement
expression
branches to a shared cleanup block, which ends in a switch between the
fallthrough destination (the assignment to 'n') or the function exit
block.
This change solves the problem by spilling and reloading expression
evaluation results when any of the active cleanups have branches.
I audited the other call sites of enterFullExpression, and they don't
appear to keep any Values live across the site of the cleanup, except in
ARC code. I wasn't able to create a test case for ARC that exhibits this
problem, though.
Reviewers: rjmccall, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D30590
llvm-svn: 297084
Summary:
Added co_return statement emission.
Tweaked coro-alloc.cpp test to use co_return to trigger coroutine processing instead of co_await, since this change starts emitting the body of the coroutine and await expression handling has not been upstreamed yet.
Reviewers: rsmith, majnemer, EricWF, aaron.ballman
Reviewed By: rsmith
Subscribers: majnemer, llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D29979
llvm-svn: 297076
UBSan's nonnull argument check applies when a parameter has the
"nonnull" attribute. The check currently works for FunctionDecls, but
not for ObjCMethodDecls. This patch extends the check to work for ObjC.
Differential Revision: https://reviews.llvm.org/D30599
llvm-svn: 296996
2nd attempt: the first was in r296231, but it had a use after lifetime
bug.
Clang has logic to lower certain conditional expressions directly into llvm
select instructions. However, it does not emit the correct profile counter
increment as it does this: it emits an unconditional increment of the counter
for the 'then branch', even if the value selected is from the 'else branch'
(this is PR32019).
That means, given the following snippet, we would report that "0" is selected
twice, and that "1" is never selected:
int f1(int x) {
  return x ? 0 : 1;
           ^2  ^0
}
f1(0);
f1(1);
Fix the problem by using the instrprof_increment_step intrinsic to do the
proper increment.
llvm-svn: 296245
Clang has logic to lower certain conditional expressions directly into
llvm select instructions. However, it does not emit the correct profile
counter increment as it does this: it emits an unconditional increment
of the counter for the 'then branch', even if the value selected is from
the 'else branch' (this is PR32019).
That means, given the following snippet, we would report that "0" is
selected twice, and that "1" is never selected:
int f1(int x) {
  return x ? 0 : 1;
           ^2  ^0
}
f1(0);
f1(1);
Fix the problem by using the instrprof_increment_step intrinsic to do
the proper increment.
llvm-svn: 296231
Fix the fact that we don't assign profile counters to constructors in
classes with virtual bases, or constructors with variadic parameters.
Differential Revision: https://reviews.llvm.org/D30131
llvm-svn: 296062
This fixes an assertion failure in cases where we had expression
statements that declared variables nested inside of pass_object_size
args. Since we were emitting the same ExprStmt twice (once for the arg,
once for the @llvm.objectsize call), we were getting issues with
redefining locals.
This also means that we can be more lax about when we emit
@llvm.objectsize for pass_object_size args: since we're reusing the
arg's value itself, we don't have to care so much about side-effects.
llvm-svn: 295935
This patch teaches ubsan to insert exactly one null check for the 'this'
pointer per method/lambda.
Previously, given a load of a member variable from an instance method
('this->x'), ubsan would insert a null check for 'this', and another
null check for '&this->x', before allowing the load to occur.
Similarly, given a call to a method from another method bound to the
same instance ('this->foo()'), ubsan would insert a redundant null check for
'this'. There is also a redundant null check in the case where the
object pointer is a reference ('Ref.foo()').
This patch teaches ubsan to remove the redundant null checks identified
above.
Testing: check-clang, check-ubsan, and a stage2 ubsan build.
I also compiled X86FastISel.cpp with -fsanitize=null using
patched/unpatched clangs based on r293572. Here are the number of null
checks emitted:
-------------------------------------
| Setup | # of null checks |
-------------------------------------
| unpatched, -O0 | 21767 |
| patched, -O0 | 10758 |
-------------------------------------
Changes since the initial commit:
- Don't introduce any unintentional object-size or alignment checks.
- Don't rely on IRGen of C labels in the test.
Differential Revision: https://reviews.llvm.org/D29530
llvm-svn: 295515
CodeGenFunction::EmitTypeCheck accepts a bool flag which controls
whether or not null checks are emitted. Make this a bit more flexible by
changing the bool to a SanitizerSet.
Needed for an upcoming change which deals with a scenario in which we
only want to emit null checks.
llvm-svn: 295514
This reverts commit r295401. It breaks the ubsan self-host: it inserts
object size checks once per C++ method, which fire when the structure is
empty.
llvm-svn: 295494
This patch teaches ubsan to insert exactly one null check for the 'this'
pointer per method/lambda.
Previously, given a load of a member variable from an instance method
('this->x'), ubsan would insert a null check for 'this', and another
null check for '&this->x', before allowing the load to occur.
Similarly, given a call to a method from another method bound to the
same instance ('this->foo()'), ubsan would insert a redundant null check for
'this'. There is also a redundant null check in the case where the
object pointer is a reference ('Ref.foo()').
This patch teaches ubsan to remove the redundant null checks identified
above.
Testing: check-clang and check-ubsan. I also compiled X86FastISel.cpp
with -fsanitize=null using patched/unpatched clangs based on r293572.
Here are the number of null checks emitted:
-------------------------------------
| Setup | # of null checks |
-------------------------------------
| unpatched, -O0 | 21767 |
| patched, -O0 | 10758 |
-------------------------------------
Changes since the initial commit: don't rely on IRGen of C labels in the
test.
Differential Revision: https://reviews.llvm.org/D29530
llvm-svn: 295401
This patch teaches ubsan to insert exactly one null check for the 'this'
pointer per method/lambda.
Previously, given a load of a member variable from an instance method
('this->x'), ubsan would insert a null check for 'this', and another
null check for '&this->x', before allowing the load to occur.
Similarly, given a call to a method from another method bound to the
same instance ('this->foo()'), ubsan would insert a redundant null check for
'this'. There is also a redundant null check in the case where the
object pointer is a reference ('Ref.foo()').
This patch teaches ubsan to remove the redundant null checks identified
above.
Testing: check-clang and check-ubsan. I also compiled X86FastISel.cpp
with -fsanitize=null using patched/unpatched clangs based on r293572.
Here are the number of null checks emitted:
-------------------------------------
| Setup | # of null checks |
-------------------------------------
| unpatched, -O0 | 21767 |
| patched, -O0 | 10758 |
-------------------------------------
Differential Revision: https://reviews.llvm.org/D29530
llvm-svn: 295391
This patch implements codegen for the reduction clause on
any parallel construct for elementary data types. An efficient
implementation requires hierarchical reduction within a
warp and a threadblock. It is complicated by the fact that
variables declared in the stack of a CUDA thread cannot be
shared with other threads.
The patch creates a struct to hold reduction variables and
a number of helper functions. The OpenMP runtime on the GPU
implements reduction algorithms that use these helper
functions to perform reductions within a team. Variables are
shared between CUDA threads using shuffle intrinsics.
An implementation of reductions on the NVPTX device is
substantially different from that on CPUs. However, this patch
is written so that there are minimal changes to the rest of
OpenMP codegen.
The implemented design allows the compiler and runtime to be
decoupled, i.e., the runtime does not need to know of the
reduction operation(s), the type of the reduction variable(s),
or the number of reductions. The design also allows reuse of
host codegen, with appropriate specialization for the NVPTX
device.
While the patch does introduce a number of abstractions, the
expected use case calls for inlining of the GPU OpenMP runtime.
After inlining and optimizations in LLVM, these abstractions
are unwound and performance of OpenMP reductions is comparable
to CUDA-canonical code.
Patch by Tian Jin in collaboration with Arpith Jacob
Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29758
llvm-svn: 295333
This patch implements codegen for the reduction clause on
any parallel construct for elementary data types. An efficient
implementation requires hierarchical reduction within a
warp and a threadblock. It is complicated by the fact that
variables declared in the stack of a CUDA thread cannot be
shared with other threads.
The patch creates a struct to hold reduction variables and
a number of helper functions. The OpenMP runtime on the GPU
implements reduction algorithms that use these helper
functions to perform reductions within a team. Variables are
shared between CUDA threads using shuffle intrinsics.
An implementation of reductions on the NVPTX device is
substantially different from that on CPUs. However, this patch
is written so that there are minimal changes to the rest of
OpenMP codegen.
The implemented design allows the compiler and runtime to be
decoupled, i.e., the runtime does not need to know of the
reduction operation(s), the type of the reduction variable(s),
or the number of reductions. The design also allows reuse of
host codegen, with appropriate specialization for the NVPTX
device.
While the patch does introduce a number of abstractions, the
expected use case calls for inlining of the GPU OpenMP runtime.
After inlining and optimizations in LLVM, these abstractions
are unwound and performance of OpenMP reductions is comparable
to CUDA-canonical code.
Patch by Tian Jin in collaboration with Arpith Jacob
Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29758
llvm-svn: 295319
Support for CUDA printf is exploited to support printf for
an NVPTX OpenMP device.
To reflect the support of both programming models, the file
CGCUDABuiltin.cpp has been renamed to CGGPUBuiltin.cpp, and
the call EmitCUDADevicePrintfCallExpr has been renamed to
EmitGPUDevicePrintfCallExpr.
Reviewers: jlebar
Differential Revision: https://reviews.llvm.org/D17890
llvm-svn: 293444
in the current lexical scope.
clang currently emits the lifetime.start marker of a variable when the
variable comes into scope even though a variable's lifetime starts at
the entry of the block with which it is associated, according to the C
standard. This normally doesn't cause any problems, but in the rare case
where a goto jumps backwards past the variable declaration to an earlier
point in the block (see the test case added to lifetime2.c), it can
cause mis-compilation.
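A sketch of the problematic pattern ('bar' is a placeholder):
extern void bar(char *p, int n);
void f(int c) {
l1:
  char x = 0; // lifetime.start used to be emitted here, at the declaration
  bar(&x, 1);
  if (c-- > 0)
    goto l1;  // jumps backwards past the declaration of 'x'
}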
To prevent such mis-compiles, this commit conservatively disables
emitting lifetime markers when a label has been seen in the current
block.
This problem was discussed on cfe-dev here:
http://lists.llvm.org/pipermail/cfe-dev/2016-July/050066.html
rdar://problem/30153946
Differential Revision: https://reviews.llvm.org/D27680
llvm-svn: 293106
This patch adds support for codegen of 'target teams' on the host.
This combined directive has two captured statements, one for the
'teams' region, and the other for the 'parallel'.
This target teams region is offloaded using the __tgt_target_teams()
call. The patch sets the number of teams as an argument to
this call.
Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29084
llvm-svn: 293005
This patch adds support for codegen of 'target teams' on the host.
This combined directive has two captured statements, one for the
'teams' region, and the other for the 'parallel'.
This target teams region is offloaded using the __tgt_target_teams()
call. The patch sets the number of teams as an argument to
this call.
Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29084
llvm-svn: 293001
This patch adds support for codegen of 'target parallel' on the host.
It is also the first combined directive that requires two or more
captured statements. Support for this functionality is included in
the patch.
A combined directive such as 'target parallel' has two captured
statements, one for the 'target' and the other for the 'parallel'
region. Two captured statements are required because each has
different implicit parameters (see SemaOpenMP.cpp). For example,
the 'parallel' has 'global_tid' and 'bound_tid' while the 'target'
does not. The patch adds support for handling multiple captured
statements based on the combined directive.
When codegen'ing the 'target parallel' directive, the 'target'
outlined function is created using the outer captured statement
and the 'parallel' outlined function is created using the inner
captured statement.
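For illustration, a minimal combined construct (a sketch):
void f(int n) {
#pragma omp target parallel firstprivate(n)
  {
    // The outer captured statement becomes the 'target' outlined
    // function; this inner one becomes the 'parallel' outlined function
    // with its implicit global_tid/bound_tid parameters.
    int local = n + 1;
    (void)local;
  }
}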
Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D28753
llvm-svn: 292419
This patch adds support for codegen of 'target parallel' on the host.
It is also the first combined directive that requires two or more
captured statements. Support for this functionality is included in
the patch.
A combined directive such as 'target parallel' has two captured
statements, one for the 'target' and the other for the 'parallel'
region. Two captured statements are required because each has
different implicit parameters (see SemaOpenMP.cpp). For example,
the 'parallel' has 'global_tid' and 'bound_tid' while the 'target'
does not. The patch adds support for handling multiple captured
statements based on the combined directive.
When codegen'ing the 'target parallel' directive, the 'target'
outlined function is created using the outer captured statement
and the 'parallel' outlined function is created using the inner
captured statement.
Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D28753
llvm-svn: 292374
This patch refactors code that calls codegen for target regions. Currently
the codebase only supports the 'target' directive. The patch pulls out
common target processing code into a static function that can be called
by codegen for any target directive.
Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D28752
llvm-svn: 292134
This patch is to implement sema and parsing for the 'target teams distribute simd' pragma.
Differential Revision: https://reviews.llvm.org/D28252
llvm-svn: 291579
Summary:
This patch makes the type_mismatch static data 7 bytes smaller (and it
ends up being 16 bytes smaller due to alignment restrictions, at least
on some x86-64 environments).
It revs up the type_mismatch handler version since we're breaking binary
compatibility. I will soon post a patch for the compiler-rt side.
Reviewers: rsmith, kcc, vitalybuka, pgousseau, gbedwell
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D28242
llvm-svn: 291236
The special case to widen the integer literal zero when passed to
variadic function calls should only apply to variadic functions, not
unprototyped functions. This is consistent with what MSVC does. In this
test case, MSVC uses a 4-byte store to pass the 5th argument to 'kr' and
an 8-byte store to pass the zero to 'v':
void v(int, ...);
void kr();
void f(void) {
  v(1, 2, 3, 4, 0);
  kr(1, 2, 3, 4, 0);
}
Aaron Ballman discovered this issue in https://reviews.llvm.org/D28166
llvm-svn: 290906
This patch is to implement sema and parsing for the 'target teams distribute parallel for simd' pragma.
Differential Revision: https://reviews.llvm.org/D28202
llvm-svn: 290862
This patch is to implement sema and parsing for the 'target teams distribute parallel for' pragma.
Differential Revision: https://reviews.llvm.org/D28160
llvm-svn: 290725
This patch is to implement sema and parsing for the 'target teams distribute' pragma.
Differential Revision: https://reviews.llvm.org/D28015
llvm-svn: 290508
This is a recommit of r290149, which was reverted in r290169 due to msan
failures. msan was failing because we were calling
`isMostDerivedAnUnsizedArray` on an invalid designator, which caused us
to read uninitialized memory. To fix this, the logic of the caller of
said function was simplified, and we now have a `!Invalid` assert in
`isMostDerivedAnUnsizedArray`, so we can catch this particular bug more
easily in the future.
Fingers crossed that this patch sticks this time. :)
Original commit message:
This patch does three things:
- Gives us the alloc_size attribute in clang, which lets us infer the
number of bytes handed back to us by malloc/realloc/calloc/any user
functions that act in a similar manner (see the sketch after this list).
- Teaches our constexpr evaluator that evaluating some `const` variables
is OK sometimes. This is why we have a change in
test/SemaCXX/constant-expression-cxx11.cpp and other seemingly
unrelated tests. Richard Smith okay'ed this idea some time ago in
person.
- Uniques some Blocks in CodeGen, which was reviewed separately at
D26410. Lack of uniquing only really shows up as a problem when
combined with our new eagerness in the face of const.
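A sketch of the first point ('my_malloc' is hypothetical):
// Parameter 1 holds the size in bytes of the returned block.
void *my_malloc(unsigned bytes) __attribute__((alloc_size(1)));
unsigned long known_size() {
  // With the attribute, this can now fold to 16.
  return __builtin_object_size(my_malloc(16), 0);
}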
llvm-svn: 290297
This commit fails MSan when running test/CodeGen/object-size.c in
a confusing way. After some discussion with George, it isn't really
clear what is going on here. We can make the MSan failure go away by
testing for the invalid bit, but *why* things are invalid isn't clear.
And yet, other code in the surrounding area is doing precisely this and
testing for invalid.
George is going to take a closer look at this to better understand the
nature of the failure and recommit it, for now backing it out to clean
up MSan builds.
llvm-svn: 290169
This patch does three things:
- Gives us the alloc_size attribute in clang, which lets us infer the
number of bytes handed back to us by malloc/realloc/calloc/any user
functions that act in a similar manner.
- Teaches our constexpr evaluator that evaluating some `const` variables
is OK sometimes. This is why we have a change in
test/SemaCXX/constant-expression-cxx11.cpp and other seemingly
unrelated tests. Richard Smith okay'ed this idea some time ago in
person.
- Uniques some Blocks in CodeGen, which was reviewed separately at
D26410. Lack of uniquing only really shows up as a problem when
combined with our new eagerness in the face of const.
Differential Revision: https://reviews.llvm.org/D14274
llvm-svn: 290149
copy constructors of classes with array members, instead using
ArrayInitLoopExpr to represent the initialization loop.
This exposed a bug in the static analyzer where it was unable to differentiate
between zero-initialized and unknown array values, which has also been fixed
here.
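For illustration, the kind of copy that is now modeled with
ArrayInitLoopExpr (a sketch):
struct Payload {
  int data[16];
};
void copy(const Payload &src) {
  // The implicit copy constructor's initialization of 'data' is
  // represented as a loop rather than 16 per-element initializers.
  Payload dst(src);
  (void)dst;
}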
llvm-svn: 289618
This adds a way for us to version any UBSan handler by itself.
The patch overrides D21289 for a better implementation (we're able to
rev up a single handler).
After this, we can land a slight modification of D19667+D19668.
We probably don't want to keep all the versions in compiler-rt (maybe we
want to deprecate on one release and remove the old handler on the next
one?), but with this patch we will loudly fail to compile when mixing
incompatible handler calls, instead of silently compiling and then
providing bad error messages.
Reviewers: kcc, samsonov, rsmith, vsk
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D21695
llvm-svn: 289444
initialization of each array element:
* ArrayInitLoopExpr is a prvalue of array type with two subexpressions:
a common expression (an OpaqueValueExpr) that represents the up-front
computation of the source of the initialization, and a subexpression
representing a per-element initializer
* ArrayInitIndexExpr is a prvalue of type size_t representing the current
position in the loop
This will be used to replace the creation of explicit index variables in lambda
capture of arrays and copy/move construction of classes with array elements,
and also C++17 structured bindings of arrays by value (which inexplicably allow
copying an array by value, unlike all of C++'s other array declarations).
No uses of these nodes are introduced by this change, however.
llvm-svn: 289413
This patch is to implement sema and parsing for the 'teams distribute parallel for' pragma.
Differential Revision: https://reviews.llvm.org/D27345
llvm-svn: 289179
This patch is to implement sema and parsing for the 'teams distribute parallel for simd' pragma.
Differential Revision: https://reviews.llvm.org/D27084
llvm-svn: 288294
If an 'omp cancel' construct is used in a worksharing construct, it may
cause the program to hang if a reduction clause is used. The patch
fixes this problem by avoiding extra reduction processing for branches
that were canceled.
llvm-svn: 287227
Summary:
r286944 introduced bugs detected by ASAN as use-after-return.
r287025 have not fixed them completely.
This reverts commit r286944 and r287025.
Reviewers: ABataev
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D26720
llvm-svn: 287069
If an 'omp cancel' construct is used in a worksharing construct, it may
cause the program to hang if a reduction clause is used. The patch
fixes this problem by avoiding extra reduction processing for branches
that were canceled.
llvm-svn: 286944
can be used to improve the locations when generating remarks for loops.
Depends on the companion LLVM change r286227.
Patch by Florian Hahn.
Differential Revision: https://reviews.llvm.org/D25764
llvm-svn: 286456
abstract information about the callee. NFC.
The goal here is to make it easier to recognize indirect calls and
trigger additional logic in certain cases. That logic will come in
a later patch; in the meantime, I felt that this was a significant
improvement to the code.
llvm-svn: 285258
Summary:
Current generation of lifetime intrinsics does not handle cases like:
```
{
  char x;
l1:
  bar(&x, 1);
}
goto l1;
```
We will get code like this:
```
  %x = alloca i8, align 1
  call void @llvm.lifetime.start(i64 1, i8* nonnull %x)
  br label %l1
l1:
  %call = call i32 @bar(i8* nonnull %x, i32 1)
  call void @llvm.lifetime.end(i64 1, i8* nonnull %x)
  br label %l1
```
So the second time, bar is called with x, which is marked as dead.
The lifetime markers here are misleading, so it's better to remove them
entirely. This type of bypass is rare; e.g., the code detects just 8
such functions when building clang (2329 targets).
PR28267
Reviewers: eugenis
Subscribers: beanz, mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D24693
llvm-svn: 285176
Summary: D24693 will need access to it from other places
Reviewers: eugenis
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D24695
llvm-svn: 285158
constexpr variable.
When compiling a constexpr NSString initialized with an objective-c
string literal, CodeGen emits objc_storeStrong on an uninitialized
alloca, which causes a crash.
This patch folds the code in EmitScalarInit into EmitStoreThroughLValue
and fixes the crash by calling objc_retain on the string instead of
using objc_storeStrong.
rdar://problem/28562009
Differential Revision: https://reviews.llvm.org/D25547
llvm-svn: 284516
Summary: _BitScan intrinsics (and some others, for example _Interlocked and _bittest) are supposed to work on both ARM and x86. This is an attempt to isolate them, avoiding repeating their code or writing a separate function for each builtin.
Reviewers: hans, thakis, rnk, majnemer
Subscribers: RKSimon, cfe-commits, aemerson
Differential Revision: https://reviews.llvm.org/D25264
llvm-svn: 284060
Summary:
With this commit simple coroutines can be created in plain C using coroutine builtins.
Reviewers: rnk, EricWF, rsmith
Subscribers: modocache, mgorny, mehdi_amini, beanz, cfe-commits
Differential Revision: https://reviews.llvm.org/D24373
llvm-svn: 283155
Instead of ignoring the evaluation order rule, ignore the "destroy parameters
in reverse construction order" rule for the small number of problematic cases.
This only causes incorrect behavior in the rare case where both parameters to
an overloaded operator <<, >>, ->*, &&, ||, or comma are of class type with
non-trivial destructor, and the program is depending on those parameters being
destroyed in reverse construction order.
We could do a little better here by reversing the order of parameter
destruction for those functions (and reversing the argument evaluation order
for all direct calls, not just those with operator syntax), but that is not a
complete solution to the problem, as the same situation can be reached by an
indirect function call.
Approach reviewed off-line by rnk.
llvm-svn: 282777
function correctly when targeting MS ABIs (this appears to have never mattered
prior to this change).
Update test case to always cover both 32-bit and 64-bit Windows ABIs, since
they behave somewhat differently from each other here.
Update test case to also cover the operators comma, && and ||, which it appears are also
affected by P0145R3 (they're not explicitly called out by the design document,
but this is the emergent behavior of the existing wording).
Original commit message:
P0145R3 (C++17 evaluation order tweaks): evaluate the right-hand side of
assignment and compound-assignment operators before the left-hand side. (Even
if it's an overloaded operator.)
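A minimal illustration (mine, not from the commit) of the new guarantee:
```
int main() {
  int i = 0;
  int a[2] = {10, 10};
  a[i] = i++; // C++17: the RHS 'i++' is evaluated first (yielding 0 and
              // setting i to 1), so this assigns 0 to a[1]
  return a[1]; // returns 0; before P0145R3 this expression was unsequenced
}
```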
This completes the implementation of P0145R3 + P0400R0 for all targets except
Windows, where the evaluation order guarantees for <<, >>, and ->* are
unimplementable as the ABI requires the function arguments are evaluated from
right to left (because parameter destructors are run from left to right in the
callee).
llvm-svn: 282619
assignment and compound-assignment operators before the left-hand side. (Even
if it's an overloaded operator.)
This completes the implementation of P0145R3 + P0400R0 for all targets except
Windows, where the evaluation order guarantees for <<, >>, and ->* are
unimplementable as the ABI requires the function arguments are evaluated from
right to left (because parameter destructors are run from left to right in the
callee).
llvm-svn: 282556
* recurse through intermediate LabelStmts and AttributedStmts when checking
whether a statement inside a switch declares a variable
* if the end of a compound statement is reachable from the chosen case label,
and the compound statement contains a variable declaration, it's not valid
to just emit the contents of the compound statement -- we must emit the
statement itself or we lose the scope (and thus end lifetimes at the wrong
point); see the sketch below
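A sketch of that second point, under a hypothetical constant-condition folding scenario:
```
void use(int);

int f() {
  switch (1) { // constant condition: only the chosen case is emitted
  case 1: {
    int x = 2;
    use(x);
    break;
  } // the compound statement itself must be emitted so that the scope of
    // 'x' (and thus its lifetime) still ends at this brace
  default:
    return 0;
  }
  return 3;
}
```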
llvm-svn: 281797
This reverts commit r279003 as it breaks some of our buildbots (e.g.
clang-cmake-aarch64-quick, clang-x86_64-linux-selfhost-modules).
The error is in OpenMP/teams_distribute_simd_ast_print.cpp:
clang: /home/buildslave/buildslave/clang-cmake-aarch64-quick/llvm/include/llvm/ADT/DenseMap.h:527:
bool llvm::DenseMapBase<DerivedT, KeyT, ValueT, KeyInfoT, BucketT>::LookupBucketFor(const LookupKeyT&, const BucketT*&) const
[with LookupKeyT = clang::Stmt*; DerivedT = llvm::DenseMap<clang::Stmt*, long unsigned int>;
KeyT = clang::Stmt*; ValueT = long unsigned int;
KeyInfoT = llvm::DenseMapInfo<clang::Stmt*>;
BucketT = llvm::detail::DenseMapPair<clang::Stmt*, long unsigned int>]:
Assertion `!KeyInfoT::isEqual(Val, EmptyKey) && !KeyInfoT::isEqual(Val, TombstoneKey) &&
"Empty/Tombstone value shouldn't be inserted into map!"' failed.
llvm-svn: 279045
This patch implements sema and parsing for the 'teams distribute simd' pragma.
This patch was originally authored by Carlo Bertolli.
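A usage sketch (mine) of the pragma that sema/parsing now accept (codegen landed separately):
```
void scale(float *a, float s, int n) {
#pragma omp target map(tofrom : a[0:n])
#pragma omp teams distribute simd
  for (int i = 0; i < n; ++i)
    a[i] *= s;
}
```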
Differential Revision: https://reviews.llvm.org/D23528
llvm-svn: 279003
Summary: This patch adds support for the use_device_ptr clause. It includes changes in SEMA that could not be tested without codegen, namely the use of the firstprivate logic and mappable expressions support.
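A usage sketch (hypothetical; 'launch_on_device' stands in for any native device API): inside the region, 'p' refers to the device copy of the data.
```
void launch_on_device(float *device_ptr, int n); // hypothetical device API

void run(float *p, int n) {
#pragma omp target data map(tofrom : p[0:n]) use_device_ptr(p)
  {
    launch_on_device(p, n); // 'p' here holds the device address
  }
}
```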
Reviewers: hfinkel, carlo.bertolli, arpith-jacob, kkwli0, ABataev
Subscribers: caomhin, cfe-commits
Differential Revision: https://reviews.llvm.org/D22691
llvm-svn: 276977
This patch implements sema and parsing for the 'target parallel for simd' pragma.
Differential Revision: http://reviews.llvm.org/D22096
llvm-svn: 275365
-fxray-instrument: enables XRay annotation of IR
-fxray-instruction-threshold: configures the threshold for function size (looking at IR instructions), and allows LLVM to decide whether to add the nop sleds later on in the process.
Also implements the related xray_always_instrument and xray_never_instrument function attributes.
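A usage sketch of the two attributes (shown with their C++11 clang:: spellings):
```
// Compile with -fxray-instrument; the threshold flag controls which
// functions get sleds by default.
[[clang::xray_always_instrument]] void always_traced() {}
[[clang::xray_never_instrument]] void never_traced() {}
```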
Patch by Dean Michael Berris.
llvm-svn: 275330
Summary: This patch is an implementation of sema and parsing for the OpenMP composite pragma 'distribute simd'.
Differential Revision: http://reviews.llvm.org/D22007
llvm-svn: 274604
Summary: This patch is an implementation of sema and parsing for the OpenMP composite pragma 'distribute parallel for simd'.
Differential Revision: http://reviews.llvm.org/D21977
llvm-svn: 274530
With all MaterializeTemporaryExprs coming with an ExprWithCleanups, it's
easy to add correct lifetime.end marks into the right RunCleanupsScope.
Differential Revision: http://reviews.llvm.org/D20499
llvm-svn: 274385
Replace inheriting constructors implementation with new approach, voted into
C++ last year as a DR against C++11.
Instead of synthesizing a set of derived class constructors for each inherited
base class constructor, we make the constructors of the base class visible to
constructor lookup in the derived class, using the normal rules for
using-declarations.
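A minimal illustration of the model:
```
struct A {
  A(int, double) {}
};

struct B : A {
  using A::A;     // A's constructors become visible to constructor lookup in B
  int extra = 42; // other members get their default member initializers
};

B b(1, 2.0); // the A base is initialized via a CXXInheritedCtorInitExpr;
             // the arguments are perfectly forwarded
```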
For constructors, UsingShadowDecl now has a ConstructorUsingShadowDecl derived
class that tracks the requisite additional information. We create shadow
constructors (not found by name lookup) in the derived class to model the
actual initialization, and have a new expression node,
CXXInheritedCtorInitExpr, to model the initialization of a base class from such
a constructor. (This initialization is special because it performs real perfect
forwarding of arguments.)
In cases where argument forwarding is not possible (for inalloca calls,
variadic calls, and calls with callee parameter cleanup), the shadow inheriting
constructor is not emitted and instead we directly emit the initialization code
into the caller of the inherited constructor.
Note that this new model is not perfectly compatible with the old model in some
corner cases. In particular:
* if B inherits a private constructor from A, and C uses that constructor to
construct a B, then we previously required that A befriends B and B
befriends C, but the new rules require A to befriend C directly, and
* if a derived class has its own constructors (and so its implicit default
constructor is suppressed), it may still inherit a default constructor from
a base class
llvm-svn: 274049
[OpenMP] Initial implementation of parse and sema for composite pragma 'distribute parallel for'
This patch is an initial implementation of '#pragma omp distribute parallel for'.
The main differences that affect other pragmas are:
The implementation of 'distribute parallel for' requires blocking of the associated loop, where blocks are "distributed" to different teams and iterations within each block are scheduled to parallel threads within each team. To implement blocking, sema creates two additional worksharing directive fields that are used to pass the team-assigned block lower and upper bounds through the outlined function resulting from 'parallel'. In this way, the scheduling of 'for' iterations to threads can use those bounds.
As a consequence of blocking, the stride of 'distribute' is not 1 but equal to the blocking size. This is returned by the runtime, and sema prepares a DistIncrExpr variable to hold that value.
As a consequence of blocking, the global upper bound (EnsureUpperBound) expression of the 'for' is not the original loop upper bound (e.g. in for(i = 0; i < N; i++) this is 'N') but the team-assigned block upper bound. Sema creates a new expression holding the calculation of the actual upper bound for 'for' as UB = min(UB, PrevUB), where UB is the loop upper bound and PrevUB is the team-assigned block upper bound.
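A conceptual sketch (mine) of the directive nesting and the bound computation described above:
```
void sketch(int *a, int N) {
#pragma omp target map(tofrom : a[0:N])
#pragma omp teams
#pragma omp distribute parallel for
  for (int i = 0; i < N; ++i)
    a[i] = i; // each team runs one block: 'distribute' strides by the block
              // size, and the 'for' upper bound is UB = min(UB, PrevUB)
}
```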
llvm-svn: 273884
http://reviews.llvm.org/D21564
This patch is an initial implementation of '#pragma omp distribute parallel for'.
The main differences that affect other pragmas are:
The implementation of 'distribute parallel for' requires blocking of the associated loop, where blocks are "distributed" to different teams and iterations within each block are scheduled to parallel threads within each team. To implement blocking, sema creates two additional worksharing directive fields that are used to pass the team-assigned block lower and upper bounds through the outlined function resulting from 'parallel'. In this way, the scheduling of 'for' iterations to threads can use those bounds.
As a consequence of blocking, the stride of 'distribute' is not 1 but equal to the blocking size. This is returned by the runtime, and sema prepares a DistIncrExpr variable to hold that value.
As a consequence of blocking, the global upper bound (EnsureUpperBound) expression of the 'for' is not the original loop upper bound (e.g. in for(i = 0; i < N; i++) this is 'N') but the team-assigned block upper bound. Sema creates a new expression holding the calculation of the actual upper bound for 'for' as UB = min(UB, PrevUB), where UB is the loop upper bound and PrevUB is the team-assigned block upper bound.
llvm-svn: 273705
Summary:
This patch fixes an issue detected when firstprivate variables are passed to an OpenMP outlined function vararg list. Currently they are not compatible with what the runtime library expects, causing malfunctions on some targets.
This patch fixes the issue by moving the casting logic already in place for offloading to the common code that creates the outline function and arguments and updates the regression tests accordingly.
Reviewers: hfinkel, arpith-jacob, carlo.bertolli, kkwli0, ABataev
Subscribers: cfe-commits, caomhin
Differential Revision: http://reviews.llvm.org/D21150
llvm-svn: 272900
Summary:
This patch adds parsing and sema support for the `target update` directive. Support for the `to` and `from` clauses will be added by a different patch. It also adds support for other clauses that are already implemented upstream and apply to `target update`, e.g. `device` and `if`.
This patch is based on the original post by Kelvin Li.
Reviewers: hfinkel, carlo.bertolli, kkwli0, arpith-jacob, ABataev
Subscribers: caomhin, cfe-commits
Differential Revision: http://reviews.llvm.org/D15944
llvm-svn: 270878
Underaligned atomic LValues require libcalls which MSVC doesn't have.
MSVC doesn't seem to consider such operations as requiring a barrier
anyway.
This fixes PR27843.
llvm-svn: 270576
For better performance, and to unify code with the offloading part, we pass
scalar firstprivate values by value instead of by reference. This removes
some extra copying operations.
llvm-svn: 269751
schedule modifiers.
The runtime library expects some additional data in the schedule argument for
loop-based directives that have the additional schedule modifiers
'monotonic'|'nonmonotonic'.
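A usage sketch of the modifier syntax whose schedule encoding changed:
```
void work(int *a, int n) {
#pragma omp parallel for schedule(nonmonotonic : dynamic, 4)
  for (int i = 0; i < n; ++i)
    a[i] += 1; // the 'nonmonotonic' modifier is now encoded into the
               // schedule argument passed to the runtime
}
```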
llvm-svn: 269035
Currently there is a problem with codegen of inlined directives inside
lambdas: it may cause a crash during codegen because of incorrect
capturing of variables. This patch fixes the problem.
llvm-svn: 267677
The taskloop construct specifies that the iterations of one or more associated loops will be executed in parallel using OpenMP tasks. The iterations are distributed across tasks created by the construct and scheduled to be executed.
The following code will be generated for the taskloop directive:
#pragma omp taskloop num_tasks(N) lastprivate(j)
for (i = 0; i < N*GRAIN*STRIDE - 1; i += STRIDE) {
  int th = omp_get_thread_num();
#pragma omp atomic
  counter++;
#pragma omp atomic
  th_counter[th]++;
  j = i;
}
Generated code:
task = __kmpc_omp_task_alloc(NULL, gtid, 1, sizeof(struct task),
                             sizeof(struct shar), &task_entry);
psh = task->shareds;
psh->pth_counter = &th_counter;
psh->pcounter = &counter;
psh->pj = &j;
task->lb = 0;
task->ub = N*GRAIN*STRIDE - 2;
task->st = STRIDE;
__kmpc_taskloop(
    NULL,                     // location
    gtid,                     // gtid
    task,                     // task structure
    1,                        // if clause value
    &task->lb,                // lower bound
    &task->ub,                // upper bound
    STRIDE,                   // loop increment
    0,                        // 1 if nogroup specified
    2,                        // schedule type: 0-none, 1-grainsize, 2-num_tasks
    N,                        // schedule value (ignored for type 0)
    (void*)&__task_dup_entry  // tasks duplication routine
);
llvm-svn: 267395
If the loop control variable of a simd-based directive is explicitly marked
as linear/lastprivate in a clause, codegen for such a construct would
crash. This patch fixes the problem.
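The previously-crashing shape, as a sketch:
```
void bump(float *a, int n) {
  int i;
#pragma omp simd linear(i) // loop control variable named explicitly
  for (i = 0; i < n; ++i)
    a[i] += 1.0f;
}
```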
llvm-svn: 267101
Revert the two changes to thread CodeGenOptions into the TargetInfo allocation
and to fix the layering violation by moving CodeGenOptions into Basic.
Code Generation is arguably not particularly "basic". This addresses Richard's
post-commit review comments. This change purely does the mechanical revert and
will be followed up with an alternate approach to thread the desired information
into TargetInfo.
llvm-svn: 265806
This is a mechanical move of CodeGenOptions from libFrontend to libBasic. This
fixes the layering violation introduced earlier by threading CodeGenOptions into
TargetInfo. It should also fix the modules based self-hosting builds. NFC.
llvm-svn: 265702
Summary: See LLVM change D18775 for details, this change depends on it.
Reviewers: jyknight, reames
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D18776
llvm-svn: 265569
The solution unifies the interface of the RegionCodeGenTy type to allow
inserting runtime-specific code before/after the main codegen action defined
in CGStmtOpenMP.cpp. The runtime should not define its own RegionCodeGenTy
for general OpenMP directives, but must be allowed to insert its own
(required) code to support target-specific codegen.
llvm-svn: 264700
The solution unifies the interface of the RegionCodeGenTy type to allow
inserting runtime-specific code before/after the main codegen action defined
in CGStmtOpenMP.cpp. The runtime should not define its own RegionCodeGenTy
for general OpenMP directives, but must be allowed to insert its own
(required) code to support target-specific codegen.
llvm-svn: 264576
The solution unifies the interface of the RegionCodeGenTy type to allow
inserting runtime-specific code before/after the main codegen action defined
in CGStmtOpenMP.cpp. The runtime should not define its own RegionCodeGenTy
for general OpenMP directives, but must be allowed to insert its own
(required) code to support target-specific codegen.
llvm-svn: 264569
This reverts commit r263607.
This change caused more objc_retain/objc_release calls in the IR but those
are then incorrectly optimized by the ARC optimizer. Work is going to have
to be done to ensure the ARC optimizer doesn't optimize user written RR, but
that should land before this change.
This change will also need to be updated to account for any changes required
to ensure that user-written calls to RR are distinct from those inserted by ARC.
llvm-svn: 263984
It is faster to directly call the ObjC runtime for methods such as retain/release instead of sending a message to invoke those methods.
This patch adds support for converting messages to retain/release/alloc/autorelease to their equivalent runtime calls.
Tests included for the positive case of applying this transformation, negative tests ensuring we only convert "alloc" to objc_alloc (not "alloc2"), and a driver test to ensure we enable this only for supported runtime versions.
Reviewed by John McCall.
Differential Revision: http://reviews.llvm.org/D14737
llvm-svn: 263607
OpenMP 4.5 allows privatization of non-static data members in OpenMP
constructs. The patch adds proper codegen support for data members in the
'linear' clause.
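A hedged sketch (my example) of the OpenMP 4.5 feature: a non-static data member named in a 'linear' clause inside a member function.
```
struct Accum {
  int idx = 0;
  void fill(int *a, int n) {
#pragma omp simd linear(idx) // 'idx' is a non-static data member
    for (int k = 0; k < n; ++k)
      a[idx++] = k;         // incremented by the linear step each iteration
  }
};
```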
llvm-svn: 263003
This patch provides a basic implementation of codegen for the teams directive, excluding all clauses except dist_schedule. It also fixes parts of the AST reader/writer to enable correct pre-compiled header handling.
http://reviews.llvm.org/D17170
llvm-svn: 262832
This patch provides a basic implementation of codegen for the teams directive, excluding all clauses except dist_schedule. It also fixes parts of the AST reader/writer to enable correct pre-compiled header handling.
http://reviews.llvm.org/D17170
llvm-svn: 262741
We'd lose track of the parent CodeGenFunction, leading us to get
confused with regard to which function a nested finally belonged to.
Differential Revision: http://reviews.llvm.org/D17752
llvm-svn: 262379
This patch introduces the -fwhole-program-vtables flag, which enables the
whole-program vtable optimization feature (D16795) in Clang.
Differential Revision: http://reviews.llvm.org/D16821
llvm-svn: 261767
Expressions inside 'schedule'/'dist_schedule' clauses must be captured in
combined directives to avoid a possible crash during codegen. The patch
improves the handling of such constructs.
llvm-svn: 260954
This patch changes cc1 option -fprofile-instr-generate to an enum option
-fprofile-instrument={clang|none}. It also changes cc1 options
-fprofile-instr-generate= to -fprofile-instrument-path=.
The driver level option -fprofile-instr-generate and -fprofile-instr-generate=
remain intact. This change will pave the way to integrate new PGO
instrumentation in IR level.
Review: http://reviews.llvm.org/D16730
llvm-svn: 259811
Codegen for array sections/array subscripts worked only for expressions with arrays as the base. The patch fixes codegen for bases with pointer/reference types.
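A sketch of the now-working shape: the section's base is a pointer rather than an array.
```
void saxpy(float *x, float *y, int n) {
#pragma omp target map(to : x[0:n]) map(tofrom : y[0:n])
  for (int i = 0; i < n; ++i)
    y[i] += 2.0f * x[i]; // x and y are pointer-based array sections
}
```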
llvm-svn: 259776
Summary:
This patch adds parsing + sema for the target parallel for directive along with testcases.
Reviewers: ABataev
Differential Revision: http://reviews.llvm.org/D16759
llvm-svn: 259654
reclaiming a call result in order to ignore it or assign it
to an __unsafe_unretained variable. This avoids adding
an unwanted retain/release pair when the return value is
not actually returned autoreleased (e.g. when it is returned
from a nonatomic getter or a typical collection accessor).
This runtime function is only available on the latest Apple
OS releases; the backwards-compatibility story is that you
don't get the optimization unless your deployment target is
recent enough. Sorry.
rdar://20530049
llvm-svn: 258962
Summary:
This patch adds parsing + sema for the target parallel directive and its clauses along with testcases.
Reviewers: ABataev
Differential Revision: http://reviews.llvm.org/D16553
Rebased to current trunk and updated test cases.
llvm-svn: 258832
* Runtime diagnostic data for cfi-icall changed to match the rest of
cfi checks
* Layout of all CFI diagnostic data changed to put Kind at the
beginning. There is no ABI stability promise yet.
* Call cfi_slowpath_diag instead of cfi_slowpath when needed.
* Emit __cfi_check_fail function, which dispatches a CFI check
failure according to the trap/recover settings of the current module.
* A tiny driver change to match the way the new handlers are done in
compiler-rt.
llvm-svn: 258745
This is part of a new statistics gathering feature for the sanitizers.
See clang/docs/SanitizerStats.rst for further info and docs.
Differential Revision: http://reviews.llvm.org/D16175
llvm-svn: 257971
Clang-side cross-DSO CFI.
* Adds a command line flag -f[no-]sanitize-cfi-cross-dso.
* Links a runtime library when enabled.
* Emits __cfi_slowpath calls if the bitset test fails.
* Emits extra hash-based bitsets for external CFI checks.
* Sets a module flag to enable __cfi_check generation during LTO.
This mode does not yet support diagnostics.
llvm-svn: 255694
`pass_object_size` is our way of enabling `__builtin_object_size` to
produce high quality results without requiring inlining to happen
everywhere.
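A usage sketch: at each call site the compiler evaluates __builtin_object_size for the argument and passes the result as a hidden parameter, so the callee can check bounds without being inlined.
```
#include <string.h>

int checked_memset(void *buf __attribute__((pass_object_size(0))),
                   int c, size_t n) {
  // Inside the function, __builtin_object_size reports the value that was
  // computed at the call site and passed implicitly.
  if (__builtin_object_size(buf, 0) < n)
    return -1; // request would overflow the object
  memset(buf, c, n);
  return 0;
}
```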
A link to the design doc for this attribute is available at the
Differential review link below.
Differential Revision: http://reviews.llvm.org/D13263
llvm-svn: 254554
Summary:
This patch implements the OpenMP 4.5 specification for the implicit data maps. OpenMP 4.5 changes the default way data is captured into a target region: all the non-aggregate kinds are passed by value by default. This required activating capturing by value during sema for the target region. All the non-aggregate values that can be encoded in the size of a pointer are properly cast and forwarded to the runtime library. On top of fixing the previous weird behavior for mapping pointers in nested data regions (an explicit map was always required), this also improves performance, as the number of allocations/transactions to the device per non-aggregate map is reduced from two to only one - instead of passing a reference and the value, only the value is passed.
Explicit maps will be added later on once firstprivate, private, and map clauses' SEMA and parsing are available.
Reviewers: hfinkel, rjmccall, ABataev
Subscribers: cfe-commits, carlo.bertolli
Differential Revision: http://reviews.llvm.org/D14940
llvm-svn: 254521
This patch changes the generation of CGFunctionInfo to contain
the FunctionProtoType if it is available. This enables the code
generation for call instructions to look into this type for
exception information and therefore generate better quality
IR - it will not create invoke instructions for functions that
are known not to throw.
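An illustration (mine) of the effect:
```
struct Guard {
  ~Guard() {} // user-provided destructor: unwinding must run it
};

void f() noexcept; // the prototype says f cannot throw
void sink();       // no noexcept

void g() {
  Guard guard;
  f();    // can be emitted as a plain 'call': no landing pad needed
  sink(); // may still need an 'invoke' so ~Guard runs during unwinding
}
```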
llvm-svn: 253926
When a struct's size is not a power of 2, the corresponding _Atomic() type is
promoted to the nearest power of 2. We already correctly handled normal C++ expressions of
this form, but direct calls to the __c11_atomic_whatever builtins ended up
performing dodgy operations on the smaller non-atomic types (e.g. memcpy too
much). Later optimisations removed this as undefined behaviour.
This patch converts EmitAtomicExpr to allocate its temporaries at the full
atomic width, sidestepping the issue.
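A sketch of the problematic shape (Clang accepts _Atomic in C++ as an extension; __ATOMIC_SEQ_CST is a predefined compiler macro):
```
struct S { char c[3]; }; // sizeof(S) == 3, not a power of 2

void store(_Atomic(struct S) *dst, struct S v) {
  // The builtin's temporaries are now allocated at the full (promoted)
  // atomic width, so nothing reads or copies beyond the real object.
  __c11_atomic_store(dst, v, __ATOMIC_SEQ_CST);
}
```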
llvm-svn: 252507
This works around PR25162. The MSVC tables make it very difficult to
correctly inline a C++ destructor that contains try / catch. We've
attempted to address PR25162 in LLVM's backend, but it feels pretty
infeasible. MSVC and ICC both appear to avoid inlining such complex
destructors.
Long term, we want to fix this by making the inliner smart enough to
know when it is inlining into a cleanup, so it can inline simple
destructors (~unique_ptr and ~vector) while avoiding destructors
containing try / catch.
llvm-svn: 251576
match the feature set of the function that they're being called from.
This ensures that we can effectively diagnose some[1] code that would
instead ICE in the backend with a failure-to-select message.
Example:
__m128d foo(__m128d a, __m128d b) {
  return __builtin_ia32_addsubps(b, a);
}
compiled for normal x86_64 via:
clang -target x86_64-linux-gnu -c
would fail to compile in the back end because the normal subtarget
features for x86_64 only include sse2 and the builtin requires sse3.
[1] We're still not erroring on:
__m128i bar(__m128i const *p) { return _mm_lddqu_si128(p); }
where we should fail and error on an always_inline function being
inlined into a function that doesn't support the subtarget features
required.
llvm-svn: 250473
This patch implements the outlining for offloading functions for code
annotated with the OpenMP target directive. It uses a temporary naming
of the outlined functions that will have to be updated later on once
target side codegen and registration of offloading libraries is
implemented - the naming needs to be made unique in the produced
library.
llvm-svn: 249148
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
Depends on D1622.
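A usage sketch (my example) of the new builtin family:
```
int __attribute__((ms_abi)) sum_ints(int count, ...) {
  __builtin_ms_va_list ap;          // the Win64 flavor, not the SysV struct
  __builtin_ms_va_start(ap, count); // must match the function's ms_abi
  int total = 0;
  for (int i = 0; i < count; ++i)
    total += __builtin_va_arg(ap, int);
  __builtin_ms_va_end(ap);
  return total;
}
```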
Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
llvm-svn: 247941
It seems that there is a small bug: we can't generate assume loads
when some virtual functions have internal visibility.
This reverts commit 982bb7d966947812d216489b3c519c9825cacbf2.
llvm-svn: 247332
Currently all variables used in OpenMP regions are captured into a record and passed to outlined functions in this record. It may result in poor performance because of the overly complex analysis required later in optimization passes. The patch makes codegen emit outlined functions for parallel-based regions with an explicit list of captured variables instead, which saves at least 2*n GEPs, stores, and loads.
Codegen for task-based regions remains unchanged because the runtime requires that all captured variables are passed in the captured record.
llvm-svn: 247251
Generating call assume(icmp %vtable, %global_vtable) after constructor
call for devirtualization purposes.
For more info go to:
http://lists.llvm.org/pipermail/cfe-dev/2015-July/044227.html
Edit:
Fixed version because of PR24479.
After this patch got reverted because of a ScalarEvolution bug (D12719),
it was merged after John McCall's big patch (Added Address).
http://reviews.llvm.org/D11859
llvm-svn: 247199
Summary:
Currently clang provides no general way to generate nontemporal loads/stores.
There are some architecture-specific builtins for doing so (e.g. on x86), but
there is no way to generate a non-temporal store on, e.g., AArch64. This patch
adds generic builtins which are expanded to a simple store with the
'!nontemporal' attribute in IR.
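A usage sketch of the new builtins:
```
void stream_copy(int *dst, const int *src, int n) {
  for (int i = 0; i < n; ++i) {
    int v = __builtin_nontemporal_load(&src[i]); // load marked !nontemporal
    __builtin_nontemporal_store(v, &dst[i]);     // store marked !nontemporal
  }
}
```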
Differential Revision: http://reviews.llvm.org/D12313
llvm-svn: 247104
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
Fix processing of shared variables with reference types in OpenMP constructs. Previously, if the variable was not marked in one of the private clauses, the reference to this variable was emitted incorrectly and caused an assertion later.
llvm-svn: 246846
This implements basic support for compiling (though not yet assembling
or linking) for a WebAssembly target. Note that ABI details are not yet
finalized, and may change.
Differential Revision: http://reviews.llvm.org/D12002
llvm-svn: 246814
Added codegen for array sections in the 'depend' clause of the 'task' directive. It emits two pointers, one for the beginning of the array section and another for its end. The size of the section is calculated as (end + 1 - start) * sizeof(basic_element_type).
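A usage sketch; the begin/end pointers described above are derived from the section 'a[lo:len]':
```
void step(int *a, int lo, int len) {
#pragma omp task depend(inout : a[lo:len])
  {
    for (int i = lo; i < lo + len; ++i)
      a[i] += 1;
  }
}
```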
llvm-svn: 246422
Added codegen for array sections in the 'depend' clause of the 'task' directive. It emits two pointers, one for the beginning of the array section and another for its end. The size of the section is calculated as (end + 1 - start) * sizeof(basic_element_type).
llvm-svn: 246278