Commit Graph

1229 Commits

Author SHA1 Message Date
Ivan A. Kosarev 0e528202b8 [CodeGen] EmitPointerWithAlignment() to generate TBAA info along with LValue base info
Differential Revision: https://reviews.llvm.org/D38796

llvm-svn: 315731
2017-10-13 18:40:18 +00:00
Ivan A. Kosarev 78f486d136 [CodeGen] getNaturalTypeAlignment() to generate TBAA info along with LValue base info
This patch should not bring in any functional changes.

Differential Revision: https://reviews.llvm.org/D38794

llvm-svn: 315708
2017-10-13 16:58:30 +00:00
Ivan A. Kosarev 1590fd3aa8 [CodeGen] EmitLoadOfReference() to generate TBAA info along with LValue base info
This patch should not bring in any functional changes.

Differential Revision: https://reviews.llvm.org/D38793

llvm-svn: 315705
2017-10-13 16:50:50 +00:00
Ivan A. Kosarev 9029564e8c [CodeGen] EmitLoadOfPointer() to generate TBAA info along with LValue base info
This patch should not bring in any functional changes.

Differential Revision: https://reviews.llvm.org/D38791

llvm-svn: 315704
2017-10-13 16:47:22 +00:00
Ivan A. Kosarev 229a6d8d17 [CodeGen] EmitCXXMemberDataPointerAddress() to generate TBAA info along with LValue base info
This patch should not bring in any functional changes.

Differential Revision: https://reviews.llvm.org/D38788

llvm-svn: 315702
2017-10-13 16:38:32 +00:00
Ivan A. Kosarev f5f204679b [CodeGen] Generate TBAA info along with LValue base info
This patch enables explicit generation of TBAA information in all
cases where LValue base info is propagated or constructed in
non-trivial ways. Eventually, we will consider each of these
cases to make sure the TBAA information is correct and not too
conservative. For now, we just fall back to generating TBAA info
from the access type.

This patch should not bring in any functional changes.

This is part of D38126 reworked to be a separate patch to
simplify review.

Differential Revision: https://reviews.llvm.org/D38733

llvm-svn: 315575
2017-10-12 11:29:46 +00:00
Ivan A. Kosarev 5f8c0ca53d [CodeGen] Do not construct complete LValue base info in trivial cases
Besides the obvious code simplification, avoiding explicit creation
of LValueBaseInfo objects makes it easier to make TBAA
information part of such objects.

This is part of D38126 reworked to be a separate patch to
simplify review.

Differential Revision: https://reviews.llvm.org/D38695

llvm-svn: 315289
2017-10-10 09:39:32 +00:00
Erich Keane 1fe643a6d7 Split X86::BI__builtin_cpu_init handling into its own function [NFC]
The Cpu Init functionality is required for the target
attribute, so this patch simply splits it out into its own
function, exactly like CpuIs and CpuSupports.

llvm-svn: 315075
2017-10-06 16:40:45 +00:00
Ivan A. Kosarev 383890bad4 Refine generation of TBAA information in clang
This patch is an attempt to clarify and simplify generation and
propagation of TBAA information. The idea is to pack all values
that describe a memory access, namely, base type, access type and
offset, into a single structure. This is supposed to make further
changes, such as adding support for unions and array members,
easier to prepare and review.
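
The packed descriptor can be pictured roughly as follows. This is only a sketch
based on the description above; the member names and types are assumptions, not
the exact definition from clang's sources:

  #include <cstdint>
  namespace llvm { class MDNode; }

  // Sketch only: one descriptor carries everything needed to build an access tag.
  struct TBAAAccessInfo {
    llvm::MDNode *BaseType;   // type of the enclosing object, or null for plain scalar accesses
    llvm::MDNode *AccessType; // type of the access itself
    uint64_t Offset;          // offset of the access within the base type
  };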

DecorateInstructionWithTBAA() is no longer responsible for
converting types to tags. These implicit conversions not only
complicate reading the code, but also suggest assigning scalar
access tags where we generally prefer full-size struct-path tags.

TBAAPathTag is replaced with TBAAAccessInfo; the latter is now
the type of the keys of the cache map that translates access
descriptors to metadata nodes.

Fixed a bug with writing to a wrong map in
getTBAABaseTypeMetadata() (former getTBAAStructTypeInfo()).

We now check for valid base access types every time we
dereference a field. The original code only checked the top-level
base type. See the isValidBaseType() / isTBAAPathStruct() calls.

Some entities have been renamed to be more accurate and less
confusing/misleading in the presence of path-aware TBAA information.

We also no longer look up the same cache entry twice in
getAccessTagInfo().

Refined relevant comments and descriptions.

Differential Revision: https://reviews.llvm.org/D37826

llvm-svn: 315048
2017-10-06 08:17:48 +00:00
Akira Hatanaka 6b103bc18c [CodeGen] Emit a helper function for __builtin_os_log_format to reduce
code size.

Currently clang expands a call to __builtin_os_log_format into a long
sequence of instructions at the call site, causing code size to
increase in some cases.

This commit attempts to reduce code size by emitting a helper function
that can be shared by calls to __builtin_os_log_format with similar
formats and arguments. The helper function has linkonce_odr linkage to
enable the linker to merge identical functions across translation units.
Attribute 'noinline' is attached to the helper function at -Oz so that
the inliner doesn't inline functions that can potentially be merged.
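
For reference, a typical call site looks like the sketch below (an assumed usage
pattern, not code from this commit; the buffer-size builtin pairing is the usual
idiom). Each such expansion is what the shared helper now factors out:

  void log_value(int x) {
    // Size the buffer for the encoded payload, then let the builtin fill it in.
    char buf[__builtin_os_log_format_buffer_size("value: %d", x)];
    __builtin_os_log_format(buf, "value: %d", x);
    // 'buf' would then be handed to the os_log runtime.
  }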

This commit also fixes a bug where the generated IR writes past the end
of the buffer when "%m" is the last specifier appearing in the format
string passed to __builtin_os_log_format.

Original patch by Duncan Exon Smith.

rdar://problem/34065973
rdar://problem/34196543

Differential Revision: https://reviews.llvm.org/D38606

llvm-svn: 315045
2017-10-06 07:12:46 +00:00
Ivan A. Kosarev afc074cc41 Revert r314977 "[CodeGen] Unify generation of scalar and struct-path TBAA tags"
D37826 was mistakenly committed in place of the patch from D38503.

Differential Revision: https://reviews.llvm.org/D38503

llvm-svn: 314978
2017-10-05 11:05:43 +00:00
Ivan A. Kosarev 6fa20cfea3 [CodeGen] Unify generation of scalar and struct-path TBAA tags
This patch makes it possible to produce access tags in a uniform
manner regardless of whether the resulting tag will be a scalar or a
struct-path one. getAccessTagInfo() now takes care of the actual
translation of access descriptors to tags and can handle all
kinds of accesses. Facilities that are specific to scalar accesses
are eliminated.

Some more details:
* DecorateInstructionWithTBAA() is no longer responsible for converting
  types to access tags. Instead, it takes an access
  descriptor (TBAAAccessInfo) and generates the corresponding access
  tag from it.
* getTBAAInfoForVTablePtr() is reworked into
  getTBAAVTablePtrAccessInfo(), which now returns the
  virtual-pointer access descriptor and not the virtual-pointer
  type metadata.
* Added function getTBAAMayAliasAccessInfo() that returns the
  descriptor for may-alias accesses.
* getTBAAStructTagInfo() is renamed to getTBAAAccessTagInfo(), as it is
  now the only way to generate an access tag from a given access
  descriptor. It is capable of producing both scalar and
  struct-path tags, depending on options and the availability of the
  base access type. getTBAAScalarTagInfo() and its cache
  ScalarTagMetadataCache are eliminated.
* Now that we do not need to care about whether the resulting
  access tag should be a scalar or struct-path one,
  getTBAAStructTypeInfo() is renamed to getBaseTypeInfo().
* Added function getTBAAAccessInfo() that constructs an access
  descriptor from a given QualType access type.

This is part of D37826 reworked to be a separate patch to
simplify review.

Differential Revision: https://reviews.llvm.org/D38503

llvm-svn: 314977
2017-10-05 10:47:51 +00:00
Ivan A. Kosarev a511ed7501 [CodeGen] Introduce generic TBAA access descriptors
With this patch we implement a concept of TBAA access descriptors
that are capable of representing both scalar and struct-path
accesses in a generic way.

This is part of D37826 reworked to be a separate patch to
simplify review.

Differential Revision: https://reviews.llvm.org/D38456

llvm-svn: 314780
2017-10-03 10:52:39 +00:00
Vedant Kumar 24792e3ab1 [ubsan] Add helpers to decide when null/vptr checks are required. NFC.
llvm-svn: 314750
2017-10-03 01:27:25 +00:00
Ivan A. Kosarev 289574edc0 [CodeGen] Do not refer to complete TBAA info where we actually deal with just TBAA access types
This patch fixes misleading names of entities related to getting,
setting and generating TBAA access type descriptors.

This is effectively an attempt to facilitate review of D37826 by
breaking it into smaller pieces.

Differential Revision: https://reviews.llvm.org/D38404

llvm-svn: 314657
2017-10-02 09:54:47 +00:00
Alexey Bataev f47c4b4184 [OPENMP] Generate implicit map|firstprivate clauses for target-based
directives.

If a variable is used in the target-based region but is not found in
any private|mapping clause, an implicit firstprivate|map clause is
generated for it.
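
For illustration (an assumed example, not one of the commit's tests), a scalar
referenced without any clause now gets an implicit firstprivate clause, while an
array gets an implicit map clause:

  void kernel(int n) {
    int data[8];
  #pragma omp target                 // no explicit map or data-sharing clauses
    {
      data[0] = n;                   // 'n' becomes implicitly firstprivate,
    }                                // 'data' is implicitly mapped tofrom
  }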

llvm-svn: 314205
2017-09-26 13:47:31 +00:00
Akira Hatanaka ba0367a708 [CodeGen][ObjC] Build the global block structure before emitting the
body of global block invoke functions.

This commit fixes an infinite loop in IRGen that occurs when compiling
the following code:

void FUNC2() {
  static void (^const block1)(int) = ^(int a){
    if (a--)
      block1(a);
  };
}

This is how IRGen gets stuck in the infinite loop:

1. GenerateBlockFunction is called to emit the body of "block1".

2. GetAddrOfGlobalBlock is called to get the address of "block1". The
   function calls getAddrOfGlobalBlockIfEmitted to check whether the
   global block has been emitted. If it hasn't been emitted, it then
   tries to emit the body of the block function by calling
   GenerateBlockFunction, which goes back to step 1.

This commit prevents the infinite loop by building the global block in
GenerateBlockFunction before emitting the body of the block function.

rdar://problem/34541684

Differential Revision: https://reviews.llvm.org/D38118

llvm-svn: 314029
2017-09-22 21:32:06 +00:00
Alexey Bataev b7f18c3297 [OPENMP] Handle re-declaration of captured variables in CodeGen.
If the captured variable has a re-declaration, we may end up in a
situation where the captured variable is the re-declaration while the
referenced variable is the canonical declaration (or vice versa). In
this case we may generate wrong code. This patch fixes that situation.
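
A hedged illustration of the pattern in question (not a test from this commit):
the variable captured by the OpenMP region has both a canonical declaration and
a later redeclaration.

  extern int G;        // canonical declaration
  int G = 0;           // later redeclaration (the definition)

  void f() {
  #pragma omp parallel firstprivate(G)
    { G += 1; }        // the capture and the reference may resolve to different redeclarations
  }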

llvm-svn: 313995
2017-09-22 16:56:13 +00:00
Vedant Kumar bb5d485cd3 [ubsan] Function Sanitizer: Don't require writable text segments
This change will make it possible to use -fsanitize=function on Darwin and
possibly on other platforms. It fixes an issue with the way RTTI is stored in
function prologue data.

On Darwin, addresses stored in prologue data can't require run-time fixups and
must be PC-relative. Run-time fixups are undesirable because they necessitate
writable text segments, which can lead to security issues. And absolute
addresses are undesirable because they break PIE mode.

The fix is to create a private global which points to the RTTI, and then to
encode a PC-relative reference to the global into prologue data.

Differential Revision: https://reviews.llvm.org/D37597

llvm-svn: 313096
2017-09-13 00:04:35 +00:00
Krasimir Georgiev 46dfb7a39d Updated two annotations for Store.h and CodeGenFunction.h.
Summary:
1. Updated annotations in include/clang/StaticAnalyzer/Core/PathSensitive/Store.h that referred to an old version of clang.
2. Deleted annotations for CodeGenFunction::getEvaluationKind() in clang/lib/CodeGen/CodeGenFunction.h that referred to an old version of clang.

Reviewers: bkramer, krasimir, klimek

Reviewed By: bkramer

Subscribers: MTC

Differential Revision: https://reviews.llvm.org/D36330

Contributed by @MTC!

llvm-svn: 312790
2017-09-08 13:44:51 +00:00
Karl-Johan Karlsson 33e205a40f Debug info: Fixed faulty debug locations for attributed statements
Summary:
Because attributed statements are considered simple statements, no
stop point was generated before emitting an attributed do/while/for/range
statement. This led to faulty debug locations.
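
For context, one common way such an attributed loop statement arises is from a
loop pragma (an assumed example, not taken from the patch's tests); with the fix,
the debugger stops at the loop itself again:

  void scale(int *a, int n) {
  #pragma clang loop vectorize(enable)   // wraps the 'for' below in an AttributedStmt
    for (int i = 0; i < n; ++i)
      a[i] *= 2;
  }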

Reviewers: echristo, aaron.ballman, dblaikie

Reviewed By: dblaikie

Subscribers: bjope, aprantl, cfe-commits

Differential Revision: https://reviews.llvm.org/D37428

llvm-svn: 312623
2017-09-06 08:47:18 +00:00
Erich Keane 9937b134c5 [CodeGen]Refactor CpuSupports/CPUIs Builtin Code Gen to better work with
"target" implementation

A small set of refactors that will make it easier for me to implement 'target'
support.

First, extract the CPUSupports functionality into its own function.
This has the advantage of not wasting time in this builtin dealing with
arguments.
Second, pull both the CPUSupports and CPUIs implementations into a member
function, so that they can be called from the resolver generation that I'm
working on.
Third, create an overload that takes simply the feature/CPU name (rather than
extracting it from a CallExpr), since that info isn't available later.

Note that despite how the 'diff' looks, the EmitX86CPUSupports function simply 
takes the implementation out of the 'switch'.

llvm-svn: 312355
2017-09-01 19:42:45 +00:00
Alex Lorenz 6cc8317c38 [IRGen] Evaluate constant static variables referenced through member
expressions

C++ allows us to reference static variables through member expressions. Prior to
this commit, non-integer static variables that were referenced using a member
expression were always emitted using lvalue loads. The old behaviour introduced
an inconsistency between regular uses of static variables and member expression
uses. For example, the following program compiled and linked successfully:

struct Foo {
   constexpr static const char *name = "foo";
};
int main() {
  return Foo::name[0] == 'f';
}

but this program failed to link because "Foo::name" wasn't found:

struct Foo {
   constexpr static const char *name = "foo";
};
int main() {
  Foo f;
  return f.name[0] == 'f';
}

This commit ensures that constant static variables referenced through member
expressions are emitted in the same way as ordinary static variable references.

rdar://33942261

Differential Revision: https://reviews.llvm.org/D36876

llvm-svn: 311772
2017-08-25 10:07:00 +00:00
Alexey Bataev 6a71f364f1 [OPENMP] Fix for PR34014: OpenMP 4.5: Target construct in static method
of class fails to map class static variable.

If the global variable is captured and has several redeclarations, this
may sometimes lead to a compiler crash. The patch fixes this by working
only with canonical declarations.
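
A hedged sketch of the failing pattern from the PR (the exact reproducer is not
shown here): a class static variable mapped from a target construct inside a
static method.

  struct S {
    static int Counter;
    static void bump() {
  #pragma omp target map(tofrom : Counter)
      { Counter += 1; }
    }
  };
  int S::Counter = 0;   // a further redeclaration of Counter could trigger the crash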

llvm-svn: 311479
2017-08-22 17:54:52 +00:00
Alexey Bataev 8c3edfef6b [OPENMP] Fix for PR28581: OpenMP linear clause - wrong results.
If a worksharing construct has at least one linear item, an implicit
synchronization point must be emitted to avoid possible conflicts with
loading/storing values of the original variables. An implicit barrier is
now added before the actual start of the worksharing construct if a
linear item is found.
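
A minimal sketch of the affected pattern (assumed, not a test from this commit):
the implicit barrier now emitted before the loop keeps the private copies of the
linear variable from conflicting with reads/writes of the original.

  void count(int *out, int n) {
    int j = 0;
  #pragma omp parallel
    {
      // Because of linear(j), an implicit barrier is now emitted before this loop.
  #pragma omp for linear(j : 1)
      for (int i = 0; i < n; ++i)
        out[i] = j;
    }
  }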

llvm-svn: 311013
2017-08-16 15:58:46 +00:00
Reid Kleckner 2d3c421f1c Clean up some lambda conversion operator code, NFC
We don't need special handling in CodeGenFunction::GenerateCode for
lambda block pointer conversion operators anymore. The conversion
operator emission code immediately calls back to the generic
EmitFunctionBody.

Rename EmitLambdaStaticInvokeFunction to EmitLambdaStaticInvokeBody for
better consistency with the other Emit*Body methods.

I'm preparing to do something about PR28299, which touches this code.

llvm-svn: 310145
2017-08-04 22:38:06 +00:00
Vedant Kumar 10c3102071 [ubsan] Diagnose invalid uses of builtins (clang)
On some targets, passing zero to the clz() or ctz() builtins has undefined
behavior. I ran into this issue while debugging UB in __hash_table from libcxx:
the bug I was seeing manifested itself differently under -O0 vs -Os, due to a
UB call to clz() (see: libcxx/r304617).

This patch introduces a check which can detect UB calls to builtins.
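
For example (a hedged sketch; the check group name is an assumption about how
this diagnostic is enabled), passing zero here is undefined and can now be
caught at run time:

  unsigned leading_zeros(unsigned x) {
    // Undefined when x == 0; flagged under -fsanitize=builtin.
    return __builtin_clz(x);
  }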

llvm.org/PR26979

Differential Revision: https://reviews.llvm.org/D34590

llvm-svn: 309459
2017-07-29 00:19:51 +00:00
Erich Keane 0026ed2f9c Fix double destruction of objects when OpenMP construct is canceled
When an omp for loop is canceled, the constructed objects are destructed
twice.

It looks like the desired code is:

{

  Obj o;
  If (cancelled) branch-through-cleanups to cancel.exit.

}
[cleanups]
cancel.exit:

__kmpc_for_static_fini
br cancel.cont (*)

cancel.cont:

__kmpc_barrier
return

The problem seems to be that the branch to cancel.cont currently also goes
through the cleanups, calling them again. This change just does a direct branch
instead.

Patch By: michael.p.rice@intel.com

Differential Revision: https://reviews.llvm.org/D35854

llvm-svn: 309288
2017-07-27 16:28:20 +00:00
Richard Smith ae8d62c9c5 Add branch weights to branches for static initializers.
The initializer for a static local variable cannot be hot, because it runs at
most once per program. That's not quite the same thing as having a low branch
probability, but under the assumption that the function is invoked many times,
modeling this as a branch probability seems reasonable.

For TLS variables, the situation is less clear, since the initialization side
of the branch can run multiple times in a program execution, but we still
expect initialization to be rare relative to non-initialization uses. It would
seem worthwhile to add a PGO counter along this path to make this estimation
more accurate in future.

For globals with guarded initialization, we don't yet apply any branch weights.
Due to our use of COMDATs, the guard will be reached exactly once per DSO, but
we have no idea how many DSOs will define the variable.

llvm-svn: 309195
2017-07-26 22:01:09 +00:00
Vedant Kumar 175b6d1f28 [ubsan] Teach the pointer overflow check that "p - <unsigned> <= p" (PR33430)
The pointer overflow check gives false negatives when dealing with
expressions in which an unsigned value is subtracted from a pointer.
This is summarized in PR33430 [1]: ubsan permits the result of the
subtraction to be greater than "p", but it should not.

To fix the issue, we should track whether or not the pointer expression
is a subtraction. If it is, and the indices are unsigned, we know to
expect "p - <unsigned> <= p".

I've tested this by running check-{llvm,clang} with a stage2
ubsan-enabled build. I've also added some tests to compiler-rt, which
are in D34122.

[1] https://bugs.llvm.org/show_bug.cgi?id=33430

Differential Revision: https://reviews.llvm.org/D34121

llvm-svn: 307955
2017-07-13 20:55:26 +00:00
Akira Hatanaka 2246167362 [Sema] Mark a virtual CXXMethodDecl as used if a call to it can be
devirtualized.

The code to detect devirtualized calls is already in IRGen, so move the
code to lib/AST and make it a shared utility between Sema and IRGen.

This commit fixes a linkage error I was seeing when compiling the
following code:

$ cat test1.cpp
struct Base {
  virtual void operator()() {}
};

template<class T>
struct Derived final : Base {
  void operator()() override {}
};

Derived<int> *d;

int main() {
  if (d)
    (*d)();
  return 0;
}

rdar://problem/33195657

Differential Revision: https://reviews.llvm.org/D34301

llvm-svn: 307883
2017-07-13 06:08:27 +00:00
Yaxun Liu cbf647cc3a CodeGen: Fix address space of global variable
Certain targets (e.g. amdgcn) require global variables to stay in the global or constant address
space. In C or C++, global variables are emitted in the default (generic) address space.
This patch introduces virtual functions TargetCodeGenInfo::getGlobalVarAddressSpace
and TargetInfo::getConstantAddressSpace to handle this in a general approach.

It only affects IR generated for amdgcn target.

Differential Revision: https://reviews.llvm.org/D33842

llvm-svn: 307470
2017-07-08 13:24:52 +00:00
Vedant Kumar c34d343f15 [ubsan] Improve diagnostics for return value checks (clang)
This patch makes ubsan's nonnull return value diagnostics more precise,
which makes the diagnostics more useful when there are multiple return
statements in a function. Example:

1 |__attribute__((returns_nonnull)) char *foo() {
2 |  if (...) {
3 |    return expr_which_might_evaluate_to_null();
4 |  } else {
5 |    return another_expr_which_might_evaluate_to_null();
6 |  }
7 |} // <- The current diagnostic always points here!

runtime error: Null returned from Line 7, Column 2!

With this patch, the diagnostic would point to either Line 3, Column 5
or Line 5, Column 5.

This is done by emitting source location metadata for each return
statement in a sanitized function. The runtime is passed a pointer to
the appropriate metadata so that it can prepare and deduplicate reports.

Compiler-rt patch (with more tests): https://reviews.llvm.org/D34298

Differential Revision: https://reviews.llvm.org/D34299

llvm-svn: 306163
2017-06-23 21:32:38 +00:00
Yaxun Liu 84744c152a CodeGen: Cast temporary variable to proper address space
In C++ all variables are in the default address space. Previously a change was
made to cast automatic variables to the default address space. However, that is
not sufficient, since all temporary variables need to be cast to the default
address space.

This patch casts all temporary variables to the default address space except those
used for passing indirect arguments, since they are only used for load/store.

This patch only affects targets having a non-zero alloca address space.

Differential Revision: https://reviews.llvm.org/D33706

llvm-svn: 305711
2017-06-19 17:03:41 +00:00
Eric Fiselier cddaf8728f [coroutines] Allow co_await and co_yield expressions that return an lvalue to compile
Summary:
The title says it all.


Reviewers: GorNishanov, rsmith

Reviewed By: GorNishanov

Subscribers: rjmccall, cfe-commits

Differential Revision: https://reviews.llvm.org/D34194

llvm-svn: 305496
2017-06-15 19:43:36 +00:00
Vedant Kumar 6dbf4274a5 [ubsan] Detect invalid unsigned pointer index expression (clang)
Adding an unsigned offset to a base pointer has undefined behavior if
the result of the expression would precede the base. An example from
@regehr:

  int foo(char *p, unsigned offset) {
    return p + offset >= p; // This may be optimized to '1'.
  }

  foo(p, -1); // UB.

This patch extends the pointer overflow check in ubsan to detect invalid
unsigned pointer index expressions. It changes the instrumentation to
only permit non-negative offsets in pointer index expressions when all
of the GEP indices are unsigned.

Testing: check-llvm, check-clang run on a stage2, ubsan-instrumented
build.

Differential Revision: https://reviews.llvm.org/D33910

llvm-svn: 305216
2017-06-12 18:42:51 +00:00
Vedant Kumar a125eb55cb [ubsan] Add a check for pointer overflow UB
Check pointer arithmetic for overflow.
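
A small example of what the new check catches (an assumed illustration, not the
added lit test):

  #include <cstddef>

  bool wraps_past_end(int *base, std::size_t huge) {
    // If base + huge wraps around the address space, the check reports
    // pointer overflow instead of silently producing a bogus pointer.
    return base + huge < base;
  }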

For some more background on this check, see:

  https://wdtz.org/catching-pointer-overflow-bugs.html
  https://reviews.llvm.org/D20322

Patch by Will Dietz and John Regehr!

This version of the patch is different from the original in a few ways:

  - It introduces the EmitCheckedInBoundsGEP utility which inserts
    checks when the pointer overflow check is enabled.

  - It does some constant-folding to reduce instrumentation overhead.

  - It does not check some GEPs in CGExprCXX. I'm not sure that
    inserting checks here, or in CGClass, would catch many bugs.

Possible future directions for this check:

  - Introduce CGF.EmitCheckedStructGEP, to detect overflows when
    accessing structures.

Testing: Apart from the added lit test, I ran check-llvm and check-clang
with a stage2, ubsan-instrumented clang. Will and John have also done
extensive testing on numerous open source projects.

Differential Revision: https://reviews.llvm.org/D33305

llvm-svn: 304459
2017-06-01 19:22:18 +00:00
Krzysztof Parzyszek 8b9897fff4 Restore and update documentation comment for EmitPointerWithAlignment
llvm-svn: 303419
2017-05-19 12:03:34 +00:00
NAKAMURA Takumi 76f938692f CodeGenFunction::EmitPointerWithAlignment(): Prune a \param in r303358, possibly obsolete. [-Wdocumentation]
llvm-svn: 303414
2017-05-19 10:19:59 +00:00
Krzysztof Parzyszek 8f248234fa [CodeGen] Propagate LValueBaseInfo instead of AlignmentSource
The functions creating LValues propagated information about alignment
source. Extend the propagated data to also include information about
possible unrestricted aliasing. A new class LValueBaseInfo will
contain both AlignmentSource and MayAlias info.
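
A rough sketch of the new pair (names are taken from the description above; the
enumerators and the actual definition in clang's headers are assumptions and may
differ):

  enum class AlignmentSource { Decl, AttributedType, Type };  // assumed enumerators

  class LValueBaseInfo {
    AlignmentSource AlignSource;
    bool MayAlias;
  public:
    LValueBaseInfo(AlignmentSource Source, bool Alias)
        : AlignSource(Source), MayAlias(Alias) {}
    AlignmentSource getAlignmentSource() const { return AlignSource; }
    bool getMayAlias() const { return MayAlias; }
  };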

This patch should not introduce any functional changes.

Differential Revision: https://reviews.llvm.org/D33284

llvm-svn: 303358
2017-05-18 17:07:11 +00:00
Xiuli Pan be6da4bbdb [OpenCL] Add intel_reqd_sub_group_size attribute support
Summary:
Add intel_reqd_sub_group_size attribute support for the Intel extension cl_intel_required_subgroup_size from
https://www.khronos.org/registry/OpenCL/extensions/intel/cl_intel_required_subgroup_size.txt

Reviewers: Anastasia, bader, hfinkel, pxli168

Reviewed By: Anastasia, bader, pxli168

Subscribers: cfe-commits, yaxunl

Differential Revision: https://reviews.llvm.org/D30805

llvm-svn: 302125
2017-05-04 07:31:20 +00:00
Sanjoy Das e369bd92da Adapt to LLVM's rename of WeakVH to WeakTrackingVH; NFC
llvm-svn: 301815
2017-05-01 17:08:00 +00:00
Akira Hatanaka a6b6dcc123 [CodeGen][ObjC] Don't retain captured Objective-C pointers at block
creation that are const-qualified.

When a block captures an ObjC object pointer, clang retains the pointer
to prevent prematurely destroying the object the pointer points to
before the block is called or copied.

When the captured object pointer is const-qualified, we can avoid
emitting the retain/release pair since the pointer variable cannot be
modified in the scope in which the block literal is introduced.

For example:

void test(const id x) {
    callee(^{ (void)x; });
}

This patch implements that optimization.

rdar://problem/28894510

Differential Revision: https://reviews.llvm.org/D32601

llvm-svn: 301667
2017-04-28 18:50:57 +00:00
Sanjoy Das a84ae0b943 Revert "Update to LLVM's use of WeakTrackingVH; NFC"
This reverts commit r301427.

llvm-svn: 301430
2017-04-26 16:37:51 +00:00
Sanjoy Das 2b5aa7c152 Update to LLVM's use of WeakTrackingVH; NFC
Summary: Depends on D32266

Reviewers: davide, dblaikie

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D32270

llvm-svn: 301427
2017-04-26 16:22:36 +00:00
Carlo Bertolli b0ff0a69c3 Recommit of
[OpenMP] Initial implementation of code generation for pragma 'distribute parallel for' on host

https://reviews.llvm.org/D29508

This patch makes the following additions:

It abstracts away loop bound generation code from procedures associated with pragma 'for' and loops in general, in such a way that the same procedures can be used for 'distribute parallel for' without the need for a full re-implementation.
It implements code generation for 'distribute parallel for' and adds regression tests. It includes tests for clauses.
It is important to notice that most of the clauses are implemented as part of existing procedures. For instance, firstprivate is already implemented for 'distribute' and 'for' as separate pragmas. As the implementation of 'distribute parallel for' is based on the same procedures, then we automatically obtain implementation for such clauses without the need to add new code. However, this requires regression tests that verify correctness of produced code.
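A minimal use of the combined construct looks like this (an assumed example, not one of the added regression tests; data-mapping details are elided since the commit targets host code generation):

  void saxpy(int n, float a, const float *x, float *y) {
  #pragma omp target teams
  #pragma omp distribute parallel for
    for (int i = 0; i < n; ++i)
      y[i] = a * x[i] + y[i];
  }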

llvm-svn: 301340
2017-04-25 17:52:12 +00:00
Carlo Bertolli f09daae75d Revert r301223
llvm-svn: 301233
2017-04-24 19:50:35 +00:00
Carlo Bertolli 4287d65c10 [OpenMP] Initial implementation of code generation for pragma 'distribute parallel for' on host
https://reviews.llvm.org/D29508

This patch makes the following additions:

1. It abstracts away loop bound generation code from procedures associated with pragma 'for' and loops in general, in such a way that the same procedures can be used for 'distribute parallel for' without the need for a full re-implementation.
2. It implements code generation for 'distribute parallel for' and adds regression tests. It includes tests for clauses.

It is important to notice that most of the clauses are implemented as part of existing procedures. For instance, firstprivate is already implemented for 'distribute' and 'for' as separate pragmas. As the implementation of 'distribute parallel for' is based on the same procedures, then we automatically obtain implementation for such clauses without the need to add new code. However, this requires regression tests that verify correctness of produced code.

Looking forward to comments.

llvm-svn: 301223
2017-04-24 19:26:11 +00:00
Vedant Kumar ffd7c887d6 [ubsan] Reduce alignment checking of C++ object pointers
This patch teaches ubsan to insert an alignment check for the 'this'
pointer at the start of each method/lambda. This allows clang to emit
significantly fewer alignment checks overall, because if 'this' is
aligned, so are its fields.
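
Illustration (an assumed example): the alignment of 'this' is now checked once
on entry to the method, covering the field accesses below.

  struct Point {
    int x, y;
    int sum() const {
      // One alignment check for 'this' here replaces separate checks
      // on the 'x' and 'y' member accesses.
      return x + y;
    }
  };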

This is essentially the same thing r295515 does, but for the alignment
check instead of the null check. One difference is that we keep the
alignment checks on member expressions where the base is a DeclRefExpr.
There's an opportunity to diagnose unaligned accesses in this situation
(as pointed out by Eli, see PR32630).

Testing: check-clang, check-ubsan, and a stage2 ubsan build.

Along with the patch from D30285, this roughly halves the amount of
alignment checks we emit when compiling X86FastISel.cpp. Here are the
numbers from patched/unpatched clangs based on r298160.

  ------------------------------------------
  | Setup          | # of alignment checks |
  ------------------------------------------
  | unpatched, -O0 |                 24326 |
  | patched, -O0   |                 12717 | (-47.7%)
  ------------------------------------------

Differential Revision: https://reviews.llvm.org/D30283

llvm-svn: 300370
2017-04-14 22:03:34 +00:00
Evgeniy Stepanov 1a8030e737 [cfi] Emit __cfi_check stub in the frontend.
Previously __cfi_check was created in the LTO optimization pipeline, which
means LLD has no way of knowing about the existence of this symbol
without rescanning the LTO output object. As a result, LLD fails to
export __cfi_check, even when given the --export-dynamic-symbol flag.
llvm-svn: 299806
2017-04-07 23:00:38 +00:00