#pragma omp parallel requires an implicit barrier at the end of the region, which is currently emitted as an explicit call to __kmpc_barrier. However, the runtime already performs a barrier in __kmpc_fork_call, so we currently execute two barriers per region per thread.
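For reference, a minimal region of the affected kind (plain OpenMP;
nothing here is specific to this patch):

  #include <cstdio>
  #include <omp.h>

  int main() {
    #pragma omp parallel
    {
      std::printf("hello from thread %d\n", omp_get_thread_num());
    } // implicit barrier: an explicit __kmpc_barrier was emitted here
      // on top of the one __kmpc_fork_call already performs
    return 0;
  }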
Differential Revision: http://reviews.llvm.org/D15561
llvm-svn: 255992
OpenMP codegen tried to emit the code for its constructs even when they were detected as dead code. Added checks to ensure that the code is emitted only if it is not dead.
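A hypothetical example of the pattern (the region is statically
unreachable, yet its runtime setup used to be emitted):

  void f() {
    return;
    // Unreachable: codegen previously still emitted the runtime
    // calls (__kmpc_fork_call etc.) for the region below.
    #pragma omp parallel
    {
    }
  }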
llvm-svn: 255990
OpenMP 4.5 adds the 'depend(source)' clause for the 'ordered' directive to support cross-iteration dependences. This patch adds parsing and semantic analysis for the clause.
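A typical OpenMP 4.5 doacross loop using the clause (the companion
'depend(sink)' form is shown for context):

  void scan(int n, double *a) {
    #pragma omp parallel for ordered(1)
    for (int i = 1; i < n; ++i) {
      #pragma omp ordered depend(sink : i - 1) // wait for iteration i-1
      a[i] += a[i - 1];
      #pragma omp ordered depend(source)       // mark iteration i done
    }
  }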
llvm-svn: 255986
Inspired by the bug reported in PR25846. Whatever we end up doing about that one, the value handle change is a generally good one, since it will help catch this type of mistake more quickly.
Patch by: Manuel Jacob
llvm-svn: 255984
Type specific declarations have been moved to Type.h and error handling
routines have been moved to ErrorHandling.h. Both are included in Core.h
so nothing should change for projects directly including the headers,
but transitive dependencies may be affected.
llvm-svn: 255965
When determining whether ownership was explicitly written for a
property that is being synthesized, also consider that the original
property might have come from a protocol. Fixes rdar://problem/23931441.
llvm-svn: 255943
"thread-pcs" key is added to the T (questionmark) packet in
gdb-remote protocol so that lldb doesn't need to query the
pc values of every thread before it resumes a process.
The only odd part with this is that I'm sending the pc
values in big endian order, so we need to know the endianness
of the remote process before we can use them. All other
register values in gdb-remote protocol are sent in native-endian
format so this requirement doesn't exist. This addition is a
performance enhancement -- lldb will fall back to querying the
pc of each thread individually if it needs to -- so when
we don't have the byte order for the process yet, we don't
use these values. Practically speaking, the only way I've
been able to elicit this condition is for the first
T packet when we attach to a process.
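For illustration, a stop-reply carrying the new key might look like
this (thread ids and pc values invented):

  T05thread:2f88;threads:2f88,2f89;thread-pcs:100003a10,100003b44;...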
<rdar://problem/21963031>
llvm-svn: 255942
Use the 3-byte (4 with REX prefix) push-pop sequence for materializing
small constants. This is smaller than using a mov (5, 6 or 7 bytes
depending on size and REX prefix), but it's likely to be slower, so
only used for 'minsize'.
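Concretely, for eax = 42 (encodings shown for illustration):

  push $42 ; pop %eax  ->  6A 2A 58          3 bytes (4 with REX)
  mov  $42, %eax       ->  B8 2A 00 00 00    5 bytes (6 or 7 with REX/size)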
This is a follow-up to r255656.
Differential Revision: http://reviews.llvm.org/D15549
llvm-svn: 255936
The current BranchProbability::normalizeProbabilities() forbids known
and unknown probabilities to coexist in the list. This was once used to
help catch exceptional probability inputs, but it has caused some
reported build failures (https://llvm.org/bugs/show_bug.cgi?id=25838).
This patch removes the restriction by evenly distributing the
complement of the sum of all known probabilities to the unknown ones.
We could still treat this as abnormal behavior, but it is better to
emit warnings for it in our future profile validator.
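A simplified model of the new behavior (plain doubles with a negative
sentinel for "unknown"; not the actual BranchProbability API):

  #include <vector>

  // P < 0.0 marks an unknown probability.
  void normalize(std::vector<double> &Probs) {
    double KnownSum = 0.0;
    unsigned NumUnknown = 0;
    for (double P : Probs) {
      if (P < 0.0)
        ++NumUnknown;
      else
        KnownSum += P;
    }
    // Evenly distribute the complement of the known sum to the
    // unknown entries.
    if (NumUnknown != 0) {
      double Share = KnownSum < 1.0 ? (1.0 - KnownSum) / NumUnknown : 0.0;
      for (double &P : Probs)
        if (P < 0.0)
          P = Share;
      KnownSum += Share * NumUnknown;
    }
    // Rescale so the list sums to 1.
    if (KnownSum > 0.0)
      for (double &P : Probs)
        P /= KnownSum;
  }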
Differential Revision: http://reviews.llvm.org/D15548
llvm-svn: 255934
* Pull in host-only implementations of a few CUDA-specific math functions.
* #include <cmath> early to prevent its inclusion from CUDA headers after
they've messed with the __THROW macro.
llvm-svn: 255933
This extends the same line of reasoning used in EarlyCSE with http://reviews.llvm.org/D15352 to the DSE implementation in InstCombine.
Key points:
* We only remove unordered or simple stores.
* The loads that produce the values consumed by dead stores don't influence whether the store is dead (see the sketch below).
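A source-level sketch of the second point -- the ordering on the
feeding load is irrelevant to the store's deadness:

  #include <atomic>

  std::atomic<int> A;
  int G;

  void f() {
    int V = A.load(std::memory_order_acquire); // stays; ordering ignored
    G = V; // dead simple store: overwritten below, nothing in between
    G = 0;
  }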
Differential Revision: http://reviews.llvm.org/D15354
llvm-svn: 255932
Summary:
I didn't realize that we already allowed atomic load/store of
pointers; it was added in 2012 by r162146. This patch updates the
documentation
and tightens the verifier by using DataLayout to make sure that the
stored size is byte-sized and power-of-two. DataLayout is also used for
integers, and while I'm here I updated the corresponding code for
cmpxchg and rmw.
See the following discussion for context and upcoming changes to
add floating-point and vector atomics:
https://groups.google.com/forum/#!topic/llvm-dev/Nh0P_E3CRoo/discussion
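A sketch of the size check itself (illustrative helper, not the
verifier's exact code; assume SizeInBits comes from DataLayout for the
accessed type):

  #include <cstdint>

  // Atomic accesses must be byte-sized and a power of two:
  // 8, 16, 32, 64, ... bits.
  bool isValidAtomicSizeInBits(uint64_t SizeInBits) {
    return SizeInBits >= 8 && (SizeInBits & (SizeInBits - 1)) == 0;
  }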
Reviewers: reames
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D15512
llvm-svn: 255931
Aaron Ballman pointed out a copy-and-paste typo in r255875. This will
preserve the strict weak ordering when comparing DynTypedNodes.
llvm-svn: 255929
The patch fixes Bug 25759, caused by inappropriate handling of unsigned
maximum SCEV expressions in SCEVRemoveMax. Without the fix, we get an
infinite loop and a segmentation fault if we try to process, for example,
'((-1 + (-1 * %b1)) umax {(-1 + (-1 * %yStart)),+,-1}<%.preheader>)'.
It also fixes a potential issue related to signed maximum SCEV expressions.
Tested-by: Roman Gareev <gareevroman@gmail.com>
Fixed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: http://reviews.llvm.org/D15563
llvm-svn: 255922
This records the maximum entry count among all functions in the program in the module, using module flags. This allows the optimizer to use this information.
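A sketch of how such a flag can be recorded (the flag name is an
assumption for illustration, not necessarily the one the patch uses):

  #include "llvm/IR/Module.h"

  void recordMaxEntryCount(llvm::Module &M, uint32_t MaxCount) {
    // Flag name assumed; Error makes disagreeing values a link error.
    M.addModuleFlag(llvm::Module::Error, "MaxFunctionCount", MaxCount);
  }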
Differential Revision: http://reviews.llvm.org/D15163
llvm-svn: 255918
The rules for removing trivially dead stores are a lot less complicated than for loads. Since we know the later store post-dominates the former and the former dominates the later, unless the former has side effects other than the actual store, we can remove it. One slightly surprising thing is that we can freely remove atomic stores, even if the later one isn't atomic. There's no guarantee the atomic one was ever visible.
For the moment, we don't handle DSE of ordered atomic stores. We could extend the same chain of reasoning to them, but the catch is we'd then have to model the ordering effect without a store instruction. Since our fences are stronger than our operation orderings, simply using a fence isn't an obvious win. This arguably calls for a refinement in our fence specification, but that's (much) later work.
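The source-level intuition (LLVM's 'unordered' ordering has no exact
C++ spelling; relaxed is the nearest analogue):

  #include <atomic>

  std::atomic<int> X;

  void overwrite() {
    // The first store is post-dominated by the second with no access
    // in between: no observer is guaranteed to ever see the value 1,
    // so it can be deleted even though the store is atomic.
    X.store(1, std::memory_order_relaxed);
    X.store(2, std::memory_order_relaxed);
  }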
Differential Revision: http://reviews.llvm.org/D15352
llvm-svn: 255914
C++ emits a vtable for a class when the class's key function is
present in the current TU. When we compile CUDA, the fact that the key
function was found in this TU does not mean that we are going to
generate code for it. E.g. the vtable for a class with host-only
methods should not (and cannot) be generated on the device side,
because we'll never generate code for those methods during device-side
compilation.
This patch adds an extra CUDA-specific check during key-method
computation and filters out potential key methods that are not
suitable for this side of CUDA compilation.
When we codegen the vtable, entries for unsuitable methods are set to
null.
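A hypothetical example of the situation:

  struct Widget {
    virtual void host_fn();              // host-only: no __device__
    __device__ virtual void dev_fn() {}
  };

  // Defining host_fn() here makes it Widget's C++ key function, but
  // during device-side compilation no code is generated for it, so it
  // no longer counts as the key function there; if a device-side
  // vtable is emitted, the host_fn() slot is set to null.
  void Widget::host_fn() {}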
Differential Revision: http://reviews.llvm.org/D15309
llvm-svn: 255911
Summary:
Second patch split out from http://reviews.llvm.org/D14752.
Maps metadata as a post-pass from each module when importing
completes, suturing up the final metadata to the temporary metadata
left on the imported instructions.
This entails saving the mapping from bitcode value id to temporary
metadata in the importing pass, and from bitcode value id to final
metadata during the metadata linking post-pass.
Depends on D14825.
Reviewers: dexonsmith, joker.eph
Subscribers: davidxl, llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D14838
llvm-svn: 255909