Commit Graph

12 Commits

Author SHA1 Message Date
Jonas Hahnfeld 071dca24ce [OpenMP] Require trivially copyable type for mapping
A trivially copyable type provides a trivial copy constructor and a trivial
copy assignment operator. This is enough for the runtime to memcpy the data
to the device. Additionally, there must be no virtual functions or virtual
base classes, and the destructor is guaranteed to be trivial, i.e. it performs
no action.
The runtime does not require trivial default constructors because on alloc
the memory is undefined. Thus, weaken the warning so that it is only issued
if the mapped type is not trivially copyable.
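
For illustration only (not part of the original commit; the type and function names are made up), the distinction being checked looks roughly like this:

    // Illustrative sketch: a trivially copyable type vs. one that is not.
    struct Plain {                      // trivially copyable: memcpy to the device is fine
      int x;
      double y;
    };

    struct Tracked {                    // user-defined copy constructor => not trivially copyable
      int x;
      Tracked(const Tracked &o) : x(o.x) {}
    };

    void run(Plain &p, Tracked &t) {
    #pragma omp target map(tofrom: p)   // should not warn
      { p.x += 1; }
    #pragma omp target map(tofrom: t)   // expected to warn: not trivially copyable
      { t.x += 1; }
    }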

Differential Revision: https://reviews.llvm.org/D71134
2019-12-07 13:31:46 +01:00
Alexey Bataev 8a8c69808c [OPENMP]Add support for analysis of reduction variables.
Summary:
Reduction variables are variables for which private copies must be created
in OpenMP regions. These copies are initialized with predefined values that
depend on the reduction operation. After exit from the OpenMP region, the
original variable is updated by combining the reduction value with the value
of the original reduction variable.
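
A small example of the pattern the analysis reasons about (illustration only, not from the patch):

    void demo() {
      int sum = 10;
      // Each thread gets a private copy of 'sum', initialized to 0 for '+';
      // on exit the partial sums are combined back into the original 'sum'.
      #pragma omp parallel for reduction(+: sum)
      for (int i = 0; i < 100; ++i)
        sum += i;
      // afterwards: sum == 10 + (0 + 1 + ... + 99) == 4960
    }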

Reviewers: NoQ

Subscribers: guansong, jdoerfert, caomhin, kkwli0, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D65106

llvm-svn: 367116
2019-07-26 14:50:05 +00:00
Alexey Bataev a914888b49 [OPENMP]Add -Wuninitialized to the erroneous tests for the future fix of PR42392,
NFC.

llvm-svn: 365334
2019-07-08 15:45:24 +00:00
Alexey Bataev e04483ee35 [OPENMP]Initial support for 'allocate' clause.
Added parsing/sema analysis of the allocate clause.
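
A hedged usage sketch of the clause (assuming the predefined omp_high_bw_mem_alloc allocator from <omp.h>):

    #include <omp.h>

    void work() {
      int a = 0;
      // 'a' is private in the region and, per the 'allocate' clause, would be
      // allocated with the high-bandwidth-memory allocator.
      #pragma omp parallel private(a) allocate(omp_high_bw_mem_alloc: a)
      { a = 1; }
    }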

llvm-svn: 357068
2019-03-27 14:14:31 +00:00
Joel E. Denny d2649292ef [OpenMP] Refactor const restriction for reductions
As discussed in D56113, this patch refactors the implementation of the
const restriction for reductions to reuse a function introduced by
D56113.  A side effect is that diagnostics sometimes now say
"variable" instead of "list item" when a list item is a variable.

Reviewed By: ABataev

Differential Revision: https://reviews.llvm.org/D56298

llvm-svn: 350440
2019-01-04 22:11:56 +00:00
Alexey Bataev f07946e101 [OPENMP]Fix PR39372: Does not complain about loop bound variable not
being shared.

According to the standard, variables with unspecified data-sharing attributes
must be reported to the user when a `default(none)` clause is present. The
compiler did not generate error reports for variables used in other OpenMP
regions. This patch fixes that.
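
A hypothetical reproducer in the spirit of the report:

    void bar(int n) {
      #pragma omp parallel default(none)
      {
        // 'n' is used in the nested worksharing region but has no explicit
        // data-sharing attribute; with this fix it is now diagnosed.
        #pragma omp for
        for (int i = 0; i < n; ++i)
          ;
      }
    }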

llvm-svn: 345533
2018-10-29 20:17:42 +00:00
Alexey Bataev a8a9153a37 [OPENMP] Support for -fopenmp-simd option with compilation of simd loops
only.

Added support for the -fopenmp-simd option, which allows compilation of
simd-based constructs without emitting OpenMP runtime calls.
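
For example (sketch, not from the patch), a file like the following can be built with `clang -fopenmp-simd`; the 'simd' directive still guides vectorization, but no libomp runtime calls are generated:

    void axpy(int n, float a, const float *x, float *y) {
      // Only the simd semantics are honored under -fopenmp-simd; directives
      // that would need the OpenMP runtime are parsed but not lowered.
      #pragma omp simd
      for (int i = 0; i < n; ++i)
        y[i] += a * x[i];
    }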

llvm-svn: 321560
2017-12-29 18:07:07 +00:00
Alexey Bataev 4d4624c20c [OPENMP] Fix DSA processing for member declaration.
If the member declaration is captured in an OMPCapturedExprDecl, we may
lose the data-sharing attribute info for this declaration. This patch fixes
the bug.

llvm-svn: 308629
2017-07-20 16:47:47 +00:00
David Sheinkman 92589990ba [OpenMP] Fix segfault when a variable referenced in a reduction clause is a reference parameter.
Differential Revision: http://reviews.llvm.org/D24524
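
A hypothetical reproducer of the crash:

    void accumulate(int &s, const int *v, int n) {
      // 's' is a reference parameter listed in a reduction clause; this
      // previously crashed the compiler.
      #pragma omp parallel for reduction(+: s)
      for (int i = 0; i < n; ++i)
        s += v[i];
    }
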
llvm-svn: 283223
2016-10-04 14:41:36 +00:00
Carlo Bertolli 9925f15661 Resubmission of http://reviews.llvm.org/D21564 after fixes.
[OpenMP] Initial implementation of parse and sema for composite pragma 'distribute parallel for'

This patch is an initial implementation of the composite pragma 'distribute parallel for'.
The main differences that affect other pragmas are:

The implementation of 'distribute parallel for' requires blocking of the associated loop, where blocks are "distributed" to different teams and iterations within each block are scheduled to parallel threads within each team. To implement blocking, sema creates two additional worksharing-directive fields that are used to pass the team-assigned block lower and upper bounds through the outlined function resulting from 'parallel'. In this way, the scheduling of 'for' iterations to threads can use those bounds.
As a consequence of blocking, the stride of 'distribute' is not 1 but equal to the blocking size. This is returned by the runtime, and sema prepares a DistIncrExpr variable to hold that value.
As a consequence of blocking, the global upper bound (EnsureUpperBound) expression of the 'for' is not the original loop upper bound (e.g. in for(i = 0; i < N; i++) this is 'N') but the team-assigned block upper bound. Sema creates a new expression that computes the actual upper bound of the 'for' as UB = min(UB, PrevUB), where UB is the loop upper bound and PrevUB is the team-assigned block upper bound.
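
A rough sketch of the blocking scheme described above (conceptual, not the generated code; 'work' is a placeholder):

    void work(int i);

    void compute(int N) {
      #pragma omp target teams
      #pragma omp distribute parallel for
      for (int i = 0; i < N; ++i)
        work(i);
      // Conceptually, per team:
      //   LB     = PrevLB;           // team-assigned block lower bound
      //   UB     = min(UB, PrevUB);  // EnsureUpperBound: clamp the 'for' to the block
      //   stride = DistIncr;         // blocking size, as returned by the runtime
    }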

llvm-svn: 273884
2016-06-27 14:55:37 +00:00
Carlo Bertolli b8503d5399 Revert r273705
[OpenMP] Initial implementation of parse and sema for composite pragma 'distribute parallel for'

llvm-svn: 273709
2016-06-24 19:20:02 +00:00
Carlo Bertolli e77d6e0e4d [OpenMP] Initial implementation of parse and sema for composite pragma 'distribute parallel for'
http://reviews.llvm.org/D21564

This patch is an initial implementation of the composite pragma 'distribute parallel for'.
The main differences that affect other pragmas are:

The implementation of 'distribute parallel for' requires blocking of the associated loop, where blocks are "distributed" to different teams and iterations within each block are scheduled to parallel threads within each team. To implement blocking, sema creates two additional worksharing-directive fields that are used to pass the team-assigned block lower and upper bounds through the outlined function resulting from 'parallel'. In this way, the scheduling of 'for' iterations to threads can use those bounds.
As a consequence of blocking, the stride of 'distribute' is not 1 but equal to the blocking size. This is returned by the runtime, and sema prepares a DistIncrExpr variable to hold that value.
As a consequence of blocking, the global upper bound (EnsureUpperBound) expression of the 'for' is not the original loop upper bound (e.g. in for(i = 0; i < N; i++) this is 'N') but the team-assigned block upper bound. Sema creates a new expression that computes the actual upper bound of the 'for' as UB = min(UB, PrevUB), where UB is the loop upper bound and PrevUB is the team-assigned block upper bound.

llvm-svn: 273705
2016-06-24 18:53:35 +00:00