A trivially copyable type provides a trivial copy constructor and a trivial
copy assignment operator. This is enough for the runtime to memcpy the data
to the device. Additionally there must be no virtual functions or virtual
base classes and the destructor is guaranteed to be trivial, ie performs
no action.
The runtime does not require trivial default constructors because on alloc
the memory is undefined. Thus, weaken the warning to be only issued if the
mapped type is not trivially copyable.
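For illustration, a minimal sketch of the distinction (the struct and function
names are hypothetical, not from the patch):

  #include <type_traits>

  struct Plain {                   // trivially copyable despite the user-provided
    int data[4];                   // default constructor; memcpy to the device is fine
    Plain() : data{} {}
  };

  struct Polymorphic {             // not trivially copyable: virtual destructor
    virtual ~Polymorphic() {}
    int data[4];
  };

  static_assert(std::is_trivially_copyable_v<Plain>, "");
  static_assert(!std::is_trivially_copyable_v<Polymorphic>, "");

  void use(Plain &p) {
  #pragma omp target map(tofrom: p)  // should no longer warn after this change
    { p.data[0] = 1; }
  }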
Differential Revision: https://reviews.llvm.org/D71134
Summary:
Firstprivate variables are variables for which private copies must be created
in OpenMP regions and initialized with the original values. Thus, we must
report when an uninitialized variable is used as firstprivate.
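For example (a hypothetical snippet, not taken from the tests), the checker
should warn here:

  void use(int);

  void foo() {
    int x;                                // never initialized
  #pragma omp parallel firstprivate(x)    // each private copy is initialized from
    {                                     // the uninitialized original: report it
      use(x);
    }
  }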
Reviewers: NoQ
Subscribers: guansong, jdoerfert, caomhin, kkwli0, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64765
llvm-svn: 366689
variables.
Loop control variables are private in loop-based constructs, and we must take
this into account when generating code for inner constructs. Currently, those
variables are reported as shared in many cases. Moved the analysis of the
data-sharing attributes of the loop control variables to an earlier semantic
stage to correctly handle their attributes.
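A sketch of the situation this addresses (illustrative names, not from the
tests):

  void bar(int);

  void foo(int n) {
    int i;
  #pragma omp parallel for
    for (i = 0; i < n; ++i) {  // 'i' is predetermined private on the loop construct,
  #pragma omp task             // so the implicit capture of 'i' in the inner construct
      bar(i);                  // must be based on the private copy, not on a shared 'i'
    }
  }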
llvm-svn: 366474
being shared.
According to the standard, variables with unspecified data-sharing attributes
in the presence of the `default(none)` clause must be reported to users. The
compiler did not generate error reports for variables used in other OpenMP
regions. This patch fixes that.
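For illustration (a hypothetical example), the second use of 'x' below was
previously accepted and must now be diagnosed as well:

  void foo() {
    int x = 0;
  #pragma omp parallel default(none)  // 'x' has no explicit data-sharing attribute:
    x++;                              // this was already reported
  #pragma omp parallel default(none)
  #pragma omp for                     // 'x' referenced only from a nested region was
    for (int i = 0; i < x; ++i)       // previously missed; it is now reported too
      ;
  }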
llvm-svn: 345533
only.
Added support for the -fopenmp-simd option, which allows compilation of
simd-based constructs without emitting OpenMP runtime calls.
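For example, a loop like the following can still be vectorized under
-fopenmp-simd while no calls into the OpenMP runtime are generated
(illustrative snippet):

  // clang -O2 -fopenmp-simd axpy.c
  void axpy(int n, float a, float *x, float *y) {
  #pragma omp simd
    for (int i = 0; i < n; ++i)
      y[i] += a * x[i];
  }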
llvm-svn: 321560
distribute directives.
The OpenMP standard does not allow a variable to be marked as both firstprivate and lastprivate in distribute-based directives. This patch fixes the problem.
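For example (illustrative only), the following must now be rejected:

  void foo(int n, int *a) {
    int x = 0;
  #pragma omp target teams
  #pragma omp distribute firstprivate(x) lastprivate(x)  // error after this patch:
    for (int i = 0; i < n; ++i)                          // 'x' may not be both
      a[i] = x;                                          // firstprivate and lastprivate
  }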
llvm-svn: 319560
[OpenMP] Initial implementation of parse and sema for composite pragma 'distribute parallel for'
This patch is an initial implementation of the composite pragma 'distribute parallel for'.
The main differences that affect other pragmas are:
- The implementation of 'distribute parallel for' requires blocking of the associated loop, where blocks are "distributed" to different teams and iterations within each block are scheduled to parallel threads within each team. To implement blocking, sema creates two additional worksharing-directive fields that are used to pass the team-assigned block lower and upper bounds through the outlined function resulting from 'parallel'. In this way, the scheduling of 'for' iterations to threads can use those bounds.
- As a consequence of blocking, the stride of 'distribute' is not 1 but is equal to the blocking size. This is returned by the runtime, and sema prepares a DistIncrExpr variable to hold that value.
- As a consequence of blocking, the global upper bound (EnsureUpperBound) expression of the 'for' is not the original loop upper bound (e.g., in for (i = 0; i < N; i++) this is 'N') but the team-assigned block upper bound. Sema creates a new expression holding the calculation of the actual upper bound for 'for' as UB = min(UB, PrevUB), where UB is the loop upper bound and PrevUB is the team-assigned block upper bound.
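A sketch of the execution model from the user's perspective (the kernel below
is illustrative, not from the tests):

  void vadd(int n, float *a, float *b, float *c) {
  #pragma omp target teams map(to: a[0:n], b[0:n]) map(from: c[0:n])
  #pragma omp distribute parallel for
    for (int i = 0; i < n; ++i)  // the iteration space is split into blocks that are
      c[i] = a[i] + b[i];        // distributed across teams; within each team the
  }                              // block [PrevLB, PrevUB] is scheduled to the team's
                                 // parallel threads, with the 'for' bound clamped as
                                 // UB = min(UB, PrevUB)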
llvm-svn: 273884
http://reviews.llvm.org/D21564
This patch is an initial implementation of the composite pragma 'distribute parallel for'.
The main differences that affect other pragmas are:
- The implementation of 'distribute parallel for' requires blocking of the associated loop, where blocks are "distributed" to different teams and iterations within each block are scheduled to parallel threads within each team. To implement blocking, sema creates two additional worksharing-directive fields that are used to pass the team-assigned block lower and upper bounds through the outlined function resulting from 'parallel'. In this way, the scheduling of 'for' iterations to threads can use those bounds.
- As a consequence of blocking, the stride of 'distribute' is not 1 but is equal to the blocking size. This is returned by the runtime, and sema prepares a DistIncrExpr variable to hold that value.
- As a consequence of blocking, the global upper bound (EnsureUpperBound) expression of the 'for' is not the original loop upper bound (e.g., in for (i = 0; i < N; i++) this is 'N') but the team-assigned block upper bound. Sema creates a new expression holding the calculation of the actual upper bound for 'for' as UB = min(UB, PrevUB), where UB is the loop upper bound and PrevUB is the team-assigned block upper bound.
llvm-svn: 273705