Summary:
LSV used to abort vectorizing a chain when it contained interleaved load/store accesses that alias.
Now a valid prefix of the chain is allowed to be vectorized: mark just the prefix as vectorized and retry vectorizing the remaining chain.
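Sketched below is the new control flow, with hypothetical helpers (findSafePrefixLength, vectorizePrefix) standing in for the real LSV logic; this is an illustration, not the committed code:

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  size_t findSafePrefixLength(ArrayRef<Instruction *> Chain); // hypothetical
  void vectorizePrefix(ArrayRef<Instruction *> Chain, size_t Len); // hypothetical

  void vectorizeChain(ArrayRef<Instruction *> Chain) {
    while (!Chain.empty()) {
      size_t Len = findSafePrefixLength(Chain);
      if (Len == 0)
        break;                       // nothing safe left; stop, don't abort all
      vectorizePrefix(Chain, Len);   // vectorize just the valid prefix...
      Chain = Chain.drop_front(Len); // ...then retry the remaining chain
    }
  }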
Reviewers: llvm-commits, jlebar, arsenm
Subscribers: mzolotukhin
Differential Revision: http://reviews.llvm.org/D22119
llvm-svn: 275317
See http://reviews.llvm.org/D22079
Changes the Archive::child_begin and Archive::children to require a reference
to an Error. If iterator increment fails (because the archive header is
damaged) the iterator will be set to 'end()', and the error stored in the
given Error&. The Error value should be checked by the user immediately after
the loop. E.g.:
Error Err;
for (auto &C : A->children(Err)) {
  // Do something with archive child C.
}
// Check the error immediately after the loop.
if (Err)
  return Err;
Failure to check the Error will result in an abort() when the Error goes out of
scope (as guaranteed by the Error class).
llvm-svn: 275316
This patch is to add two additional tests for testing 'distribute parallel for simd' pragma with disallowed clauses and loops.
Differential Revision: http://reviews.llvm.org/D22169
llvm-svn: 275315
Currently the MIR framework prints all its output (errors and the actual
representation) on stderr.
This patch fixes that by printing the regular output to the file specified
with -o.
Differential Revision: http://reviews.llvm.org/D22251
llvm-svn: 275314
This patch is to add two additional tests for testing 'distribute simd' pragma with disallowed clauses and loops.
Differential Revision: http://reviews.llvm.org/D22176
llvm-svn: 275306
Patch by H.J. Lu.
In the x86-64 psABI, the entry size of the .got and .got.plt sections is 8
bytes for both LP64 and ILP32. Add GotEntrySize and GotPltEntrySize to the
ELF target instead of using the size of ELFT::uint. Now we can generate
a simple working x32 executable.
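As a hedged illustration (simplified; the class and field names follow the commit text, not necessarily the actual lld sources):

  struct TargetInfo {
    // Previously derived from sizeof(ELFT::uint), which is 4 under ILP32
    // and therefore wrong for x32's 8-byte GOT entries.
    unsigned GotEntrySize = 0;
    unsigned GotPltEntrySize = 0;
  };

  struct X86_64TargetInfo : TargetInfo {
    X86_64TargetInfo() {
      GotEntrySize = 8;    // 8 bytes for both LP64 and ILP32 (x32)
      GotPltEntrySize = 8;
    }
  };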
Differential Revision: http://reviews.llvm.org/D22288
llvm-svn: 275301
Config members are named after the corresponding command line options.
This patch renames VAStart to ImageBase so that it is in line with
--image-base.
Differential Revision: http://reviews.llvm.org/D22277
llvm-svn: 275298
This is to follow the convention of naming the test after the name
of the option.
Differential Revision: http://reviews.llvm.org/D22276
llvm-svn: 275295
Doing "I++" inside of an EXPECT_* triggers
warning: expression with side effects has no effect in an unevaluated context
because EXPECT_* partially expands to
EqHelper<(sizeof(::testing::internal::IsNullLiteralHelper(MockObjects[I++] + 1)) == 1)>
which is an unevaluated context.
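The usual fix, sketched here rather than quoting the exact change (SomeCheck is a hypothetical predicate), is to hoist the side effect out of the macro:

  // Before: EXPECT_TRUE(SomeCheck(MockObjects[I++]));  // warns
  EXPECT_TRUE(SomeCheck(MockObjects[I]));  // no side effect inside the macro
  ++I;                                     // increment as a separate statement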
llvm-svn: 275293
Summary: Normally when you do a bitwise operation on an enum value, you
get back an instance of the underlying type (e.g. int). But using this
macro, bitwise ops on your enum will return you back instances of the
enum. This is particularly useful for enums which represent a
combination of flags.
Suppose you have a function which takes an int and a set of flags. One
way to do this would be to take two numeric params:
enum SomeFlags { F1 = 1, F2 = 2, F3 = 4, ... };
void Fn(int Num, int Flags);
void foo() {
  Fn(42, F2 | F3);
}
But now if you get the order of arguments wrong, you won't get an error.
You might try to fix this by changing the signature of Fn so it accepts
a SomeFlags arg:
enum SomeFlags { F1 = 1, F2 = 2, F3 = 4, ... };
void Fn(int Num, SomeFlags Flags);
void foo() {
  Fn(42, static_cast<SomeFlags>(F2 | F3));
}
But now we need a static cast after doing "F2 | F3" because the result
of that computation is the enum's underlying type.
This patch adds a mechanism which gives us the safety of the second
approach with the brevity of the first.
enum SomeFlags {
  F1 = 1, F2 = 2, F3 = 4, ..., F_MAX = 128,
  LLVM_MARK_AS_BITMASK_ENUM(F_MAX)
};
void Fn(int Num, SomeFlags Flags);
void foo() {
  Fn(42, F2 | F3); // No static_cast.
}
The LLVM_MARK_AS_BITMASK_ENUM macro enables overloads for bitwise
operators on SomeFlags. Critically, these operators return the enum
type, not its underlying type, so you don't need any static_casts.
An advantage of this solution over the previously-proposed BitMask class
[0, 1] is that we don't need any wrapper classes -- we can operate
directly on the enum itself.
The approach here is somewhat similar to OpenOffice's typed_flags_set
[2]. But we skirt the need for a wrapper class (and a good deal of
complexity) by judicious use of enable_if. We SFINAE on the presence of
a particular enumerator (added by the LLVM_MARK_AS_BITMASK_ENUM macro)
instead of using a traits class so that it's impossible to use the enum
before the overloads are present. The solution here also seamlessly
works across multiple namespaces.
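A rough sketch of the mechanism (simplified from the actual header; the other operators and compound assignments are omitted):

  #include <type_traits>

  // The macro injects a known enumerator into the enum ...
  #define LLVM_MARK_AS_BITMASK_ENUM(LargestValue)                              \
    LLVM_BITMASK_LARGEST_ENUMERATOR = LargestValue

  // ... and the operator template is viable only for enums that have that
  // enumerator, via SFINAE on the default template argument.
  template <typename E, typename = decltype(E::LLVM_BITMASK_LARGEST_ENUMERATOR)>
  E operator|(E LHS, E RHS) {
    using U = typename std::underlying_type<E>::type;
    return static_cast<E>(static_cast<U>(LHS) | static_cast<U>(RHS));
  }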
[0] http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20150622/283369.html
[1] http://lists.llvm.org/pipermail/llvm-commits/attachments/20150623/073434b6/attachment.obj
[2] https://cgit.freedesktop.org/libreoffice/core/tree/include/o3tl/typed_flags_set.hxx
Reviewers: chandlerc, rsmith
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D22279
llvm-svn: 275292
Because of the goop involved in the EXPECT_EQ macro, we were getting the
following warning
expression with side effects has no effect in an unevaluated context
because the "I++" was being used inside of a template type:
switch (0) case 0: default: if (const ::testing::AssertionResult gtest_ar = (::testing::internal:: EqHelper<(sizeof(::testing::internal::IsNullLiteralHelper(Args[I++])) == 1)>::Compare("Args[I++]", "&A", Args[I++], &A))) ; else ::testing::internal::AssertHelper(::testing::TestPartResult::kNonFatalFailure, "../src/unittests/IR/FunctionTest.cpp", 94, gtest_ar.failure_message()) = ::testing::Message();
llvm-svn: 275291
This encourages checkers to make logical decisions based on which region
the symbol under consideration was introduced to denote.
A similar technique is already used in a couple of checkers;
they were modified to call the new method.
Differential Revision: http://reviews.llvm.org/D22242
llvm-svn: 275290
In D21740, we discussed trying to make this a more general matcher. However, I didn't see a clean
way to handle the regular m_Not cases and these non-splat vector patterns, so I've opted for the
direct approach here. If there are other potential uses of areInverseVectorBitmasks(), we could
move that helper function to a higher level.
There is an open question as to which of these forms should be considered the canonical IR:
%sel = select <4 x i1> <i1 true, i1 false, i1 false, i1 true>, <4 x i32> %a, <4 x i32> %b
%shuf = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 0, i32 5, i32 6, i32 3>
Differential Revision: http://reviews.llvm.org/D22114
llvm-svn: 275289
Summary:
v2: don't count SGPRs spilled to scratch twice
I think this is sufficient. It doesn't count private memory usage, which
happens often and uses scratch but isn't technically a spill. The private
memory usage can be computed by:
[scratch_per_thread - vgpr_spills - a random multiple of SGPR spills].
The fact that SGPR spills add very high numbers to the scratch size makes that
computation a guessing game, but I don't have a solution to that.
Reviewers: tstellarAMD
Subscribers: arsenm, kzhuravl
Differential Revision: http://reviews.llvm.org/D22197
llvm-svn: 275288
Summary:
The function FunctionCaller::WriteFunctionArguments returns false on
errors, so callers should check for a false return value.
Change by Walter Erquinigo <a20012251@gmail.com>
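The pattern being fixed looks roughly like this (the parameter list is illustrative, not the exact signature):

  // Before: the bool result was silently dropped.
  //   function_caller->WriteFunctionArguments(exe_ctx, args_addr, args, diags);
  // After: propagate the failure.
  if (!function_caller->WriteFunctionArguments(exe_ctx, args_addr, args, diags))
    return false;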
Reviewers: jingham, clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D22278
llvm-svn: 275287
Background: symbols and functions can be looked up by full mangled name and by
basename. SymbolFile and ObjectFile are expected to be able to do the lookups
based on full mangled name or by basename, so when the user types something
that is incomplete, we must be able to look it up efficiently. For example, if
the user types "a::b::c" as a symbol to set a breakpoint on, we break this down
into a lookup of "c" and then weed the N matches down to just the ones that
match "a::b::c". Previously this was done manually in many functions by calling
Module::PrepareForFunctionNameLookup(...), doing the lookup, and then manually
pruning the results afterward with duplicated code. Now all places use
Module::LookupInfo to do the work in one place.
This allowed me to fix the name lookups to look for "func" with
eFunctionNameTypeFull as the "name_type_mask" and correctly weed the results
"func", "func()", "func(int)", "a::func()", "b::func()", and "a::b::func()"
down to just "func", "func()", "func(int)". Previously we would have set 6
breakpoints; now we correctly set just 3. This also extends to the expression
parser: when it looks up names for functions, it must not get multiple results,
so that we can call the correct function.
<rdar://problem/24599697>
llvm-svn: 275281
Summary:
The patch extends include-fixer's "-output-headers" and "-insert-headers"
command-line options to make it dump more information (e.g. QualifiedSymbol),
so that the vim integration can add missing qualifiers.
Reviewers: bkramer
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D22299
llvm-svn: 275279
While testing a follow-on change to enable index-based symbol resolution
and internalization in the distributed backends, I realized that a test
case change I made in r275247 was only required because we were not
analyzing symbols in the claimed files in thinlto-index-only mode.
In the fixed test case there should be no internalization because we are
linking in -shared mode, so f() is in fact exported, which is detected
properly when we analyze symbols in thinlto-index-only mode. Note that
this is not (yet) a correctness issue (because we are not yet performing
the index-based linkage optimizations in the distributed backends -
that's coming in a follow-on patch).
llvm-svn: 275277
We know that pcmp produces all-ones/all-zeros bitmasks, so we can use that behavior to avoid unnecessary constant loading.
One could argue that load+and is actually a better solution for some CPUs (Intel big cores) because shifts don't have the
same throughput potential as load+and on those cores, but that should be handled as a CPU-specific later transformation if
it ever comes up. Removing the load is the more general x86 optimization. Note that the uneven usage of vpbroadcast in the
test cases is filed as PR28505:
https://llvm.org/bugs/show_bug.cgi?id=28505
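As a hedged illustration in C++ with SSE intrinsics (not the backend code itself): a compare yields all-ones/all-zeros lanes, so a logical shift can materialize a 0/1 mask without a constant-pool load+and.

  #include <emmintrin.h>

  __m128i ones_from_compare(__m128i a, __m128i b) {
    __m128i m = _mm_cmpgt_epi32(a, b); // lanes: 0x00000000 or 0xFFFFFFFF
    return _mm_srli_epi32(m, 31);      // lanes: 0 or 1; no constant load+and
  }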
Differential Revision: http://reviews.llvm.org/D22225
llvm-svn: 275276
Add a new pass to serve as the basis for automatic accelerator mapping in Polly.
The pass structure and the analyses preserved are copied from
CodeGeneration.cpp, as we will rely on IslNodeBuilder and IslExprBuilder for
LLVM-IR code generation.
Polly's accelerator code generation is enabled with -polly-target=gpu.
I would like to use this commit as an opportunity to thank Yabin Hu for his
work in the context of two Google Summer of Code projects, during which he
implemented initial prototypes of the Polly accelerator code generation --
parts of this code are already available in today's Polly (e.g.,
tools/GPURuntime). More will come as part of the upcoming Polly ACC changes.
Reviewers: Meinersbur
Subscribers: pollydev, llvm-commits
Differential Revision: http://reviews.llvm.org/D22036
llvm-svn: 275275
ppcg will be used to provide mapping decisions for GPU code generation.
As we do not use C as the input language, we do not include pet. However, we include
pet.h from pet 82cacb71 plus a set of dummy functions to ensure ppcg links
without problems.
The version of ppcg committed is unmodified ppcg-0.04, which has been well
tested in the context of LLVM. It does not provide an official library
interface yet, which means that in upcoming commits we will add minor
modifications to make necessary functionality accessible. We aim to upstream
these modifications after we have gained enough experience with GPU code
generation support in Polly to propose a stable interface.
Reviewers: Meinersbur
Subscribers: pollydev, llvm-commits
Differential Revision: http://reviews.llvm.org/D22033
llvm-svn: 275274
http://reviews.llvm.org/D21904
This patch is similar to the implementation of the 'private' clause: it adds a
list of private pointers to be used within the target data region to store the
device pointers returned by the runtime.
Please refer to the following document for a full description of what the
runtime will return in this case (pages 10 and 11):
https://github.com/clang-omp/OffloadingDesign
I am happy to answer any questions related to the runtime interface to help with the review of this patch.
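The description matches OpenMP's use_device_ptr clause on target data; assuming that, a minimal usage example (do_something_on_device is hypothetical):

  // Inside the region, v is privatized and holds the device pointer
  // returned by the runtime.
  #pragma omp target data map(to: v[0:N]) use_device_ptr(v)
  {
    do_something_on_device(v, N);
  }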
llvm-svn: 275271
Minor cleanup.
Currently it looks weird that, despite having the addPredefinedSections()
method, we still add two sections outside it for no real reason.
Patch fixes that.
Differential revision: http://reviews.llvm.org/D19981
llvm-svn: 275269
This is to allow distributed build systems that do not preserve time stamps to use PCH files.
Second and last part of the patch proposed at:
Differential Revision: http://reviews.llvm.org/D20867
llvm-svn: 275267