Add emission of metadata for simd loops in the presence of the 'simdlen' clause.
If the 'simdlen' clause is provided without a 'safelen' clause, the vectorizer width for the loop is set to the value of the 'simdlen' clause and all read/write ops in the loop are marked with '!llvm.mem.parallel_loop_access' metadata.
If the 'simdlen' clause is provided along with a 'safelen' clause, the vectorizer width for the loop is set to the value of the 'simdlen' clause and the read/write ops in the loop are not marked with '!llvm.mem.parallel_loop_access' metadata.
If the 'safelen' clause is provided without a 'simdlen' clause, the vectorizer width for the loop is set to the value of the 'safelen' clause and the read/write ops in the loop are not marked with '!llvm.mem.parallel_loop_access' metadata.
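For illustration only (the function and variable names below are made up, not taken from the tests), the three cases look roughly like this on the source side:

  void saxpy(float *x, float *y, float a, int n) {
    // 'simdlen' alone: vectorizer width 8, and the loads/stores are marked
    // with !llvm.mem.parallel_loop_access.
    #pragma omp simd simdlen(8)
    for (int i = 0; i < n; ++i)
      y[i] = a * x[i] + y[i];

    // 'simdlen' together with 'safelen': vectorizer width still 8 (the
    // 'simdlen' value), but the memory operations are not marked parallel.
    #pragma omp simd simdlen(8) safelen(16)
    for (int i = 0; i < n; ++i)
      y[i] = a * x[i] + y[i];

    // 'safelen' alone: vectorizer width 16, memory operations not marked.
    #pragma omp simd safelen(16)
    for (int i = 0; i < n; ++i)
      y[i] = a * x[i] + y[i];
  }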
llvm-svn: 245697
This is a bit of a step back from what we did in r222531, as there are
some corner cases in C++ where this kind of formatting is really bad.
Example:
Before:
virtual aaaaaaaaaaaaaaaa(std::function<bool()> IsKindWeWant = [&]() {
  return true;
}, aaaaa aaaaaaaaa);
After:
virtual aaaaaaaaaaaaaaaa(std::function<bool()> IsKindWeWant =
                             [&]() { return true; },
                         aaaaa aaaaaaaaa);
The block formatting logic in JavaScript will probably undergo some other changes,
too, and we'll potentially be able to make the rules more consistent again. For
now, this seems to be the best approach for C++.
llvm-svn: 245694
Add parsing/sema analysis for the 'simdlen' clause in simd directives. Also add a check that, if both 'safelen' and 'simdlen' clauses are specified, the value of the 'simdlen' parameter is less than the value of the 'safelen' parameter.
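A made-up example of code the new check should reject (clause values chosen arbitrarily):

  void f(float *x, int n) {
    // 'simdlen' (16) exceeds 'safelen' (8), so sema should now diagnose this.
    #pragma omp simd simdlen(16) safelen(8)
    for (int i = 0; i < n; ++i)
      x[i] += 1.0f;
  }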
llvm-svn: 245692
Change the way EmulateInstruction::eContextPopRegisterOffStack is handled
in UnwindAssemblyInstEmulation::WriteRegister to accommodate
additional cases of eContextPopRegisterOffStack (pop PC/FLAGS).
llvm-svn: 245690
This is intended to improve code generation for GEPs, as the index value is
shifted by the element size and in GEPs of multi-dimensional arrays the index
of higher dimensions is multiplied by the lower dimension size.
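As a worked example of the address arithmetic involved (not code from the patch), for a two-dimensional array the GEP offset is computed as follows:

  constexpr int ROWS = 16, COLS = 32;
  int a[ROWS][COLS];

  int *addr_of(int i, int j) {
    // &a[i][j]: the outer index is multiplied by the size of an inner row
    // (COLS elements), and the resulting element index is then scaled by the
    // element size, i.e. offset = (i * COLS + j) * sizeof(int).
    return reinterpret_cast<int *>(
        reinterpret_cast<char *>(a) + (i * COLS + j) * sizeof(int));
  }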
Differential Revision: http://reviews.llvm.org/D12197
llvm-svn: 245689
Summary:
NPL used to be peppered with casts of the NativeThreadProtocol objects into NativeThreadLinux. I
move these closer to the source where we obtain these objects. This way, the rest of the code can
assume we are working with the correct type of objects.
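The shape of the change, with simplified stand-in classes rather than the real lldb API:

  #include <memory>

  // Simplified stand-ins for the real lldb classes; illustration only.
  struct NativeThreadProtocol { virtual ~NativeThreadProtocol() = default; };
  struct NativeThreadLinux : NativeThreadProtocol { void Resume() {} };
  using NativeThreadProtocolSP = std::shared_ptr<NativeThreadProtocol>;

  // The downcast is done once, right where the thread object is obtained,
  // instead of being repeated at every use site throughout NPL.
  NativeThreadLinux *AsNativeThreadLinux(const NativeThreadProtocolSP &sp) {
    return static_cast<NativeThreadLinux *>(sp.get());
  }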
Reviewers: ovyalov, tberghammer
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D12187
llvm-svn: 245681
While working around a bug in certain standard library implementations,
we would try to diagnose the issue so that library implementors would
fix their code. However, we assumed the entity being initialized was
a non-static data member subobject, even though other circumstances are
possible.
This fixes PR24526.
llvm-svn: 245675
According to the standard, the 'uval' modifier declares the address of the original list item to have an invariant value for all iterations of the associated loop(s). The patch improves codegen for this modifier by removing uses of the original reference variable and replacing them with the referenced l-value.
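A made-up example of the modifier in question (the 'uval' form of the 'linear' clause on a reference-typed list item):

  // 'r' is a reference; 'uval' guarantees the reference itself (the address
  // of the original list item) stays invariant across iterations, so codegen
  // can operate on the referenced l-value directly instead of going through
  // the reference variable on every iteration.
  void fill(int &r, int *out, int n) {
    #pragma omp simd linear(uval(r) : 1)
    for (int i = 0; i < n; ++i)
      out[i] = r; // r takes the values r0, r0 + 1, r0 + 2, ...
  }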
llvm-svn: 245674
MachO and COFF do not support aliases. Restrict the alias to ELF targets. This
should also fix the Darwin build. Make the FNALIAS usage an error on non-ELF
targets.
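A rough sketch of the idea (hypothetical symbol names; the actual FNALIAS definition is not reproduced here): symbol aliases of this kind are an ELF feature, so the macro is limited to ELF and turned into an error elsewhere.

  extern "C" void real_entry_point(void) { /* ... */ }

  #if defined(__ELF__)
  // __attribute__((alias)) requires the target symbol to be defined in the
  // same translation unit and is only meaningful for ELF objects.
  extern "C" void alias_entry_point(void)
      __attribute__((alias("real_entry_point")));
  #else
  #error "function aliases are only supported on ELF targets"
  #endif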
llvm-svn: 245669
Note: I do not implement a base pointer, so it's still impossible to
have dynamic realignment AND dynamic alloca in the same function.
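A made-up example of the combination that remains unsupported without a base pointer:

  #include <alloca.h>

  void f(int n) {
    // The 64-byte request exceeds the ABI stack alignment, so this local
    // needs dynamic stack realignment...
    alignas(64) char big[256];
    // ...while the alloca makes the stack pointer move at runtime, leaving
    // no fixed register to address 'big' from unless a base pointer exists.
    char *scratch = static_cast<char *>(alloca(n));
    big[0] = scratch[0] = 0;
  }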
This also moves the code for determining the frame index reference
into getFrameIndexReference, where it belongs, instead of inline in
eliminateFrameIndex.
[Begin long-winded screed]
Now, stack realignment for Sparc is actually a silly thing to support,
because the Sparc ABI has no need for it -- unlike the situation on
x86, the stack is ALWAYS aligned to the required alignment for the CPU
instructions: 8 bytes on sparcv8, and 16 bytes on sparcv9.
However, LLVM unfortunately implements user-specified overalignment
using stack realignment support, so for now, I'm going to go along
with that tradition. GCC instead treats objects which have alignment
specification greater than the maximum CPU-required alignment for the
target as a separate block of stack memory, with their own virtual
base pointer (which gets aligned). Doing it that way avoids needing to
implement per-target support for stack realignment, except for the
targets which *actually* have an ABI-specified stack alignment which
is too small for the CPU's requirements.
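For reference, "user-specified overalignment" here means something like the following, where the requested alignment exceeds the 8-byte (sparcv8) or 16-byte (sparcv9) alignment the ABI already guarantees:

  void g() {
    alignas(64) double buf[8]; // forces dynamic realignment of the frame
    buf[0] = 1.0;
  }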
Further unfortunately, in LLVM the default canRealignStack for all
targets effectively returns true, despite the fact that implementing it is
something a target needs to do specifically. So, the previous behavior
on Sparc was to silently ignore the user's specified stack
alignment. Ugh.
Yet MORE unfortunate, if a target actually does return false from
canRealignStack, that also causes the user-specified alignment to be
*silently ignored*, rather than emitting an error.
(I started looking into fixing that last, but it broke a bunch of
tests, because LLVM actually *depends* on having it silently ignored:
some architectures (e.g. non-linux i386) have smaller stack alignment
than spilled-register alignment. But, the fact that a register needs
spilling is not known until within the register allocator. And by that
point, the decision to not reserve the frame pointer has been frozen
in place. And without a frame pointer, stack realignment is not
possible. So, canRealignStack() returns false, and
needsStackRealignment() then returns false, assuming everyone can just
go on their merry way assuming the alignment requirements were
probably just suggestions after all. Sigh...)
Differential Revision: http://reviews.llvm.org/D12208
llvm-svn: 245668
For some reason, clang had been treating a command like:
clang -static -fPIC foo.c
as if it should be compiled without the PIC relocation model.
This was incorrect: -static should be affecting only the linking
model, and -fPIC only the compilation.
This new behavior also matches GCC.
This is a follow-up from a review comment on r245447.
Differential Revision: http://reviews.llvm.org/D12208
llvm-svn: 245667
Now that -Winfinite-recursion no longer uses recursive calls to perform the
path analysis, several bits of the code can be improved. The main changes:
1) Return early when finding a path to the exit block without a recursive call.
2) Move the states vector into checkForRecursiveFunctionCall instead of
passing it in by reference.
3) Change checkForRecursiveFunctionCall to return a bool indicating whether
the warning should be emitted.
4) Use the State vector instead of storing it in the Stack vector.
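For context, a minimal function of the kind the warning targets (not taken from the test suite):

  int countdown(int n) {
    // Every path through the body calls the function again, so
    // -Winfinite-recursion fires.
    return countdown(n - 1);
  }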
llvm-svn: 245666
The module splitter splits a module into linkable partitions. It will
be used to implement parallel LTO code generation.
This initial version of the splitter does not attempt to deal with the
somewhat subtle symbol visibility issues around module splitting. These
will be dealt with in a future change.
Differential Revision: http://reviews.llvm.org/D12132
llvm-svn: 245662
This was breaking disassembly for arm machines that we force to be in
thumb mode all the time, because we were only checking for llvm::Triple::arm.
i.e.
armv6m (ARM Cortex-M0)
armv7m (ARM Cortex-M3)
armv7em (ARM Cortex-M4)
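Simplified sketch of the kind of check involved (not the actual lldb code): these cores can show up with either an arm or a thumb triple, so testing for only one of the two ArchType values misclassifies some of them.

  #include "llvm/ADT/Triple.h"

  // Accept both ArchType values; checking llvm::Triple::arm alone misses the
  // thumb-only configurations.
  static bool isArmOrThumbArch(const llvm::Triple &triple) {
    return triple.getArch() == llvm::Triple::arm ||
           triple.getArch() == llvm::Triple::thumb;
  }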
<rdar://problem/22334522>
llvm-svn: 245645
When producing conditional compare sequences for 'or' operations we need
to negate the operands and the finally tested flags. The thing is, if we negate
the finally tested flags, this amounts to a logical negation of all previously
emitted expressions. There was a case missing where we have to order OR
expressions so they get emitted first.
This fixes http://llvm.org/PR24459
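Roughly the kind of source that exercises this path on AArch64 (illustrative, not taken from the PR's test case):

  // The '||' of two compares is lowered as a CMP followed by a CCMP; getting
  // the operand order and the negation of the finally tested flags right is
  // what this change is about.
  bool either(int a, int b) {
    return a == 5 || b == 17;
  }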
llvm-svn: 245641
Creating CMP;CCMP sequences from and/or trees does not gain us anything if
the and/or tree is materialized to a GP register anyway. While most of
the code already checked for hasOneUse(), one important case was
missing.
llvm-svn: 245640