In D12090, ExprValueMap was added to reuse existing values during SCEV expansion.
However, constant folding and sext/zext distribution can still make that reuse
difficult.
A simplified case is: suppose we know S1 expands to V1 in ExprValueMap, and
S1 = S2 + C_a
S3 = S2 + C_b
where C_a and C_b are different SCEVConstants. Then we'd like to expand S3 as
V1 - C_a + C_b instead of expanding S2 literally. This is helpful when S2 is a
complex SCEV expression that has no entry in ExprValueMap, which usually happens
because S3 was generated from S1 by constant folding.
In order to do that, we represent ExprValueMap as a mapping from SCEV to
ValueOffsetPair. We will save both S1->{V1, 0} and S2->{V1, C_a} into the
ExprValueMap when we create SCEV for V1. When S3 is expanded, it will first
expand S2 to V1 - C_a because of S2->{V1, C_a} in the map, then expand S3 to
V1 - C_a + C_b.
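For illustration, here is a minimal C++ sketch of the bookkeeping, with plain
strings and integers standing in for SCEVs and Values (ValueOffsetPair and
ExprValueMap match the names above; everything else is made up):
```
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

// Offset is the constant C such that the mapped SCEV expands to V - C.
struct ValueOffsetPair {
  std::string V;
  int64_t Offset;
};

int main() {
  std::map<std::string, ValueOffsetPair> ExprValueMap;
  int64_t C_a = 5, C_b = 7;
  // When V1 is created for S1 = S2 + C_a, record both entries:
  ExprValueMap["S1"] = {"V1", 0};
  ExprValueMap["S2"] = {"V1", C_a};
  // Expanding S3 = S2 + C_b now reuses S2's entry: S3 => (V1 - C_a) + C_b.
  const ValueOffsetPair &VO = ExprValueMap.at("S2");
  std::cout << "S3 = (" << VO.V << " - " << VO.Offset << ") + " << C_b << "\n";
  return 0;
}
```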
Differential Revision: https://reviews.llvm.org/D21313
llvm-svn: 276136
Lots of blocks had "llvm" or "nasm" syntax types but either weren't following
the syntax, or the syntax has changed (and sphinx hasn't kept up), or the type
doesn't even exist (nasm?).
Other documents had :options: that were invalid. I only removed those that had
warnings, and left the ones that didn't, in order to follow the principle of
least surprise.
Things have been like this for ages, but the buildbot is now failing on errors.
It may take a while to upgrade the buildbot's sphinx, if that's even possible,
but that shouldn't stop us from getting docs updates (which seem to have been
down for quite a while).
Also, we're not losing any syntax highlighting, since when it doesn't parse, it
doesn't colour, i.e. those blocks are not being highlighted anyway.
I'm trying to get all docs in one go, so that it's easy to revert later if we
do fix the issues, or at least easy to know what's left to fix.
llvm-svn: 276109
Summary: In r275989 we enabled the folding of `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))`. Here we add more test cases to ensure this folding works for all logical operations `and`/`or`/`xor`.
Reviewers: grosser
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D22561
Contributed-by: Matthias Reisinger
llvm-svn: 276105
This patch adds costs for the vectorized implementations of CTPOP; the default values were seriously underestimating the cost of these and encouraging vectorization on targets where serialized use of POPCNT would be much better.
Differential Revision: https://reviews.llvm.org/D22456
llvm-svn: 276104
Retry r275776 (no changes, we suspect the issue was with another commit).
The current logic for handling inline asm operands in DAGToDAGISel interprets
the operands by looking for constants, which should represent the flags
describing the kind of operand we're dealing with (immediate, memory, register
def, etc.). The operands representing actual data are skipped only if they are
non-const, with the exception of immediate operands which are skipped explicitly
when a flag describing an immediate is found.
The oversight is that memory operands may be const too (e.g. for device drivers
reading a fixed address), so we should explicitly skip the operand following a
flag describing a memory operand. If we don't, we risk interpreting that
constant as a flag, which is definitely not intended.
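A rough sketch of the corrected scan (assumed shapes and names, not the actual
DAGToDAGISel code): the key point is that the data operands covered by a memory
flag are skipped unconditionally, even when they are constants.
```
#include <cstdio>
#include <vector>

enum OpKind { Imm, Mem, RegDef };
// A flag word followed by NumData data operands; the data may itself be a
// constant (e.g. a fixed device address), which must not be re-read as a flag.
struct Operand {
  bool IsFlag;
  OpKind Kind;      // valid when IsFlag
  unsigned NumData; // valid when IsFlag
};

void scan(const std::vector<Operand> &Ops) {
  for (size_t I = 0; I < Ops.size();) {
    const Operand &F = Ops[I++]; // always a flag word at this position
    std::printf("flag kind %d covering %u operand(s)\n", F.Kind, F.NumData);
    I += F.NumData; // skip the data operands even if they are constants
  }
}

int main() {
  // "m"(*(volatile int *)0x1000): a Mem flag followed by one constant
  // address operand that used to be misread as another flag.
  scan({{true, Mem, 1}, {false, Imm, 0}});
  return 0;
}
```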
Fixes PR26038
Differential Revision: https://reviews.llvm.org/D22103
llvm-svn: 276101
This document was crafted from the various (320+) emails between 2nd June and
20th July regarding the move to GitHub. It tries to consolidate every issue that
was raised and every solution that was presented to have a GitHub repository
with sub-modules.
It *does not* try to argue whether sub-modules are better or worse than any other
Git solution, nor if Git is better than any other VCS, nor if GitHub is better
than any other free code hosting service. These are just the final conclusions of
48 days and 320 emails (plus a lot of IRC discussion) in the LLVM community.
This document will be presented at the survey that the foundation will set up for
us to decide if we move to this solution or not. It reflects what was discussed
on the lists, but it's not authoritative. If something is not clear enough,
please refer to the mailing list discussions (hint: search for "GitHub").
Review: https://reviews.llvm.org/D22463
llvm-svn: 276097
Inference of the 'returned' attribute was fixed in r276008; let's try
turning the backend support back on.
This reverts commit r275677.
llvm-svn: 276081
Summary:
getVectorizablePrefix previously didn't work properly in the face of
aliasing loads/stores. It unwittingly assumed that the loads/stores
appeared in the BB in address order. If they didn't, it would do the
wrong thing.
Reviewers: asbirlea, tstellarAMD
Subscribers: arsenm, llvm-commits, mzolotukhin
Differential Revision: https://reviews.llvm.org/D22535
llvm-svn: 276072
Reverting this commit for now as it seems to be causing failures on
test-suite tests on the clang-ppc64le-linux-lnt bot.
This reverts commit r276044.
llvm-svn: 276068
Add a check that the layout predecessor of a block is an actual CFG
predecessor of the block as well. No current code fails this check, but
upcoming patches can trigger this, and it makes sense to separate it
out.
llvm-svn: 276066
Revert "[LoopSimplify] Update LCSSA after separating nested loops."
This reverts commit r275891.
Revert "[LCSSA] Post-process PHI-nodes created by SSAUpdate when constructing LCSSA form."
This reverts commit r275883.
llvm-svn: 276064
We just set PreserveLCSSA to always true since we don't have an
analogous method `mustPreserveAnalysisID(LCSSA)`.
Also port LoopInfo verifier pass to test LoopUnrollPass.
llvm-svn: 276063
canTailDuplicate accepts two blocks and returns true if the first can be
duplicated into the second successfully. Use this function to
encapsulate the heuristic.
llvm-svn: 276062
Summary:
Functions like "slice" and "drop_front" sound like they might mutate the
underlying object, but they don't. Warning on unused results would have
saved me an hour yesterday, and I'm sure I'm not the only one.
LLVM and Clang are clean wrt this warning after D22540.
Reviewers: majnemer
Subscribers: sanjoy, chandlerc, llvm-commits
Differential Revision: https://reviews.llvm.org/D22541
llvm-svn: 276058
Summary: substr doesn't modify the string, so this line has no effect.
Reviewers: majnemer
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D22540
llvm-svn: 276057
Summary:
Previously, the insertion point for stores was the last instruction in
Chain *before calling getVectorizablePrefixEndIdx*. Thus if
getVectorizablePrefixEndIdx didn't return Chain.size(), we still would
insert at the last instruction in Chain.
This patch changes our internal API a bit in an attempt to make it less
prone to this sort of error. As a result, we end up recalculating the
Chain's boundary instructions, but I think worrying about the speed hit
of this is a premature optimization right now.
Reviewers: asbirlea, tstellarAMD
Subscribers: mzolotukhin, arsenm, llvm-commits
Differential Revision: https://reviews.llvm.org/D22534
llvm-svn: 276056
Summary:
The DEBUG message was hard to read because two Values were being printed
on the same line with only the delimiter "aliases". This change makes
us print each Value on its own line.
Reviewers: asbirlea
Subscribers: llvm-commits, arsenm, mzolotukhin
Differential Revision: https://reviews.llvm.org/D22533
llvm-svn: 276055
Summary:
This helps keep us honest -- there were a number of ways we could screw
up and still have passed this test.
Reviewers: asbirlea
Subscribers: llvm-commits, arsenm
Differential Revision: https://reviews.llvm.org/D22531
llvm-svn: 276053
If 2.5 ulp is acceptable, denormals are not required, and the operation
isn't a reciprocal (which will already be handled), replace it with a
faster fdiv lowering.
Simplify the lowering tests by using per function
subtarget features.
llvm-svn: 276051
This is a variant of scavengeRegister() that works for
enterBasicBlockEnd()/backward(). The benefit of the backward mode is
that it is not affected by incomplete kill flags.
This patch also changes
PrologEpilogInserter::doScavengeFrameVirtualRegs() to use the register
scavenger in backwards mode.
Differential Revision: http://reviews.llvm.org/D21885
llvm-svn: 276044
This adds two pieces:
- RegisterScavenger::enterBasicBlockEnd(), which behaves similarly to
enterBasicBlock() but starts tracking at the end of the basic block.
- A RegisterScavenger::backward() method. It is subtly different
from the existing unprocess() method, which only considers uses with
the kill flag set: if a value is dead at the end of a basic block with
its last use inside the basic block, unprocess() will fail to mark it as
live. However, we cannot change/fix this behaviour because unprocess()
needs to perform the exact reverse operation of forward().
Differential Revision: http://reviews.llvm.org/D21873
llvm-svn: 276043
The pattern may look more obviously like a sext if written as:
```
define i32 @g(i16 %x) {
  %zext = zext i16 %x to i32
  %xor = xor i32 %zext, 32768
  %add = add i32 %xor, -32768
  ret i32 %add
}
```
We already have that fold in visitAdd().
Differential Revision: https://reviews.llvm.org/D22477
llvm-svn: 276035
This patch adds function summary support to CFLAnders. It also comes
with a lot of tests! Woohoo!
Patch by Jia Chen.
Differential Revision: https://reviews.llvm.org/D22450
llvm-svn: 276026
This step builds on Lang Hames' work changing Archive::child_iterator
for better interoperation with Error/Expected. Building on that, it is now
possible to return an error message when the size field of an archive
contains non-decimal characters.
llvm-svn: 276025
This patch adds more specific edges to CFLAndersAliasAnalysis. The goal
of these edges is to give us more information about *how* two values
that MayAlias actually alias. With this, we can now tell cases like
a = b; // ergo, a may alias b
apart from
a = c;
b = c;
// so, a may alias b, but only because they were both assigned to c.
...And others.
Patch by Jia Chen.
Differential Revision: https://reviews.llvm.org/D22429
llvm-svn: 276023
This makes sure that space is actually available. With this change,
running lld on a full file system causes it to exit with
failed to open foo: No space left on device
instead of crashing with a SIGBUS.
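A hedged sketch of the idea, assuming posix_fallocate (not necessarily the
call the patch actually uses): reserve the output file's space up front so
failure surfaces as an error return rather than a fault on a mapped write.
```
#include <cstdio>
#include <cstring>
#include <fcntl.h>

// Returns false (with a message) if the filesystem cannot back Size bytes.
bool reserveSpace(int FD, off_t Size, const char *Name) {
  if (int Err = posix_fallocate(FD, 0, Size)) {
    std::fprintf(stderr, "failed to open %s: %s\n", Name, std::strerror(Err));
    return false;
  }
  return true;
}
```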
llvm-svn: 276017
r274801 did not go far enough to allow gcov+tsan to cooperate. With this
commit it's possible to run the following code without false positives:
std::thread T1(fib), T2(fib);
T1.join(); T2.join();
llvm-svn: 276015
There's not much functional change, but it really is an architectural feature
(on v6T2, v7A, v7R and v7EM) rather than something each CPU implements
individually.
The main functional change is the default behaviour you get when specifying
only "-triple".
llvm-svn: 276013
Also verify that we never try to set the size of a vreg associated
with a register class.
Report an error when we encounter that in MIR. Fix a testcase that
hit that error and had a size for no reason.
llvm-svn: 276012
We skipped over ReturnInsts that didn't return an argument, which would
lead us to incorrectly conclude that an argument returned by another
ReturnInst was 'returned'.
This reverts commit r275756.
This fixes PR28610.
llvm-svn: 276008
Summary:
Currently, InstCombine is already able to fold expressions of the form `logic(cast(A), cast(B))` to the simpler form `cast(logic(A, B))`, where logic designates one of `and`/`or`/`xor`. This transformation is implemented in `foldCastedBitwiseLogic()` in InstCombineAndOrXor.cpp. However, this optimization will not be performed if both `A` and `B` are `icmp` instructions. The decision to preclude casts of `icmp` instructions originates in r48715 in combination with r261707, and can be best understood by the title of the former one:
> Transform (zext (or (icmp), (icmp))) to (or (zext icmp), (zext icmp)) if at least one of the (zext icmp) can be transformed to eliminate an icmp.
Apparently, it introduced a transformation that is a reverse of the transformation that is done in `foldCastedBitwiseLogic()`. Its purpose is to expose pairs of `zext icmp` that would subsequently be optimized by `transformZExtICmp()` in InstCombineCasts.cpp. Therefore, in order to avoid an endless loop of switching back and forth between these two transformations, the one in `foldCastedBitwiseLogic()` has been restricted to exclude `icmp` instructions which is mirrored in the responsible check:
`if ((!isa<ICmpInst>(Cast0Src) || !isa<ICmpInst>(Cast1Src)) && ...`
This check seems to sort out more cases than necessary because:
- the reverse transformation is obviously done for `or` instructions only
- and also not every `zext icmp` pair is necessarily the result of this reverse transformation
Therefore we now remove this check and replace it with a more fine-grained one in `shouldOptimizeCast()` that rejects only those `logic(zext(icmp), zext(icmp))` expressions that could be optimized by `transformZExtICmp()`, which also avoids the mentioned endless loop. That means we are now also able to simplify expressions of the form `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))` (`cast` being an arbitrary `CastInst`).
As an example, consider the following IR snippet
```
%1 = icmp sgt i64 %a, %b
%2 = zext i1 %1 to i8
%3 = icmp slt i64 %a, %c
%4 = zext i1 %3 to i8
%5 = and i8 %2, %4
```
which would now be transformed to
```
%1 = icmp sgt i64 %a, %b
%2 = icmp slt i64 %a, %c
%3 = and i1 %1, %2
%4 = zext i1 %3 to i8
```
This issue became apparent when experimenting with the programming language Julia, which makes use of LLVM. Currently, Julia lowers its `Bool` datatype to LLVM's `i8` (also see https://github.com/JuliaLang/julia/pull/17225). In fact, the above IR example is the lowered form of the Julia snippet `(a > b) & (a < c)`. As shown above, this may introduce `zext` operations casting between `i1` and `i8`, which could, for example, hinder ScalarEvolution and Polly on certain code.
Reviewers: grosser, vtjnash, majnemer
Subscribers: majnemer, llvm-commits
Differential Revision: https://reviews.llvm.org/D22511
Contributed-by: Matthias Reisinger
llvm-svn: 275989
Only whether the value is negative or positive matters, so use a
constant that doesn't require an instruction to materialize.
These should really just emit the write to exec directly,
but for now stick with the kill pseudo-terminator.
llvm-svn: 275988
D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR instead.
It turns out that the behaviour of these intrinsics is different enough from generic IR that this will cause problems: INF/NAN/out-of-range values are guaranteed to result in a 0x80000000 value, which plays havoc with constant folding (which converts them to either zero or UNDEF). This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).
This patch changes both scalar and packed versions back to using x86-specific builtins.
It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.
A companion clang patch is at D22105
Differential Revision: https://reviews.llvm.org/D22106
llvm-svn: 275981
Recommitting after r274347 was reverted. This patch introduces some
classes to refactor the 3 and 4 register Thumb2 multiplication
instruction descriptions, plus improved tests for some of those
instructions.
Differential Revision: https://reviews.llvm.org/D21929
llvm-svn: 275979
The standard local dynamic model for TLS on ARM systems needs two
relocations:
- R_ARM_TLS_LDM32 (module idx)
- R_ARM_TLS_LDO32 (offset of object from origin of module TLS block)
In GNU style assembler we use symbol(tlsldm) and symbol(tlsldo) to
produce these relocations.
llvm-mc for ARM supports symbol(tlsldo) but did not support symbol(tlsldm).
This patch wires up symbol(tlsldm) to R_ARM_TLS_LDM32.
TLS for ARM is defined in "Addenda to, and Errata in, the ABI for the
ARM Architecture".
Differential Revision: https://reviews.llvm.org/D22461
llvm-svn: 275977
As discussed on PR27654, this patch fixes the triples of a lot of aarch64 tests and enables lit tests on Windows.
This will hopefully help stop cases where Windows developers break the AArch64 target.
Differential Revision: https://reviews.llvm.org/D22191
llvm-svn: 275973
Summary:
N32 and N64 follow the standard ELF conventions (.L) whereas O32 uses its own
($).
This fixes the majority of object differences between -fintegrated-as and
-fno-integrated-as.
Reviewers: sdardis
Subscribers: dsanders, sdardis, llvm-commits
Differential Revision: https://reviews.llvm.org/D22412
llvm-svn: 275967
Summary:
The triple used for this distribution is mips64el-linux-gnuabi64.
Reviewers: sdardis
Subscribers: sdardis, llvm-commits
Differential Revision: https://reviews.llvm.org/D22406
llvm-svn: 275966
Summary:
This patch cleans up parts of InstCombine to raise its compliance with the LLVM coding standards and to increase its readability. The changes and the corresponding rationale are summarized in the following:
- Rename `ShouldOptimizeCast()` to `shouldOptimizeCast()` since functions should start with a lower case letter.
- Move `shouldOptimizeCast()` from InstCombineCasts.cpp to InstCombineAndOrXor.cpp since it's only used there.
- Simplify interface of `shouldOptimizeCast()`.
- Minor code style adaptions in `shouldOptimizeCast()`.
- Remove the documentation on the function definition of `shouldOptimizeCast()` since it just repeats the documentation on its declaration. Also enhance the documentation on its declaration with more information describing its intended use and make it doxygen-compliant.
- Change a comment in `foldCastedBitwiseLogic()` from `fold (logic (cast A), (cast B)) -> (cast (logic A, B))` to `fold logic(cast(A), cast(B)) -> cast(logic(A, B))` since the surrounding comments use this format.
- Remove comment `Only do this if the casts both really cause code to be generated.` in `foldCastedBitwiseLogic()` since it just repeats parts of the documentation of `shouldOptimizeCast()` and does not help to improve readability.
- Simplify the interface of `isEliminableCastPair()`.
- Remove the documentation on the function definition of `isEliminableCastPair()`, which only contained obvious statements about its implementation. Instead, add more general doxygen-compliant documentation to its declaration.
- Rename parameter `DoXform` of `transformZExtICmp()` to `DoTransform` to make its intention clearer.
- Move the documentation of `transformZExtICmp()` from its definition to its declaration and make it doxygen-compliant.
Reviewers: vtjnash, grosser
Subscribers: majnemer, llvm-commits
Differential Revision: https://reviews.llvm.org/D22449
Contributed-by: Matthias Reisinger
llvm-svn: 275964
The condition expression (a >> n) & 1 is converted to a "bt a, n" instruction. This works on all Intel targets,
but on AVX-512 it was broken because the expression is modified to (truncate (a >> n) to i1).
I added the new sequence (truncate (a >> n) to i1) to the BT pattern.
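For reference, a minimal C++ pattern that should select BT (illustrative, not
taken from the test suite):
```
#include <cstdint>

// Expected to compile to "bt a, n" + setcc on x86, including on AVX-512,
// where the i1 truncate form used to defeat the pattern.
bool bitTest(uint64_t A, unsigned N) {
  return (A >> N) & 1;
}
```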
Differential Revision: https://reviews.llvm.org/D22354
llvm-svn: 275950
This patch updates MemorySSA's use-optimizing walker to be more
accurate and, in some cases, faster.
Essentially, this changed our core walking algorithm from a
cache-as-you-go DFS to an iteratively expanded DFS, with all of the
caching happening at the end. Said expansion happens when we hit a Phi,
P; we'll try to do the smallest amount of work possible to see if
optimizing above that Phi is legal in the first place. If so, we'll
expand the search to see if we can optimize to the next phi, etc.
An iteratively expanded DFS lets us potentially quit earlier (because we
don't assume that we can optimize above all phis) than our old walker.
Additionally, because we don't cache as we go, we can now optimize above
loops.
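As a very loose illustration of the "expand only when legal" idea (a toy
linear def chain, nothing like the real MemorySSA data structures or the
actual DFS):
```
#include <vector>

struct Access {
  bool IsPhi;
  int DefiningAccess; // -1 for live-on-entry
};

// Walk up the def chain from Start; before stepping past a phi, check
// whether optimizing above it is legal at all, instead of caching every
// intermediate result along the way.
int findNearestClobber(const std::vector<Access> &Accesses, int Start,
                       bool (*LegalToPassPhi)(int)) {
  int Cur = Start;
  while (Accesses[Cur].DefiningAccess != -1) {
    int Def = Accesses[Cur].DefiningAccess;
    if (Accesses[Def].IsPhi && !LegalToPassPhi(Def))
      return Def; // quit early: we can't optimize above this phi
    Cur = Def;    // legal: expand the search one step further up
  }
  return Cur;
}
```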
As an added bonus, this patch adds a ton of verification (if
EXPENSIVE_CHECKS is enabled), so finding bugs is easier.
Differential Revision: https://reviews.llvm.org/D21777
llvm-svn: 275940
Add a "-j" option to llvm-profdata to control the number of threads used.
Auto-detect NumThreads when it isn't specified, and avoid spawning threads when
they wouldn't be beneficial.
I tested this patch using a raw profile produced by clang (147MB). Here is the
time taken to merge 4 copies together on my laptop:
No thread pool: 112.87s user 5.92s system 97% cpu 2:01.08 total
With 2 threads: 134.99s user 26.54s system 164% cpu 1:33.31 total
Changes since the initial commit:
- When handling odd-length inputs, call ThreadPool::wait() before merging the
last profile. Should fix a race/off-by-one (see r275937).
Differential Revision: https://reviews.llvm.org/D22438
llvm-svn: 275938
Instructions in the uniform set will not have vector versions, so
add them to VecValuesToIgnore.
Induction variables only used in uniform instructions or consecutive-pointer
instructions have already been added to VecValuesToIgnore above. For
those induction variables which are only used in uniform instructions or
non-consecutive/non-gather-scatter pointer instructions, the related phi and
update will also be added to the VecValuesToIgnore set.
The change will make the vector RegUsages estimation less conservative.
Differential Revision: https://reviews.llvm.org/D20474
The recommit fixed the testcase global_alias.ll.
llvm-svn: 275936
This is to help move SILowerControlFlow to before regalloc.
There are a couple of tradeoffs with this. The complete CFG
is visible to more passes, the loop body avoids an extra copy of m0,
vcc isn't required, and immediate offsets can be shrunk into s_movk_i32.
The disadvantage is the register allocator doesn't understand that
the single lane's vector is dead within the loop body, so an extra
register is used to outlive the loop block when expanding the
VGPR -> m0 loop. This also now results in worse waitcnt insertion
before the loop instead of after for pending operations at the point
of the indexing, but that should be fixed by future improvements to
cross block waitcnt insertion.
v_movreld_b32's operands are now modeled more correctly since vdst
is not a true output. This is kind of a hack to treat vdst as a
use operand. Extra checking is required in the verifier since
I can't seem to get tablegen to emit an implicit operand for a
virtual register.
llvm-svn: 275934
Add an overview of stubs and compile callbacks before the discussion of the
source changes.
llvm-svn: 275933
This is for a situation where the encoding for a register may be
different depending on the specific operand. For some instructions,
we want to apply additional restrictions beyond the encoding's
constraints.
In AMDGPU some operands are VSrc_32, using the VS_32 pseudo register
class, which accepts VGPRs, SGPRs, or immediates in the encoding.
Some specific instructions with the same encoding operand do not want
to allow immediates or SGPRs, but the encoding format in this case is
different from that of a regular VGPR_32 operand.
This allows specifying the encoding should be treated the same
without introducing yet another dummy register class.
llvm-svn: 275929
Instead of extracting raw coverage mappings into an artifact directory,
actually generate useful html reports for a given list of binaries with
symbol demangling turned on.
No tests, but this is actively being used to drive the (still nascent)
coverage bot.
llvm-svn: 275927
Add a "-j" option to llvm-profdata to control the number of threads
used. Auto-detect NumThreads when it isn't specified, and avoid spawning
threads when they wouldn't be beneficial.
I tested this patch using a raw profile produced by clang (147MB). Here is the
time taken to merge 4 copies together on my laptop:
No thread pool: 112.87s user 5.92s system 97% cpu 2:01.08 total
With 2 threads: 134.99s user 26.54s system 164% cpu 1:33.31 total
Differential Revision: https://reviews.llvm.org/D22438
llvm-svn: 275921
Instructions in the uniform set will not have vector versions, so
add them to VecValuesToIgnore.
Induction variables only used in uniform instructions or consecutive-pointer
instructions have already been added to VecValuesToIgnore above. For
those induction variables which are only used in uniform instructions or
non-consecutive/non-gather-scatter pointer instructions, the related phi and
update will also be added to the VecValuesToIgnore set.
The change will make the vector RegUsages estimation less conservative.
Differential Revision: https://reviews.llvm.org/D20474
llvm-svn: 275912
Summary:
Per D22441, MSVC warns on our old implementation of isUInt<64>. It sees
uint64_t(1) << 64 and doesn't realize that it's not going to be
executed. Writing it as a template specialization is ugly, but prevents
the warning.
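Roughly what that looks like (a simplified sketch of the MathExtras.h shape,
not the verbatim code):
```
#include <cstdint>

template <unsigned N> bool isUInt(uint64_t X) {
  return X < (UINT64_C(1) << N); // MSVC warns here when N == 64
}

// The specialization keeps the N == 64 case from instantiating the shift.
template <> bool isUInt<64>(uint64_t) { return true; }
```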
Reviewers: RKSimon
Subscribers: majnemer, llvm-commits
Differential Revision: https://reviews.llvm.org/D22472
llvm-svn: 275909
This doesn't seem to work with Bash:
$ /work/llvm/utils/release/merge.sh --proj llvm --rev r275870
/work/llvm/utils/release/merge.sh: line 34: ${$1#r}: bad substitution
I get the same error with and without a leading 'r'.
llvm-svn: 275898
Taking address of a byval variable in PTX is legal, but currently runs
into miscompilation by ptxas on sm_50+ (NVIDIA issue 1789042).
Work around the issue by enforcing minimum alignment on byval arguments
of device functions.
The change is a no-op at the SASS level for sm_3x, where ptxas already aligns
the local copy by at least 4.
Differential Revision: https://reviews.llvm.org/D22428
llvm-svn: 275893
Summary:
Usually LCSSA survives this transformation, but in some cases (see
attached test) it doesn't: values from the original loop might be used
in the outer loop after the loops are separated. Before the transformation
it was the same loop, so LCSSA phis were not required.
This fixes PR28272.
Reviewers: sanjoy, hfinkel, chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D21665
llvm-svn: 275891
This is currently only called with GEP users. A direct
alloca would only happen with current typed pointers
for arrays, which is a perverse case.
Also fix crashes on 0 x and 1 x arrays.
llvm-svn: 275869
Elsewhere (particularly computeKnownBits) we assume that a global will be
aligned to the value returned by Value::getPointerAlignment. This is used to
boost the alignment on memcpy/memset, so any target-specific request can only
increase that value.
llvm-svn: 275866
DAGTypeLegalizer::CanSkipSoftenFloatOperand should allow the
SELECT opcode for the x86_64 fp128 type on MME targets,
so that SoftenFloatOperand does not abort on the SELECT opcode.
Differential Revision: http://reviews.llvm.org/D21758
llvm-svn: 275818
We negated a value with a signed type, which invited problems when that
value was the most negative signed number. Use an unsigned type
for the value instead. It will compute the same two's complement
result without the UB.
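In sketch form (assumed shape, not the exact code from the patch):
```
#include <cstdint>

// Negating INT64_MIN as int64_t is UB; in uint64_t the wraparound is
// well-defined and produces the same two's complement bit pattern.
uint64_t negateNoUB(int64_t X) {
  return 0 - static_cast<uint64_t>(X);
}
```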
llvm-svn: 275815
Summary:
The direct motivation for the port is to ensure that the OptRemarkEmitter
tests work with the new PM.
This remains a function pass because we not only create multiple loops
but could also version the original loop.
In the test I need to invoke opt
with -passes='require<aa>,loop-distribute'. LoopDistribute does not
directly depend on AA; however, LAA does. LAA uses getCachedResult, so
I *think* we need to manually pull in 'aa'.
Reviewers: davidxl, silvas
Subscribers: sanjoy, llvm-commits, mzolotukhin
Differential Revision: https://reviews.llvm.org/D22437
llvm-svn: 275811
Summary:
The main goal is to be able to start using the new OptRemarkEmitter
analysis from the LoopVectorizer. Since the vectorizer was recently
converted to the new PM, it makes sense to convert this analysis as
well.
This pass is currently tested through the LoopDistribution pass, so I am
also porting LoopDistribution to get coverage for this analysis with the
new PM.
Reviewers: davidxl, silvas
Subscribers: llvm-commits, mzolotukhin
Differential Revision: https://reviews.llvm.org/D22436
llvm-svn: 275810
Schedule a load and its use in the same packet in MISched. Previously,
isResourceAvailable was returning false for dependences in the same
packet, which prevented MISched from packetizing a load and its use in
the same packet for v60.
Patch by Ikhlas Ajbar.
llvm-svn: 275804
This patch corresponds to review:
https://reviews.llvm.org/D21354
We use direct moves for extracting integer elements from vectors. We also use
direct moves when converting integers to FP. When these operations are chained,
we get a direct move out of a VSR followed by a direct move back into a VSR.
These are redundant - all we need to do is line up the element and convert.
llvm-svn: 275796
Add parseToken and compatriot functions to stitch error checks in
straight linear code. As part of this, fix some erroneous handling of
directives where the EndOfStatement token was either not checked or not
lexed on termination.
Reviewers: rnk, majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D22312
llvm-svn: 275795
The machine scheduler needs to account for available resources
more accurately in order to avoid scheduling an instruction that
forces a new packet to be created.
This occurs in two ways: First, an instruction without an available
resource may have a large priority due to other metrics and be
scheduled when there are other instructions with available resources.
Second, an instruction with a non-zero latency may become available
prematurely. In both these cases, we attempt to change the priority
in order to allow a better instruction to be scheduled.
Patch by Brendon Cahoon.
llvm-svn: 275793
An instruction may have multiple predecessors that are candidates
for using .cur. However, only one of them can use .cur in the
packet. When this case occurs, we need to make sure that only
one of the dependences gets a 0 latency value.
Patch by Brendon Cahoon.
llvm-svn: 275790
When SelectionDAGISel transforms a node representing an inline asm
block, memory constraint information is not preserved. This can cause
constraints to be broken when a memory offset is of the form
"offset + frame index"
once the frame is resolved.
By propagating the constraints all the way to the backend, targets can
enforce memory operands of inline assembly to conform to their constraints.
For MIPSR6, some instructions, such as ll/sc, had their offsets reduced
from 16 bits to 9 bits. This becomes problematic when using inline assembly
to perform atomic operations, as an offset can be generated that is too big
to encode in the instruction.
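A hypothetical snippet of the kind of code affected (illustrative only; it
requires a MIPS target to build, and the exact constraint usage is assumed):
```
// The "m" constraint resolves to offset(base); on MIPSR6 the ll offset
// field is 9 bits, so a large frame offset can no longer be encoded unless
// the backend knows about, and enforces, the memory constraint.
int loadLinked(int *P) {
  int V;
  asm volatile("ll %0, %1" : "=r"(V) : "m"(*P));
  return V;
}
```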
Reviewers: dsanders, vkalintris
Differential Revision: https://reviews.llvm.org/D21615
llvm-svn: 275786
Summary:
The work item intrinsics are not available for the shader
calling conventions. And even if we did hook them up, most
shader stages have some extra restrictions on the amount
of available LDS.
Reviewers: tstellarAMD, arsenm
Subscribers: nhaehnle, arsenm, llvm-commits, kzhuravl
Differential Revision: https://reviews.llvm.org/D20728
llvm-svn: 275779
The current logic for handling inline asm operands in DAGToDAGISel interprets
the operands by looking for constants, which should represent the flags
describing the kind of operand we're dealing with (immediate, memory, register
def, etc.). The operands representing actual data are skipped only if they are
non-const, with the exception of immediate operands which are skipped explicitly
when a flag describing an immediate is found.
The oversight is that memory operands may be const too (e.g. for device drivers
reading a fixed address), so we should explicitly skip the operand following a
flag describing a memory operand. If we don't, we risk interpreting that
constant as a flag, which is definitely not intended.
Fixes PR26038
Differential Revision: https://reviews.llvm.org/D22103
llvm-svn: 275776
At higher optimization levels, we generate the libcall for DIVREM_Ix, which is
fine: aeabi_{u|i}divmod. At -O0 we generate the one for REM_Ix, which is the
default {u}mod{q|h|s|d}i3.
This commit makes sure that we don't generate REM_Ix calls for ABIs that
don't support them (i.e. where we need to use DIVREM_Ix instead). This is
achieved by bailing out of FastISel, which can't handle non-double multi-reg
returns, and letting the legalization infrastructure expand the REM_Ix calls.
It also updates the divmod-eabi.ll test to run under -O0 as well, and adds some
Windows checks to it to make sure we don't break things for it.
Fixes PR27068
Differential Revision: https://reviews.llvm.org/D21926
llvm-svn: 275773
While debugging GVNHoist, I found it confusing that the keys in
VNtoInsns were not always value numbers. They _usually_ were, except for
StoreInst, in which case they were a hash of two different value numbers.
This leads to a few observations:
- It is more difficult to debug things when the semantic contents of
VNtoInsns changes over time.
- Using a single value number is not much cheaper; the value type of
VNtoInsns is a SmallVector.
- It is not immediately clear what the algorithm would do if there were
hash collisions in the StoreInst case.
Using a DenseMap of std::pair sidesteps all of this.
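In sketch form (illustrative standard-library types; the real map is an LLVM
DenseMap keyed by the VN pair):
```
#include <map>
#include <utility>
#include <vector>

struct Instruction; // stands in for llvm::Instruction
using VN = unsigned;

// Before: the key was sometimes a VN, sometimes hash(VN1, VN2) for stores.
// After: the key is the pair itself, so collisions can't conflate entries.
using VNtoInsnsTy = std::map<std::pair<VN, VN>, std::vector<Instruction *>>;
```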
N.B. The changes in the test were due to their sensitivity to the
iteration order of VNtoInsns, which has changed.
llvm-svn: 275761
Don't make the test/tools/llvm-cov/demangle.test depend on the order in
which symbols are seen, or on the exact formatting llvm-cov emits after
a symbol is printed. This is an attempt to fix a Windows bot failure:
http://lab.llvm.org:8011/builders/clang-x86-win2008-selfhost/builds/9141
I don't know what the root cause of the failure is, or why the
showTemplateInstantiations test doesn't fail in the same way on the
Windows bots. However, this measure can't hurt, and it'll at least get
me off the blamelists again.
llvm-svn: 275758
This also reverts r275029, "Update Clang tests after adding inference for the returned argument attribute".
It broke the LTO build; it seems to be a miscompilation.
llvm-svn: 275756
Summary:
Given that we had a bug on max/minUIntN(64), these should have tests
too.
Reviewers: rnk
Subscribers: dylanmckay, llvm-commits
Differential Revision: https://reviews.llvm.org/D22443
llvm-svn: 275723
Summary:
Previously we were relying on 2's complement underflow in an int64_t.
Now we cast to a uint64_t so we explicitly get the behavior we want.
Reviewers: rnk
Subscribers: dylanmckay, llvm-commits
Differential Revision: https://reviews.llvm.org/D22445
llvm-svn: 275722
Summary:
The bit width must be greater than zero, otherwise we shift by the
integer's width, which is UB. Also (more obviously) the width must be
less than or equal to the integer's width, otherwise we shift by a
negative number, which is also UB.
Reviewers: rnk
Subscribers: llvm-commits, dylanmckay
Differential Revision: https://reviews.llvm.org/D22442
llvm-svn: 275720
Summary:
Previously we were doing 1 << S. "1" is an int, so this doesn't work
when S >= 32.
This patch also adds some static_asserts to these functions to ensure
that we don't hit UB by shifting left too much.
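The class of bug, in sketch form (not the exact MathExtras.h code):
```
#include <cstdint>

uint64_t maskBuggy(unsigned S) {
  return (1 << S) - 1; // "1" is a 32-bit int: UB once S >= 32
}

uint64_t maskFixed(unsigned S) {
  return (UINT64_C(1) << S) - 1; // 64-bit shift, valid for S < 64
}
```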
Reviewers: rnk
Subscribers: llvm-commits, dylanmckay
Differential Revision: https://reviews.llvm.org/D22441
llvm-svn: 275719
Doing "I++" inside of an EXPECT_* triggers
warning: expression with side effects has no effect in an unevaluated context
because EXPECT_* partially expands to
EqHelper<(sizeof(::testing::internal::IsNullLiteralHelper(i++)) == 1)>
which is an unevaluated context.
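The safe pattern, for reference (a minimal gtest sketch):
```
#include <gtest/gtest.h>

TEST(SideEffects, KeepThemOutOfExpect) {
  int Vals[] = {1, 2};
  int I = 0;
  EXPECT_EQ(1, Vals[I]); // don't write Vals[I++] inside the macro...
  ++I;                   // ...do the increment as its own statement
  EXPECT_EQ(2, Vals[I]);
}
```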
llvm-svn: 275717