The vectorization cost of a uniform load wasn't calculated correctly.
As a result, a simple loop that loads a uniform value wasn't vectorized.
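For illustration, a hedged source-level sketch (not the patch's actual test case) of the kind of loop this affects: the load of *src is uniform across all iterations, so its vectorization cost should be a single scalar load plus a broadcast rather than one load per lane.
```
// Illustrative only; the function and names are made up.
void scale(float *dst, const float *src, int n) {
  for (int i = 0; i < n; ++i)
    dst[i] = dst[i] * (*src); // *src is loop-invariant (uniform)
}
```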
Differential Revision: http://reviews.llvm.org/D18940
llvm-svn: 265901
Raw function pointers collected by value
profile data may come from external functions
that are not instrumented. Those functions
won't have mapping data for the deserializer
to use. Force the value to 0 in this case.
llvm-svn: 265890
Summary:
After we make the adjustment, we can assume that for local allocas, but
not for stack parameters, the return address, or any other fixed stack
object (which has a negative offset and therefore lies prior to the
adjusted SP).
Fixes PR26662.
Reviewers: hfinkel, qcolombet, rnk
Subscribers: rnk, llvm-commits
Differential Revision: http://reviews.llvm.org/D18471
llvm-svn: 265886
Broken in D18938 because underlying_type only works for enums, and some stdlibs are sad when given a non-enum. Bots error out with 'only enumeration types have underlying types'.
There's probably a clever enable_if-ism that I can do with underlying_type and the actual integer value, but is_integral_or_enum also accepts implicit conversion so I need to ponder my life choices a bit before committing to template magic. A quick fix for now.
llvm-svn: 265880
Summary:
As discussed in D18775, making AtomicOrdering an enum class makes it non-hashable, which shouldn't be the case. Hashing.h defines hash_value for all is_integral_or_enum types, but type_traits.h's definition of is_integral_or_enum only checks for *implicit* conversion to integral types, which leaves enum classes out and is very confusing because is_enum is true for enum classes.
This patch:
- Adds a check for is_enum when determining is_integral_or_enum.
- Explicitly converts the value parameter in hash_value to handle enum class hashing.
Note that the warning at the top of Hashing.h still applies: each execution of the program has a high probability of producing a different hash_code for a given input. Hash values therefore must not be saved or persisted, and should only be used during execution to build hashing data structures.
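A minimal standalone sketch of the idea (simplified; not the exact LLVM trait or hash_value code):
```
// Simplified illustration, not the LLVM implementation itself.
#include <cstddef>
#include <functional>
#include <type_traits>

template <typename T>
struct is_integral_or_enum
    : std::integral_constant<bool, std::is_integral<T>::value ||
                                       std::is_enum<T>::value> {};

enum class AtomicOrdering { NotAtomic, Monotonic, SequentiallyConsistent };
static_assert(is_integral_or_enum<AtomicOrdering>::value,
              "enum classes now qualify as integral-or-enum");

// Hash by converting the value explicitly, so scoped enums work too.
template <typename T> std::size_t hash_integral_or_enum(T V) {
  return std::hash<long long>()(static_cast<long long>(V));
}
```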
Reviewers: dberlin, chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18938
llvm-svn: 265879
This patch drops the debug info for all DISubprograms that are
(a) not attached to an llvm::Function and
(b) not indirectly reachable via inline scopes from any surviving Function and
(c) not reachable from a type (i.e.: member functions).
Background: I'm currently working on a patch to reverse the pointers
between DICompileUnit and DISubprogram (for more info check Duncan's RFC
on lazy-loading of debug info metadata
http://lists.llvm.org/pipermail/llvm-dev/2016-March/097419.html).
The idea is to remove the list of subprograms from DICompileUnit and
instead point to the owning compile unit from each DISubprogram.
After doing this all DISubprograms fulfilling the above criteria will be
implicitly dropped unless we go through an extra effort to preserve them.
http://reviews.llvm.org/D18477
<rdar://problem/25256815>
llvm-svn: 265876
This patch adds support for decoding XOP VPPERM instruction when it represents a basic shuffle.
The mask decoding required the existing MCInstrLowering code to be updated to support binary shuffles - the implementation now matches what is done in X86InstrComments.cpp.
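For illustration, a sketch of the decoding (based on the documented XOP VPPERM selector encoding, not the actual code added here): each selector byte's low 5 bits pick one of the 32 bytes of the concatenated sources, and a non-zero operation field in the top 3 bits means the mask is not a basic shuffle.
```
// Sketch only; assumes the documented VPPERM selector byte layout.
#include <array>
#include <cstdint>

// Fills Mask and returns true if the selectors encode a plain two-source
// byte shuffle (all operation fields zero); returns false otherwise.
static bool decodeBasicVPPERM(const std::array<uint8_t, 16> &Selectors,
                              std::array<int, 16> &Mask) {
  for (int i = 0; i != 16; ++i) {
    if (Selectors[i] & 0xE0)       // non-zero op field: not a basic shuffle
      return false;
    Mask[i] = Selectors[i] & 0x1F; // byte index into src1:src2 (0..31)
  }
  return true;
}
```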
Differential Revision: http://reviews.llvm.org/D18441
llvm-svn: 265874
This makes it so that when running 'ninja test-suite' from the top-level LLVM ninja build it *always* re-runs the ninja command in the test-suite directory.
This mechanism is required because the top-level ninja file doesn't have a view into the subdirectory dependency tree, so it can't know what, if anything, needs to be rebuilt.
llvm-svn: 265863
Sample-based profiling and optimization remarks currently remove
DICompileUnits from llvm.dbg.cu to suppress the emission of debug info
from them. This is somewhat of a hack and only borderline legal IR.
This patch uses the recently introduced NoDebug emission kind in
DICompileUnit to achieve the same result without breaking the Verifier.
A nice side-effect of this change is that it is now possible to combine
NoDebug and regular compile units under LTO.
http://reviews.llvm.org/D18808
<rdar://problem/25427165>
llvm-svn: 265861
This is a cleanup patch for SSP support in LLVM. There is no functional change.
llvm.stackprotectorcheck is not needed, because SelectionDAG isn't
actually lowering it in SelectBasicBlock; rather, it adds check code in
FinishBasicBlock, ignoring the position where the intrinsic is inserted
(See FindSplitPointForStackProtector()).
llvm-svn: 265851
This is in preparation for tail duplication during block placement. See D18226.
This needs to be a utility class for two reasons: no passes may run after
block placement, and tail duplication affects subsequent layout decisions,
so it must be interleaved with placement and can't be separated out into
its own pass. The original pass is still useful, and now runs by
delegating to the utility class.
llvm-svn: 265842
In Memcpy lowering we had missed a dependence from the operation's load
to successor operations. As a result, we could construct an initial DAG
with a memory dependence that was not fully represented in the chain
sub-DAG; discovering it required looking at the entire DAG, which broke
alias analysis by allowing memory operations to be repositioned
incorrectly.
To work around this, r200033 changed DAGCombiner::GatherAllAliases to be
conservative whenever such an issue could possibly occur. Unfortunately
this check also forbade many non-problematic situations. For example, it's
common for incoming argument lowering to add a non-aliasing load hanging
off of EntryNode. Then, if GatherAllAliases visited EntryNode, it would
find that other (unvisited) use of the EntryNode chain, and just give up
entirely. Furthermore, the check was incomplete: it would not actually
detect all such potentially problematic DAG constructions, because
GatherAllAliases did not guarantee to visit all chain nodes going up to
the root EntryNode. This is in general fine -- giving up early will just
miss a potential optimization, not generate incorrect results. But, for
this non-chain dependency detection code, it's possible that you could
have a load attached to a higher-up chain node than any which were
visited. If that load aliases your store, but the only dependency is
through the value operand of a non-aliasing store, it would've been
missed by this code, and potentially reordered.
With the dependence added, this check can be removed and Alias Analysis
can be much more aggressive. This fixes code quality regression in the
Consecutive Store Merge cleanup (D14834).
Test Change:
ppc64-align-long-double.ll now may see multiple serializations
of its stores
Differential Revision: http://reviews.llvm.org/D18062
llvm-svn: 265836
Strip out the remapping parts of IRLinker::linkFunctionBody and put them
in ValueMapper.cpp under the name Mapper::remapFunction (with a
top-level entry-point llvm::RemapFunction).
This is a nice cleanup on its own since it puts the remapping code
together and shares a single Mapper context for the entire
IRLinker::linkFunctionBody call. Besides that, this will make it easier
to break the co-recursion between IRMover.cpp and ValueMapper.cpp in
follow ups.
llvm-svn: 265835
Use Mapper::mapValue instead of llvm::MapValue from
Mapper::remapInstruction when mapping an incoming block for a PHINode
(follow-up to r265832). This will implicitly pass along the
Materializer argument, but when this code was added in r133513 there was
no Materializer argument. I suspect this call to MapValue was just
missed in r182776 since it's not observable (basic blocks can't be
materialized, and they don't reference other values).
llvm-svn: 265833
Add Mapper::remapInstruction, move the guts of llvm::RemapInstruction
into it, and use the same Mapper for most of the calls to MapValue and
MapMetadata. There should be no functionality change here.
I left off the call to MapValue that wasn't passing in a Materializer
argument (for basic blocks of PHINodes). It shouldn't change
functionality either, but I'm suspicious enough to commit separately.
llvm-svn: 265832
This is a cleanup after clarifying the meaning of RF_IgnoreMissingLocals
in r265628 and truly limiting it to locals in r265768.
This should have no functionality change, since the only context in which
the flag has an effect is when we could hit function-local Values and
Metadata, and we were already passing it in those contexts.
llvm-svn: 265831
Prevent the Metadata side-table in ValueMap from growing unnecessarily
when RF_NoModuleLevelChanges. As a drive-by, make ValueMap::hasMD,
which apparently had no users until I used it here for testing, actually
compile.
llvm-svn: 265828
Stop adding MDString to the Metadata section of the ValueMap in
MapMetadata. It blows up the size of the map for no benefit, since we
can always return quickly anyway.
There is a potential follow-up that I don't think I'll push on right
away, but maybe someone else is interested: stop checking for a
pre-mapped MDString, and move the `isa<MDString>()` checks in
Mapper::mapSimpleMetadata and MDNodeMapper::getMappedOp in front of the
`VM.getMappedMD()` calls. While this would preclude explicitly
remapping MDStrings it would probably be a little faster.
llvm-svn: 265827
Summary:
The llvm cos intrinsic currently does not propagate undef's. This change
transforms cos(undef) to null value or 0.
There are 2 test cases added as well.
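A minimal sketch of the fold (a hypothetical helper, not the code added by the patch):
```
// Sketch only: fold a call to llvm.cos whose argument is undef to the
// null value (+0.0) of the call's floating-point type, as described above.
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

static Value *foldCosOfUndef(CallInst *CI) {
  if (isa<UndefValue>(CI->getArgOperand(0)))
    return Constant::getNullValue(CI->getType());
  return nullptr; // no simplification applies
}
```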
Patch by Anna Thomas!
Reviewers: sanjoy
Subscribers: majnemer, llvm-commits
Differential Revision: http://reviews.llvm.org/D18863
llvm-svn: 265825
This adds a conditional variant of the CallJG instruction, CallBRCL.
It can be used for conditional sibling calls. Unfortunately, due
to IfCvt limitations, it only really works well for functions without
arguments.
Author: koriakin
Differential Revision: http://reviews.llvm.org/D18864
llvm-svn: 265814
When assigning the register banks of an instruction, it is best to know
all the constraints of the input to have a good idea of how this will
impact the cost of the whole function.
llvm-svn: 265812
Do not give that much importance to the current register bank of an
operand. This is likely just a side effect of the current execution, and
it is probably wiser to prefer a register bank that can be derived from
the statically available information (like encoding constraints and
type).
llvm-svn: 265810
We had a select of a cast of a select but attempted to replace the outer
select with the inner select despite their incompatible types.
Patch by Anton Korobeynikov!
This fixes PR27236.
llvm-svn: 265805
Add verbose information when checking if the current and the desired
register banks match.
Detail what happens when we assign a register bank.
llvm-svn: 265804
Two or more identical assumes are occasionally next to each other in a
basic block.
While our generic machinery will turn a redundant assume into a no-op,
it is not super cheap.
We can perform a simpler check to achieve the same result for this case.
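For illustration, a sketch of the sort of cheap check meant here (a hypothetical helper, not necessarily the committed code):
```
// Sketch only: an assume is trivially redundant if the instruction right
// before it is another assume on the same condition.
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;

static bool isImmediatelyRedundantAssume(IntrinsicInst *II) {
  auto *Prev = dyn_cast_or_null<IntrinsicInst>(II->getPrevNode());
  return Prev && Prev->getIntrinsicID() == Intrinsic::assume &&
         Prev->getArgOperand(0) == II->getArgOperand(0);
}
```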
llvm-svn: 265801
InstCombine cannot effectively remove redundant assumptions unless they
are registered in the assumption cache. The vectorizer can create identical
assumptions but doesn't register them with the cache, resulting in
slower compile times because InstCombine tries to reason about a lot
more assumptions.
Fix this by registering the cloned assumptions.
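A minimal sketch of the fix (not the vectorizer's actual code; assumes AssumptionCache::registerAssumption accepts the cloned call):
```
// Sketch only: after cloning an instruction, if it is an llvm.assume,
// register the clone with the AssumptionCache so later passes can see it.
#include "llvm/Analysis/AssumptionCache.h"
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;

static void registerIfAssumption(AssumptionCache &AC, Instruction *Cloned) {
  if (auto *II = dyn_cast<IntrinsicInst>(Cloned))
    if (II->getIntrinsicID() == Intrinsic::assume)
      AC.registerAssumption(II);
}
```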
llvm-svn: 265800
virtual registers.
Generic virtual registers:
- May not have a register class
- May not have a register bank
- If they do not have a register class they must have a size
- If they have a register bank, the size of the register bank must be
greater than or equal to the size of the virtual register (basically,
check that the virtual register will fit into that register bank)
llvm-svn: 265798
For now, we put the register bank in the Class field since a register
may only have one of those at a given time. The downside of that
representation is that if a register class and a register bank have the
same name, we will not be able to distinguish them.
llvm-svn: 265796
Added ISelDAGToDAG functions to enable selection of the smlawb, smlawt,
smulwb and smulwt instructions for the ARM backend. Also updated the smul
CodeGen test and removed the smulw one.
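A hedged source-level illustration (not the patch's CodeGen test) of the kind of pattern SMULWB covers: a 32x16-bit signed multiply that keeps the upper 32 bits of the 48-bit product.
```
// Illustrative only.  With the new selection code this should map to
// SMULWB on ARM: multiply a 32-bit value by a sign-extended 16-bit value
// and take bits [47:16] of the result.
int smulwb_like(int a, short b) {
  return (int)(((long long)a * b) >> 16);
}
```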
Differential Revision: http://reviews.llvm.org/D18892
llvm-svn: 265793
It caused PR27275: "ARM: Bad machine code: Using an undefined physical register"
Also reverting the following commits that were landed on top:
r265610 "Fix the compare-clang diff error introduced by r265547."
r265639 "Fix the sanitizer bootstrap error in r265547."
r265657 "InlineSpiller.cpp: Escap \@ in r265547. [-Wdocumentation]"
llvm-svn: 265790
This re-commits r265535, which was reverted in r265541 because it
broke the Windows bots. The problem was that we had a PointerIntPair
holding a pointer to a struct allocated with new, and new doesn't
provide sufficient alignment guarantees. This pattern was already
present before r265535 and just happened to work. To fix this, we now
replace the PointerIntPair in the ExitNotTakenInfo struct with a
separate pointer and bool.
Original commit message:
Summary:
When the backedge taken condition is computed from an icmp, SCEV can
deduce the backedge taken count only if one of the sides of the icmp
is an AddRecExpr. However, due to sign/zero extensions, we sometimes
end up with something that is not an AddRecExpr.
However, we can use SCEV predicates to produce a 'guarded' expression.
This change adds a method to SCEV to get this expression, and the
SCEV predicate associated with it.
In HowManyGreaterThans and HowManyLessThans we will now add a SCEV
predicate associated with the guarded backedge taken count when the
analyzed SCEV expression is not an AddRecExpr. Note that we only do
this as an alternative to returning a 'CouldNotCompute'.
We use the new feature in Loop Access Analysis and LoopVectorize to analyze
and transform more loops.
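A hedged source-level example (not taken from the patch's tests) of a loop this helps: the 32-bit induction variable is widened to compare against the 64-bit bound, so the backedge-taken count is only computable under a no-wrap predicate that the vectorizer can now version the loop on.
```
// Illustrative only; assumes the usual lowering with a widened induction
// variable, which is what prevents SCEV from seeing a plain AddRecExpr.
void copy(int *dst, const int *src, long long n) {
  for (int i = 0; i < n; ++i)
    dst[i] = src[i];
}
```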
Reviewers: anemet, mzolotukhin, hfinkel, sanjoy
Subscribers: flyingforyou, mcrosier, atrick, mssimpso, sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17201
llvm-svn: 265786
As discussed on D18441 - autobrief is used so we don't need \brief or to repeat the function name; also added some missing descriptions.
llvm-svn: 265785
This is the same change on PPC64 as r255821 on AArch64. I have even borrowed
his commit message.
The access function has a short entry and a short exit, the initialization
block is only run the first time. To improve the performance, we want to
have a short frame at the entry and exit.
We explicitly handle most of the CSRs via copies. Only the CSRs that are not
handled via copies will be in CSR_SaveList.
Frame lowering and prologue/epilogue insertion will generate a short frame
in the entry and exit according to CSR_SaveList. The majority of the CSRs will
be handled by the register allocator, which will try to spill and reload them
in the initialization block.
We add CSRsViaCopy; it will be explicitly handled during lowering.
1> we first set FunctionLoweringInfo->SplitCSR if conditions are met (the target
supports it for the given machine function and the function has only return
exits). We also call TLI->initializeSplitCSR to perform initialization.
2> we call TLI->insertCopiesSplitCSR to insert copies from CSRsViaCopy to
virtual registers at beginning of the entry block and copies from virtual
registers to CSRsViaCopy at beginning of the exit blocks.
3> we also need to make sure the explicit copies will not be eliminated.
Author: Tom Jablin (tjablin)
Reviewers: hfinkel kbarton cycheng
http://reviews.llvm.org/D17533
llvm-svn: 265781
It seems that llc cannot be used in assembler tests, so a test that
checks asm for a particular target needs to be moved to CodeGen.
llvm-svn: 265770
This reverts commit r265765, reapplying r265759 after changing a call from
LocalAsMetadata::get to ValueAsMetadata::get (and adding a unit test). When a
local value is mapped to a constant (like "i32 %a" => "i32 7"), the new debug
intrinsic operand may no longer be pointing at a local.
http://lab.llvm.org:8080/green/job/clang-stage1-configure-RA_build/19020/
The previous commit message follows:
--
This is a partial re-commit -- maybe more of a re-implementation -- of
r265631 (reverted in r265637).
This makes RF_IgnoreMissingLocals behave (almost) consistently between
the Value and the Metadata hierarchy. In particular:
- MapValue returns nullptr or "metadata !{}" for missing locals in
MetadataAsValue/LocalAsMetadata bridging pairs, depending on
the RF_IgnoreMissingLocals flag.
- MapValue doesn't memoize LocalAsMetadata-related results.
- MapMetadata no longer deals with LocalAsMetadata or
RF_IgnoreMissingLocals at all. (This wasn't in r265631 at all, but
I realized during testing it would make the patch simpler with no
loss of generality.)
r265631 went too far, making both functions universally ignore
RF_IgnoreMissingLocals. This broke building (e.g.) compiler-rt.
Reassociate (and possibly other passes) don't currently maintain
dominates-use invariants for metadata operands, resulting in IR like
this:
define void @foo(i32 %arg) {
call void @llvm.some.intrinsic(metadata i32 %x)
%x = add i32 1, %arg
}
If the inliner chooses to inline @foo into another function, then
RemapInstruction will call `MapValue(metadata i32 %x)` and assert that
the return is not nullptr.
I've filed PR27273 to add a Verifier check and fix the underlying
problem in the optimization passes.
As a workaround, return `!{}` instead of nullptr for unmapped
LocalAsMetadata when RF_IgnoreMissingLocals is unset. Otherwise, match
the behaviour of r265631.
Original commit message:
ValueMapper: Make LocalAsMetadata match function-local Values
Start treating LocalAsMetadata similarly to function-local members of
the Value hierarchy in MapValue and MapMetadata.
- Don't memoize them.
- Return nullptr if they are missing.
This also cleans up ConstantAsMetadata to stop listening to the
RF_IgnoreMissingLocals flag.
llvm-svn: 265768
Summary:
Fixes PR26774.
If you're aware of the issue, feel free to skip the "Motivation"
section and jump directly to "This patch".
Motivation:
I define "refinement" as discarding behaviors from a program that the
optimizer has license to discard. So transforming:
```
void f(unsigned x) {
unsigned t = 5 / x;
(void)t;
}
```
to
```
void f(unsigned x) { }
```
is refinement, since the behavior went from "if x == 0 then undefined
else nothing" to "nothing" (the optimizer has license to discard
undefined behavior).
Refinement is a fundamental aspect of many mid-level optimizations done
by LLVM. For instance, transforming `x == (x + 1)` to `false` also
involves refinement since the expression's value went from "if x is
`undef` then { `true` or `false` } else { `false` }" to "`false`" (by
definition, the optimizer has license to fold `undef` to any non-`undef`
value).
Unfortunately, refinement implies that the optimizer cannot assume
that the implementation of a function it can see has all of the
behavior an unoptimized or a differently optimized version of the same
function can have. This is a problem for functions with comdat
linkage, where a function can be replaced by an unoptimized or a
differently optimized version of the same source level function.
For instance, FunctionAttrs cannot assume a comdat function is
actually `readnone` even if it does not have any loads or stores in
it; since there may have been loads and stores in the "original
function" that were refined out in the currently visible variant, and
at the link step the linker may in fact choose an implementation with
a load or a store. As an example, consider a function that does two
atomic loads from the same memory location, and writes to memory only
if the two values are not equal. The optimizer is allowed to refine
this function by first CSE'ing the two loads, and then folding the
comparison to always report that the two values are equal. Such a
refined variant will look like it is `readonly`. However, the
unoptimized version of the function can still write to memory (since
the two loads //can// result in different values), and selecting the
unoptimized version at link time will retroactively invalidate
transforms we may have done under the assumption that the function
does not write to memory.
Note: this is not just a problem with atomics or with linking
differently optimized object files. See PR26774 for more realistic
examples that involved neither.
This patch:
This change introduces a new predicate over linkage types,
`GlobalValue::mayBeDerefined`, that returns true if the linkage type
allows a function to be replaced by a differently optimized variant at
link time. It then changes a set of IPO passes to bail out if they see
such a function.
Reviewers: chandlerc, hfinkel, dexonsmith, joker.eph, rnk
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D18634
llvm-svn: 265762
This patch closes a gap in the DWARF backend that caused LLVM to drop
debug info for floating point variables that were constant for part of
their scope. Floating point constants are emitted as one or more
DW_OP_constu joined via DW_OP_piece.
This fixes a regression caught by the LLDB testsuite that I introduced
in r262247 when we stopped blindly expanding the range of singular
DBG_VALUEs to span the entire scope and started to emit location lists
with accurate ranges instead.
Also deletes a now-impossible testcase (debug-loc-empty-entries).
<rdar://problem/25448338>
llvm-svn: 265760
This is a partial re-commit -- maybe more of a re-implementation -- of
r265631 (reverted in r265637).
This makes RF_IgnoreMissingLocals behave (almost) consistently between
the Value and the Metadata hierarchy. In particular:
- MapValue returns nullptr or "metadata !{}" for missing locals in
MetadataAsValue/LocalAsMetadata bridging pairs, depending on
the RF_IgnoreMissingLocals flag.
- MapValue doesn't memoize LocalAsMetadata-related results.
- MapMetadata no longer deals with LocalAsMetadata or
RF_IgnoreMissingLocals at all. (This wasn't in r265631 at all, but
I realized during testing it would make the patch simpler with no
loss of generality.)
r265631 went too far, making both functions universally ignore
RF_IgnoreMissingLocals. This broke building (e.g.) compiler-rt.
Reassociate (and possibly other passes) don't currently maintain
dominates-use invariants for metadata operands, resulting in IR like
this:
define void @foo(i32 %arg) {
call void @llvm.some.intrinsic(metadata i32 %x)
%x = add i32 1, %arg
}
If the inliner chooses to inline @foo into another function, then
RemapInstruction will call `MapValue(metadata i32 %x)` and assert that
the return is not nullptr.
I've filed PR27273 to add a Verifier check and fix the underlying
problem in the optimization passes.
As a workaround, return `!{}` instead of nullptr for unmapped
LocalAsMetadata when RF_IgnoreMissingLocals is unset. Otherwise, match
the behaviour of r265631.
Original commit message:
ValueMapper: Make LocalAsMetadata match function-local Values
Start treating LocalAsMetadata similarly to function-local members of
the Value hierarchy in MapValue and MapMetadata.
- Don't memoize them.
- Return nullptr if they are missing.
This also cleans up ConstantAsMetadata to stop listening to the
RF_IgnoreMissingLocals flag.
llvm-svn: 265759
Now, recordRegBankForType records only the first register bank that
covers a type instead of the last. This behavior can, nevertheless, be
overridden with the additional Force parameter to force the update.
llvm-svn: 265741
TUs in each unit refer to the unit they are in; if the unit is moved,
this reference is invalidated and things break.
No test case because UB isn't testable - ASan would likely catch this on
a large enough test case (just needs to have enough TUs that a
reallocation of the vector would occur) but didn't seem worthwhile. Up
for debate/revisiting if anyone feels strongly.
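A minimal standalone sketch (hypothetical types, not the actual DWARF code) of the hazard being fixed: elements that keep a pointer back to their owning unit dangle if the container holding the units reallocates and moves them.
```
// Illustrative only.
#include <memory>
#include <vector>

struct Unit;
struct TypeUnit { Unit *OwningUnit; }; // back-reference to the owner
struct Unit { std::vector<TypeUnit> TUs; };

// Unsafe: growing the vector may reallocate and move every Unit, leaving
// each TypeUnit::OwningUnit dangling.
//   std::vector<Unit> Units;
// Safer: keep each Unit at a stable address.
std::vector<std::unique_ptr<Unit>> Units;
```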
llvm-svn: 265740
specific type.
This will be used to find the default mapping of the instruction.
Also, this information is recorded instead of computed, because it is
expensive, starting from a type, to know which register bank maps it.
Indeed, we need to iterate through all the register classes of all the
register banks to find the one that maps the given type.
llvm-svn: 265736
iterate over register class bitmask.
Thanks to this helper class, users of the register class bitmask do not
need to know how it is actually represented. Moreover, it makes the code
much easier to read.
llvm-svn: 265730
from a register.
On top of duplicating the logic, it was buggy! It would assert on
physical registers, since MachineRegisterInfo does not have any
information regarding register classes/banks for them.
llvm-svn: 265727
Summary:
EHPad BBs are not entered the classic way and therefore do not need to be placed after their predecessors. This patch makes sure EHPad BBs are not chosen amongst successors to form chains, and are selected only as a last resort when picking the best candidate.
EHPads are scheduled in reverse probability order so that they flow into each other naturally.
Reviewers: chandlerc, majnemer, rafael, MatzeB, escha, silvas
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17625
llvm-svn: 265726