This fixes a crash where the user is a COPY, which deliberately does not
constrain its source operands, resulting in a vreg without a reg class escaping
selection.
Differential Revision: https://reviews.llvm.org/D42697
llvm-svn: 324047
The one place that uses these functions isn't particularly
long/complicated, so it's easier to just have them inline at that
location than to try to split them out into a true header. (In part this
is also because of the use of the DEBUG macros, which keep this from
being a real standalone header even if the static functions were made
inline instead.)
llvm-svn: 324044
Example situation:
```
BB0:
  %0 = ...
  use %0
  ; ...
  condjump BB1
  jmp BB2
BB1:
  %0 = ... ; rematerialized def from above (from earlier split step)
  jmp BB2
BB2:
  ; ...
  use %0
```
%0 will have a live interval with 3 value numbers (for the BB0, BB1 and
BB2 parts). Now SplitKit tries and succeeds in rematerializing the value
number in BB2 (this only works because it is a secondary split, so
SplitKit can trace it back to a single original def).
We need to recompute all live ranges affected by a value number that we
rematerialize. The case we missed before is that when the rematerialized
value is at a join (a phi VNI), we also have to recompute liveness for
the predecessor VNIs.
rdar://35699130
Differential Revision: https://reviews.llvm.org/D42667
llvm-svn: 324039
Summary:
This update allows users to specify `--blame-context` and `--blame-context-all` to print source file context for the source of the blame.
Also updates the inline printing to correctly identify the top of the inlining stack for blame information.
Patch by Mitch Phillips!
Reviewers: vlad.tsyrklevich
Subscribers: llvm-commits, kcc, pcc
Differential Revision: https://reviews.llvm.org/D40111
llvm-svn: 324035
Every instruction that has the word TEST in its name seems to have been buried into EmitTest. But that code is largely concerned with trying to reuse the flags from instructions that update flags in a pretty normal way.
PTEST/TESTP/KTEST do not update flags in a normal way. They only update Z and C, and the C flag update is non-standard. Rather than try to bend EmitTest's already complex logic to accommodate this, just move the call up to LowerSETCC and replicate the few pre-checks that are needed.
While there, add a FIXME for using the C flag to check for all ones, which we definitely couldn't do from EmitTest.
llvm-svn: 324029
This is the enhancement suggested in D42536 to fix a shortcoming in
regular InstCombine's canEvaluate* functionality.
When we have multiple uses of a value, but they're all in one instruction, we can
allow that expression to be narrowed or widened for the same cost as a single-use
value.
AFAICT, this can only matter for multiply: sub/and/or/xor/select would be simplified
away if the operands are the same value; add becomes shl; shifts with a variable shift
amount aren't handled.
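As a minimal IR sketch of the multiply case (the function and value names are hypothetical): %e below has two uses, but both are operands of the same mul, so the whole expression can be narrowed to i16 at single-use cost.
```
define i16 @square_narrow(i8 %x) {
  %e = zext i8 %x to i32  ; %e has two uses...
  %m = mul i32 %e, %e     ; ...but both are in this one instruction
  %t = trunc i32 %m to i16
  ret i16 %t
}
```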
Differential Revision: https://reviews.llvm.org/D42739
llvm-svn: 324014
This is a rather non-controversial change. We were missing these instructions
from the list of instructions that are lane-sensitive. These two put the result
into lane 0 (BE) or 3 (LE) regardless of the input. This patch fixes PR36068.
llvm-svn: 324005
(I suppose these two pieces could be separated, but they seemed related
enough.)
As discussed on llvm-dev, this documents the general expectation of how
library layering should be handled. There are a few existing cases where
these constraints are not met, but as with most style guide things, this
is forward-looking and provides guidance when cleaning up existing code;
it doesn't immediately require that all previous code be cleaned up to
match (see: naming conventions, etc.).
Differential Revision: https://reviews.llvm.org/D42771
llvm-svn: 324004
We were only checking the element count, but not the total width. This could cause illegal bitcasts to be created if, for example, the output was 512 bits but N1 was 256 bits and the extraction size was 128 bits.
Fixes PR36199
Differential Revision: https://reviews.llvm.org/D42809
llvm-svn: 324002
Until we support extending loads properly we're going to fall back for these.
We already handle stores in the same way, so this is just being consistent.
llvm-svn: 324001
Increment the field list member count for base classes and virtual base
classes.
Differential Revision: https://reviews.llvm.org/D41874
llvm-svn: 324000
This is a bit faster in theory; in practice it's cold code that's only
active in !NDEBUG, so it probably doesn't make a difference. This is one
of the last users of our homegrown Atomic.h.
llvm-svn: 323999
I added this comment with D42323, but as discussed in D42806, the architecture
does the right thing for denorms. We don't even need the select on 0.0 here?
llvm-svn: 323996
Summary:
D42698 adds child_edge_{begin|end} and children_edges to GraphTraits,
which are used here. The reason for this change is to make it easy to
use count propagation on ModuleSummaryIndex. As it stands,
CallGraphTraits is in Analysis while ModuleSummaryIndex is in IR.
Reviewers: davidxl, dberlin
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42703
llvm-svn: 323994
Summary:
This change extends MachineCopyPropagation to do COPY source forwarding
and adds an additional run of the pass to the default pass pipeline just
after register allocation.
This version of this patch uses the newly added
MachineOperand::isRenamable bit to avoid forwarding registers in such a
way as to violate constraints that aren't captured in the
Machine IR (e.g. ABI or ISA constraints).
This change is a continuation of the work started in D30751.
Reviewers: qcolombet, javed.absar, MatzeB, jonpa, tstellar
Subscribers: tpr, mgorny, mcrosier, nhaehnle, nemanjai, jyknight, hfinkel, arsenm, inouehrs, eraman, sdardis, guyblank, fedor.sergeev, aheejin, dschuff, jfb, myatsina, llvm-commits
Differential Revision: https://reviews.llvm.org/D41835
llvm-svn: 323991
Summary:
This change is mostly adding comments to GraphTraits describing
interfaces to iterate over the child edges of a node. These will
have to be implemented by specializations of GraphTraits. The
non-comment change is the addition of a children_edges template
function that returns an iterator range.
The motivation for this is to use it in the synthetic count propagation
algorithm and remove the CallGraphTraits class that provides similar
interfaces.
Reviewers: dberlin, davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42698
llvm-svn: 323990
This allows us to use PSHUFB for v8i16/v4i32 and VPERMD/PERMPS for v4i64/v4f64 variable shuffles.
Differential Revision: https://reviews.llvm.org/D42487
llvm-svn: 323987
Summary: Now that v2i1/v4i1 are legal without VLX, v32i1 is legalized by splitting rather than widening, and isVectorLoadExtDesirable returns false for vXi1, this handling appears to be dead because the operations simply don't exist.
Reviewers: RKSimon, zvi, guyblank, delena, spatel
Reviewed By: delena
Subscribers: llvm-commits, rengolin
Differential Revision: https://reviews.llvm.org/D42781
llvm-svn: 323983
Summary:
EmitTest sometimes creates X86ISD::AND specifically to hide the AND from DAG combine. But this prevents isel patterns that look for (cmp (and X, Y), 0) from being able to see it. So we end up with an AND and a TEST. The TEST gets removed by compare instruction optimization during the peephole pass.
This patch attempts to fix this by converting X86ISD::AND with no flag users back into ISD::AND during the DAG preprocessing just before isel.
In order to do this correctly, I had to make the X86ISD::AND node created by EmitTest in this case really have a flag output, which arguably it should have had anyway so that the number of operands would be consistent for the opcode in all cases. Then I had to modify ReplaceAllUsesWith to understand that we might be looking at an instruction with 2 outputs. In this case there are no uses to replace since we just created the node, but that's what the code did before, so I just made it keep working.
Reviewers: spatel, RKSimon, niravd, deadalnix
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42764
llvm-svn: 323982
As shown in the example in PR34994:
https://bugs.llvm.org/show_bug.cgi?id=34994
...we can return a very wrong answer (inf instead of 0.0) for square root when
using a reciprocal square root estimate instruction.
Here, I've conditionalized the filtering out of denorms based on the function
having "denormal-fp-math"="ieee" in its attributes. The other options for this
attribute are 'preserve-sign' and 'positive-zero'.
So we don't generate this extra code by default with just '-ffast-math' (because
then there's no denormal attribute string at all), but it works if you specify
'-ffast-math -fdenormal-fp-math=ieee' from clang.
As noted in the review, there may be other problems in clang that affect the
results depending on platform (Linux x86 at least), but this should allow
creating the desired codegen.
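For reference, a minimal sketch of a function carrying the gating attribute (the function body is just a placeholder; only the attribute string is taken from the description above):
```
define float @est_sqrt(float %x) #0 {
  %r = call fast float @llvm.sqrt.f32(float %x)
  ret float %r
}

declare float @llvm.sqrt.f32(float)

attributes #0 = { "denormal-fp-math"="ieee" }
```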
Differential Revision: https://reviews.llvm.org/D42323
llvm-svn: 323981
Summary:
In Instruction Selection, UpdateChains replaces all matched nodes'
chain references, including interior token factors, and deletes them.
This may leave nodes which depend on these interior nodes, but are not
part of the set of matched nodes, with a dangling dependence.
Avoid this by doing the replacement for matched non-TokenFactor nodes.
Fixes PR36164.
Reviewers: jonpa, RKSimon, bogner
Subscribers: llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D42754
llvm-svn: 323977
Commit r323512 introduced an optimisation in LowerReturn for half-precision
return values. A missing check caused a crash when the return value was "undef"
(i.e. a node that has no operands).
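A minimal sketch of the kind of IR involved (assumed shape; the actual reproducer is in the review):
```
define half @ret_undef() {
  ret half undef
}
```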
Differential Revision: https://reviews.llvm.org/D42743
llvm-svn: 323968
This patch includes EVA instructions in the Std2MicroMips mapping
tables, which is required for direct object emission.
Differential Revision: https://reviews.llvm.org/D41771
llvm-svn: 323958
This fixes bugzilla 33011
https://bugs.llvm.org/show_bug.cgi?id=33011
Defines bits {19-16} as zero or unpredictable as specified by the ARM ARM in
sections A8.8.116 and A8.8.117.
It also fixes the usage of the PC register as the destination register of the
MVN register-shifted register version, as specified in A8.8.117.
Differential Revision: https://reviews.llvm.org/D41905
llvm-svn: 323954
This, in instcombine, allows conversions to i8/i16/i32 (very
common cases) even if the resulting type is not legal according
to the data layout. This can often open up extra combine
opportunities.
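As a sketch (the datalayout string and function are hypothetical): even with a datalayout that only declares i32/i64 as native integer widths, the intermediate arithmetic can now be evaluated directly in i16.
```
target datalayout = "n32:64"

define i16 @narrow_add(i16 %x) {
  %w = zext i16 %x to i32
  %a = add i32 %w, 42
  %t = trunc i32 %a to i16
  ret i16 %t
}
```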
Differential Revision: https://reviews.llvm.org/D42424
llvm-svn: 323951
Summary:
Before emitting code for scaled registers, we prevent
SCEVExpander from hoisting any scaled addressing mode
by emitting all the bases first. However, these bases
are being forced to the final type, resulting in some
odd code.
For example, if the type of the base is an integer and
the final type is a pointer, we will emit an inttoptr
for the base, a ptrtoint for the scale, and then a
'reverse' GEP where the GEP pointer is actually the base
integer and the index is the pointer. It's more intuitive
to use the pointer as a pointer and the integer as index.
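A sketch of the "odd" pattern described above (names are hypothetical): the integer base becomes the GEP pointer while the pointer operand is converted to the index.
```
define i8* @odd_addressing(i64 %base, i8* %p) {
  %base.ptr = inttoptr i64 %base to i8*
  %idx = ptrtoint i8* %p to i64
  %addr = getelementptr i8, i8* %base.ptr, i64 %idx
  ret i8* %addr
}
```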
Patch by: Bevin Hansson
Reviewers: atrick, qcolombet, sanjoy
Reviewed By: qcolombet
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42103
llvm-svn: 323946
Summary:
This change expands the number of registers stashed by the entry and
`__xray_CustomEvent` trampolines.
We've found that, since the `__xray_CustomEvent` trampoline calls can show up in
situations where the scratch registers are being used, and since we don't
typically want to affect the code-gen around the disabled
`__xray_customevent(...)` intrinsic calls, we need to save and restore the
state of even the scratch registers in the handling of these custom events.
Reviewers: pcc, pelikan, dblaikie, eizan, kpw, echristo, chandlerc
Reviewed By: echristo
Subscribers: chandlerc, echristo, hiraditya, davide, dblaikie, llvm-commits
Differential Revision: https://reviews.llvm.org/D40894
llvm-svn: 323940
Fix the infinite loop reported in PR35809. It can occur with GCC-style
EH table assembly, where the compiler relies on the assembler to
calculate the offsets in the EH table.
Also see https://sourceware.org/bugzilla/show_bug.cgi?id=4029 for the
equivalent issue in the GNU assembler.
Patch by Ryan Prichard!
llvm-svn: 323934
For very, very large global initializers which can be statically evaluated, the
code would create vectors of temporary Constants, modifying them in place,
before committing the resulting Constant aggregate to the global's initializer
value. This had effectively O(n^2) complexity in the size of the global
initializer and would cause memory and non-termination issues compiling some
workloads.
This change performs the static initializer evaluation and creation in batches,
once for each global in the evaluated IR memory. The existing code is maintained
as a last resort when the initializers are more complex than simple values in a
large aggregate. This should theoretically be NFC; there is no test because the
example case is massive. The existing test cases pass with this, as well as the
llvm test suite.
To give an example, consider the following C++ code adapted from the clang
regression tests:
struct S {
  int n = 10;
  int m = 2 * n;
  S(int a) : n(a) {}
};

template<typename T>
struct U {
  T *r = &q;
  T q = 42;
  U *p = this;
};
U<S> e;
The global static constructor for 'e' will need to initialize 'r' and 'p' of
the outer struct, while also initializing the inner 'q' struct's 'n' and 'm'
members. This batch algorithm will simply use the general CommitValueTo()
method to handle the complex nested S struct initialization of 'q', before
processing the outermost members in a single batch. Using CommitValueTo() to
handle each member in the outer struct is inefficient when the struct/array is
very large, as we end up creating and destroying constant arrays for each
initialization.
For the above case, we expect the following IR to be generated:
%struct.U = type { %struct.S*, %struct.S, %struct.U* }
%struct.S = type { i32, i32 }
@e = global %struct.U { %struct.S* getelementptr inbounds (%struct.U, %struct.U* @e,
                                                           i64 0, i32 1),
                        %struct.S { i32 42, i32 84 }, %struct.U* @e }
The %struct.S { i32 42, i32 84 } inner initializer is treated as a complex
constant expression, while the other two elements of @e are "simple".
Differential Revision: https://reviews.llvm.org/D42612
llvm-svn: 323933