A->isPredecessor(B) is the same as B->isSuccessor(A), but it can
tolerate a B that is null or dangling. This shouldn't happen normally,
but it is useful for verification code.
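For illustration, a minimal standalone sketch of why the direction matters
(a simplified model with made-up names, not the in-tree class or API):

#include <vector>

// Illustrative node with predecessor/successor lists.
struct Node {
  std::vector<const Node *> Preds;
  std::vector<const Node *> Succs;

  // "Is N a successor of this node?"  Scans this node's own successor list,
  // so the caller must have a valid receiver to ask the question of.
  bool isSuccessor(const Node *N) const {
    for (const Node *S : Succs)
      if (S == N)
        return true;
    return false;
  }

  // "Is N a predecessor of this node?"  Equivalent to N->isSuccessor(this),
  // but it only compares addresses stored in this node's own Preds list,
  // never dereferencing N, so N may be null or even dangling.
  bool isPredecessor(const Node *N) const {
    for (const Node *P : Preds)
      if (P == N)
        return true;
    return false;
  }
};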
llvm-svn: 160968
Machine CSE and other optimizations can remove instructions, so folding
that was not possible at ISel becomes possible at the peephole stage.
rdar://10554090 and rdar://11873276
llvm-svn: 160919
It is possible for an instruction to both use and update EFLAGS.
When checking safety, we should check for a use of EFLAGS before
declaring that the optimization is safe on account of the update.
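For illustration, a hedged sketch of the ordering issue with a simplified
operand model (made-up types and helper, not the actual X86 code):

#include <vector>

// Each operand names a register and records whether the instruction reads
// and/or writes it; a single operand may be both a use and a def.
struct Operand {
  unsigned Reg;
  bool IsUse;
  bool IsDef;
};

constexpr unsigned EFLAGS = 1; // illustrative register number

// True if the instruction redefines EFLAGS without reading it, so any
// earlier flags value is dead after it.  The use is checked before the def
// is allowed to declare safety: an instruction that both uses and updates
// EFLAGS must not be treated as a plain clobber.
bool redefinesEFLAGSWithoutReading(const std::vector<Operand> &Ops) {
  bool Defines = false;
  for (const Operand &Op : Ops) {
    if (Op.Reg != EFLAGS)
      continue;
    if (Op.IsUse)
      return false;   // reads the old flags: not safe
    if (Op.IsDef)
      Defines = true; // a pure update alone would be fine
  }
  return Defines;
}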
llvm-svn: 160912
A value number is a PHI def if and only if it begins at a block
boundary. This can be derived from the def slot, so a separate flag is not
necessary.
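A small sketch of the derivation (simplified stand-ins for SlotIndex and
VNInfo; the enumerators and fields here are illustrative):

// Every position in a block gets a slot; the block boundary itself has a
// distinguished slot kind.
enum class SlotKind { Block, EarlyClobber, Register, Dead };

struct Slot {
  unsigned Index;
  SlotKind Kind;
  bool isBlock() const { return Kind == SlotKind::Block; }
};

// A value number records where it is defined.  No separate IsPHIDef flag
// is needed: the value is a PHI def exactly when its def slot is a block
// boundary.
struct ValueNumber {
  Slot Def;
  bool isPHIDef() const { return Def.isBlock(); }
};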
llvm-svn: 160893
This option replaces the existing live interval computation with one
based on LiveRangeCalc.cpp. The new algorithm does not depend on
LiveVariables, and it can be run at any time, before or after leaving
SSA form.
llvm-svn: 160892
This can happen as long as the instruction is not reachable. InstCombine
does generate these unreachable, malformed selects when doing RAUW.
llvm-svn: 160874
The rationale here is that loops containing vector erases are hard to write
correctly, and the bug only shows up if the vector contains non-trivial
objects, leading to crashes when they are formed out of garbage memory.
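For reference, a hedged sketch of the kind of loop this is about (generic
std::vector example, not the code touched by this patch):

#include <string>
#include <vector>

// erase() shifts the remaining elements and invalidates iterators.  With a
// trivial element type a stale read often goes unnoticed, but non-trivial
// objects end up being formed out of garbage memory and crash.  The safe
// pattern is to let erase() hand back the next valid iterator:
void removeEmpty(std::vector<std::string> &V) {
  for (auto I = V.begin(); I != V.end();) {
    if (I->empty())
      I = V.erase(I); // returns the iterator following the erased element
    else
      ++I;
  }
}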
llvm-svn: 160854
These tables were indexed by [register][subreg index], which made them
very large and sparse.
Replace them with lists of sub-register indexes that match the existing
lists of sub-registers. MCRI::getSubReg() becomes a very short linear
search, like getSubRegIndex() already was.
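For illustration, the shape of the new lookup (the actual table encoding in
MCRegisterInfo differs; names here are illustrative):

#include <cstdint>

// For each register, two parallel lists give its sub-register indexes and
// the matching sub-registers, terminated by a 0 index.
struct SubRegLists {
  const uint16_t *Indexes; // terminated by 0
  const uint16_t *SubRegs; // parallel to Indexes
};

// getSubReg(Reg, Idx) becomes a short linear search over Reg's list instead
// of a lookup in a huge [register][subreg index] table.
uint16_t getSubReg(const SubRegLists &L, uint16_t Idx) {
  for (unsigned I = 0; L.Indexes[I] != 0; ++I)
    if (L.Indexes[I] == Idx)
      return L.SubRegs[I];
  return 0; // no such sub-register
}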
llvm-svn: 160843
Now that the weird X86 sub_ss and sub_sd sub-register indexes are gone,
there is no longer a need for the CompositeIndices construct in .td
files. Sub-register index composition can be specified on the
SubRegIndex itself using the ComposedOf field.
Also enforce unique names for sub-registers in TableGen. The same
sub-register cannot be available with multiple sub-register indexes.
llvm-svn: 160842
The (COPY_TO_REGCLASS GR32:$src, VR128) pattern looks odd, but
copyPhysReg does the right thing with it. (The old pattern would
eventually produce the same cross-class copy).
llvm-svn: 160830
The SUBREG_TO_REG instruction has magic semantics asserting that the
source value was defined by an instruction that cleared the high half of
the register. Those semantics are never actually exploited for xmm
registers.
llvm-svn: 160818
These idempotent sub-register indices don't do anything: they simply
map XMM registers to themselves. They no longer affect register classes
either since the SubRegClasses field has been removed from Target.td.
This patch replaces XMM->XMM EXTRACT_SUBREG and INSERT_SUBREG patterns
with COPY_TO_REGCLASS patterns which simply become COPY instructions.
The number of IMPLICIT_DEF instructions before register allocation is
reduced, and that is the cause of the test case changes.
llvm-svn: 160816
This is still a work in progress.
Out-of-order CPUs usually execute instructions from multiple basic
blocks simultaneously, so it is necessary to look at longer traces when
estimating the performance effects of code transformations.
The MachineTraceMetrics analysis will pick a typical trace through a
given basic block and provide performance metrics for the trace. Metrics
will include:
- Instruction count through the trace.
- Issue count per functional unit.
- Critical path length, and per-instruction 'slack'.
These metrics can be used to determine the performance limiting factor
when executing the trace, and how it will be affected by a code
transformation.
Initially, this will be used by the early if-conversion pass.
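As a rough illustration of how the critical path and per-instruction slack
relate, here is a toy model (not the actual MachineTraceMetrics interface):

#include <algorithm>
#include <vector>

// Per-instruction cycle counts within a trace.
struct InstrCycles {
  unsigned Depth;  // longest dependency chain from the top of the trace
  unsigned Height; // longest dependency chain to the bottom of the trace
};

// The critical path is the largest Depth + Height over the trace.
unsigned criticalPath(const std::vector<InstrCycles> &Trace) {
  unsigned CP = 0;
  for (const InstrCycles &IC : Trace)
    CP = std::max(CP, IC.Depth + IC.Height);
  return CP;
}

// Slack: how many cycles an instruction can be delayed without lengthening
// the critical path.
unsigned slack(const InstrCycles &IC, unsigned CP) {
  return CP - (IC.Depth + IC.Height);
}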
llvm-svn: 160796
hopefully make it more visible. Adjust the web-docs to have a link to
this file rather than the list itself. I described code owners as also
being gatekeepers for their part of the code, which I think is true but
isn't in the code owner explanation on the web page.
llvm-svn: 160776
It is redundant; RegisterCoalescer will do the remat if it can't eliminate
the copy. Collected instruction counts before and after this. A few extra
instructions are generated due to spilling, but it is normal to see these kinds
of changes with almost any small codegen change, according to Jakob.
This also fixed rdar://11830760 where xor is expected instead of movi0.
llvm-svn: 160749
When a live range splits into multiple connected components, we would
arbitrarily assign <undef> uses to component 0. This is wrong when the
use is tied to a def that gets assigned to a different component:
%vreg69<def> = ADD8ri %vreg68<undef>, 1
The use and def must get the same virtual register.
Fix this by assigning <undef> uses to the same component as the value
defined by the instruction, if any:
%vreg69<def> = ADD8ri %vreg69<undef>, 1
This fixes PR13402. The PR has a test case which I am not including
because it is unlikely to keep exposing this behavior in the future.
llvm-svn: 160739
If a load started in the middle of an array element (rather than at the
beginning of the element) and extended
into the next element, then the load from the second element was being handled
wrong due to incorrect updating of the notion of which byte to load next. This
fixes PR13442. Thanks to Chris Smowton for reporting the problem, analyzing it
and providing a fix.
llvm-svn: 160711
The long branch pass (fixed in r160601) no longer uses the global base register
to compute addresses of branch destinations, so it is not necessary to reserve
a slot on the stack.
llvm-svn: 160703
struct s {
  double x1;
  float x2;
};
__attribute__((regparm(3))) struct s f(int a, int b, int c);
void g(void) {
  f(41, 42, 43);
}
We need to be able to represent passing the address of s to f (sret) in a
register (inreg). Turns out that all that is needed is to not mark them as
mutually incompatible.
llvm-svn: 160695
if Condition Is Met instructions that was not correctly determining the target
instruction.
So for a jne rel32 instruction:
% cat x.s
.byte 0x0f, 0x85, 0x09, 0x00, 0x00, 0x00
% as x.s
it was incorrectly determining the target:
% otool -q -tv a.out
a.out:
(__TEXT,__text) section
0000000000000000 jne 0xd
and with the fix it gets the correct target:
% otool -q -tv a.out
a.out:
(__TEXT,__text) section
0000000000000000 jne 0xf
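For reference, the target of a rel32 branch is measured from the end of the
instruction.  A hedged sketch of the computation (illustrative helper, not
the disassembler's code):

#include <cstdint>

// Target = address of the branch + its full length + the displacement.
// For the jne above: 0x0 + 6 (0f 85 plus a 4-byte displacement) + 9 = 0xf.
uint64_t branchTarget(uint64_t InstrAddr, unsigned InstrLen, int32_t Disp) {
  return InstrAddr + InstrLen + static_cast<int64_t>(Disp);
}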
rdar://11505997
llvm-svn: 160694
are targeting an ELF platform. Only fold gs-relative (and fs-relative) loads
if it is actually sensible to do so for the target platform.
This fixes PR13438.
llvm-svn: 160687
might be deliberate "one time" leaks, so that leak checkers can find them.
This is a reapply of r160602 with the fix that this time I'm committing the
code I thought I was committing last time; the I->eraseFromParent() goes
*after* the break out of the loop.
llvm-svn: 160664
r160529 that was subsequently reverted. The fix was to not call
GV->eraseFromParent() right before the caller does the same. The existing
testcases already caught this bug if run under valgrind.
llvm-svn: 160602
This pass no longer requires that the global pointer value be saved to the
stack or a register, since it uses the bal instruction to compute the branch distance.
llvm-svn: 160601
LiveRangeEdit::foldAsLoad() can eliminate a register by folding a load
into its only use. Only do that when the load is safe to move, and it
won't extend any live ranges.
This fixes PR13414.
llvm-svn: 160575
CI's name, and then used the StringRef pointing at its old name. I'm
fixing it by storing the name in a std::string and hoisting the
renaming logic so that it always happens. This is nicer anyway, as it allows
the upgraded IR to have the same names as the input IR in more cases.
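The hazard, sketched with std::string_view standing in for StringRef (a
standalone illustration with made-up names, not the auto-upgrade code):

#include <string>
#include <string_view>

void renameHazard() {
  std::string Name = "llvm.some.old.intrinsic.name";

  // Buggy pattern: keep a non-owning view, rename, then use the view.
  std::string_view OldRef = Name;           // points into Name's buffer
  Name = "llvm.some.new.intrinsic.name.v2"; // may reallocate or overwrite it
  // Reading OldRef here would touch stale (possibly freed) memory.

  // Fixed pattern: copy into an owning std::string before renaming.
  std::string OldName(Name);
  Name = "llvm.renamed.once.more";
  // OldName is still valid here.
}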
Another bug found by AddressSanitizer. Woot.
llvm-svn: 160572
PHIElimination splits critical edges when it predicts it can resolve
interference and eliminate copies. It doesn't split the edge if the
interference wouldn't be resolved because the phi-use register is
live in the critical edge anyway.
Teach PHIElimination to split loop exiting edges with interference, even
if it wouldn't resolve the interference. This moves the necessary
copies out of the loop, which is still an improvement over injecting the
copies into the loop.
The test case demonstrates the improvement. Before:
LBB0_1:
  cmpb $0, (%rdx)
  leaq 1(%rdx), %rdx
  movl %esi, %eax
  je LBB0_1
After:
LBB0_1:
  cmpb $0, (%rdx)
  leaq 1(%rdx), %rdx
  je LBB0_1
  movl %esi, %eax
llvm-svn: 160571
GetBestDestForJumpOnUndef() assumes there is at least 1 successor, which isn't
true if the block ends in an indirect branch with no successors. Fix this by
bailing out earlier in this case.
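A standalone sketch of the bail-out (made-up types; a stand-in for the real
GetBestDestForJumpOnUndef() caller):

#include <vector>

struct Block {
  std::vector<Block *> Succs;
};

// Assumes at least one successor, like GetBestDestForJumpOnUndef().
Block *getBestDestForJumpOnUndef(Block &BB) {
  return BB.Succs.front(); // would misbehave on an empty successor list
}

bool foldBranchOnUndef(Block &BB) {
  // Bail out first: a block ending in an indirect branch can have no
  // successors at all, so there is nothing to pick.
  if (BB.Succs.empty())
    return false;
  Block *Dest = getBestDestForJumpOnUndef(BB);
  (void)Dest; // ... rewrite the terminator to jump to Dest ...
  return true;
}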
llvm-svn: 160546
This fixes a bunch of make check failures of the form:
Unknown Architecture Version.
UNREACHABLE executed at ../lib/Target/Hexagon/HexagonSubtarget.cpp:60!
llvm-svn: 160518
It is optimal at least up to 7 bits (I've tested all such cases).
This change to truncate() allows a little simplification of the multiplication code,
and it also makes multiplication optimal :)
llvm-svn: 160512