fixes: Use a separate register, instead of SP, as the
calling-convention resource, to avoid spurious conflicts with
actual uses of SP. Also, fix unscheduling of calling sequences,
which can be triggered by pseudo-two-address dependencies.
llvm-svn: 143206
Don't assume APInt::getRawData() follows either the target's endianness or the host's endianness: getRawData()[0] holds the least significant i64, even on a big-endian host.
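For example (a minimal sketch using only the documented APInt interface):

  #include "llvm/ADT/APInt.h"
  #include <cassert>
  void check() {
    llvm::APInt V(128, 0x1122334455667788ULL); // value fits in the low word
    const uint64_t *Raw = V.getRawData();
    assert(Raw[0] == 0x1122334455667788ULL && "word 0 is the least significant i64");
    assert(Raw[1] == 0 && "high word is zero, even on a big-endian host");
  }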
FIXME: Add a testcase for big endian target.
FIXME: Does the same apply to CompileUnit::addConstantFPValue()?
llvm-svn: 143194
it fixes the dragonegg self-host (it looks like gcc is miscompiled).
Original commit messages:
Eliminate LegalizeOps' LegalizedNodes map and have it just call RAUW
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.
Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.
Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.
Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.
This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.
Delete #if 0 code accidentally left in.
llvm-svn: 143188
Eliminate LegalizeOps' LegalizedNodes map and have it just call RAUW
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.
Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.
Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.
Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.
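Conceptually, each calling sequence now looks to the scheduler something
like this (a sketch; the resource register name is made up):

  CALLSEQ_BEGIN ...   ; implicit-def %CALLRES  <- acts like a physreg def
    ... the call itself ...
  CALLSEQ_END ...     ; implicit use+kill %CALLRES

A second CALLSEQ_BEGIN would clobber %CALLRES while it is live, so the
scheduler can no longer interleave two calling sequences.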
This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.
llvm-svn: 143177
trying to legalize the operand types when only the result type
is required to be legalized - the type legalization machinery
will get round to the operands later if they need legalizing.
There can be a point to legalizing operands in parallel with
the result: when this saves compile time or results in better
code. There was only one case in which this was true: when
the operand is also split, so keep the logic for that bit.
As a result of this change, additional operand legalization
methods may need to be introduced to handle nodes where the
result and operand types can differ, like SIGN_EXTEND, but
the testsuite doesn't contain any tests where this is the case.
In any case, it seems better to require such methods (and die
with an assert if they don't exist) than to quietly produce
wrong code if we forgot to special case the node in
SplitVecRes_UnaryOp.
llvm-svn: 143026
This code makes different decisions when compiled into x87 instructions
because of different rounding behavior. That caused phase 2/3
miscompares on 32-bit Linux when the phase 1 compiler was built with gcc
(using x87), and the phase 2 compiler was built with clang (using SSE).
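For illustration, a hypothetical fragment that rounds differently under
the two instruction sets (assuming intermediates stay in x87 registers
when compiled for x87):

  #include <cstdio>
  int main() {
    volatile double a = 1e16, b = 1.0; // volatile defeats constant folding
    // 1e16 + 1 is exact in x87's 80-bit registers, so sum is 1.0 there;
    // 64-bit SSE arithmetic rounds the 1.0 away, giving 0.0.
    double sum = a + b - a;
    std::printf("%g\n", sum);
  }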
This fixes PR11200.
llvm-svn: 143006
An MBB which branches to an EH landing pad shouldn't be considered for tail merging.
In SjLj EH, the jump to the landing pad is not done explicitly through a branch
statement. The EH landing pad is added as a successor to the throwing
BB. Because of that however, the branch folding pass could mistakenly think that
it could merge the throwing BB with another BB. This isn't safe to do.
<rdar://problem/10334833>
llvm-svn: 143001
down to this commit. Original commit message:
An MBB which branches to an EH landing pad shouldn't be considered for tail merging.
In SjLj EH, the jump to the landing pad is not done explicitly through a branch
statement. The EH landing pad is added as a successor to the throwing
BB. Because of that however, the branch folding pass could mistakenly think that
it could merge the throwing BB with another BB. This isn't safe to do.
<rdar://problem/10334833>
llvm-svn: 142920
An MBB which branches to an EH landing pad shouldn't be considered for tail merging.
In SjLj EH, the jump to the landing pad is not done explicitly through a branch
statement. The EH landing pad is added as a successor to the throwing
BB. Because of that however, the branch folding pass could mistakenly think that
it could merge the throwing BB with another BB. This isn't safe to do.
<rdar://problem/10334833>
llvm-svn: 142891
discussions with Andy. Fundamentally, the previous algorithm is
counterproductive on several fronts and prioritizes something which isn't
necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to lay out blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still prioritizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is, an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
Future transformations that need to be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
The assumption in the back-end is that PHIs are not allowed at the start of the
landing pad block for SjLj exceptions.
<rdar://problem/10313708>
llvm-svn: 142689
ZExtPromotedInteger and SExtPromotedInteger based on the operation we legalize.
SetCC return type needs to be legalized via PromoteTargetBoolean.
llvm-svn: 142660
it's a bit more plausible to use this instead of CodePlacementOpt. The
code for this was shamelessly stolen from CodePlacementOpt, and then
trimmed down a bit. There doesn't seem to be much utility in returning
true/false from this pass as we may or may not have rewritten all of the
blocks. Also, the statistic of counting how many loops were aligned
doesn't seem terribly important so I removed it. If folks would like it
to be included, I'm happy to add it back.
This was probably the most egregious of the missing features, and now
I'm going to start gathering some performance numbers and looking at
specific loop structures that have different layout between the two.
Test is updated to include both basic loop alignment and nested loop
alignment.
llvm-svn: 142645
block frequency analyses. This differs substantially from the existing
block-placement pass in LLVM:
1) It operates on the Machine-IR in the CodeGen layer. This exposes much
more (and more precise) information and opportunities. Also, the
results are more stable due to fewer transforms occurring after the
pass runs.
2) It uses the generalized probability and frequency analyses. These can
model static heuristics, code annotation derived heuristics as well
as eventual profile loading. By basing the optimization on the
analysis interface, it can work from any (or a combination) of these
inputs; see the sketch after this list.
3) It uses a more aggressive algorithm, both building chains from the
bottom up to maximize benefit, and using an SCC-based walk to lay out
chains of blocks in a profitable ordering without O(N^2) iterations
which the old pass involves.
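A sketch of the analysis queries the pass builds on (the analysis
pointers and blocks are assumed to be in scope):

  // Relative likelihood of the From -> To edge, derived from static
  // heuristics, annotations, or (eventually) profile data.
  BranchProbability Prob = MBPI->getEdgeProbability(From, To);
  // Expected relative execution frequency of a block.
  BlockFrequency Freq = MBFI->getBlockFreq(MBB);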
The pass is currently gated behind a flag, and not enabled by default
because it still needs to grow some important features. Most notably, it
needs to support loop aligning and careful layout of loop structures
much as done by hand currently in CodePlacementOpt. Once it supports
these, and has sufficient testing and quality tuning, it should replace
both of these passes.
Thanks to Nick Lewycky and Richard Smith for help authoring & debugging
this, and to Jakob, Andy, Eric, Jim, and probably a few others I'm
forgetting for reviewing and answering all my questions. Writing
a backend pass is *sooo* much better now than it used to be. =D
llvm-svn: 142641
When checking the availability of instructions using the TLI, a 'promoted'
instruction IS available. It means that the value is bitcasted to another type
for which there is an operation. The correct check for the availability of an
instruction is to check if it should be expanded.
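A sketch of the distinction (Opc and VT assumed in scope):

  // Promote and Custom still mean the operation is available in some form;
  // only Expand means it is truly unavailable.
  bool Unavailable =
      TLI.getOperationAction(Opc, VT) == TargetLowering::Expand;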
llvm-svn: 142542
svn r139159 caused SelectionDAG::getConstant() to promote BUILD_VECTOR operands
with illegal types, even before type legalization. For this testcase, that led
to one BUILD_VECTOR with i16 operands and another with promoted i32 operands,
which triggered the assertion.
llvm-svn: 142370
.file filenumber "directory" "filename"
This removes one join+split of the directory+filename in MC internals. Because
bitcode files have independent fields for directory and filenames in debug info,
this patch may change the .o files written by existing .bc files.
llvm-svn: 142300
Use the custom inserter for the ARM setjmp intrinsics instead of creating the
SjLj dispatch table in IR, where it frequently violates several assumptions --
in particular, assumptions made by the landingpad instruction about what can
branch to a landing pad and what cannot. Performing this in the back-end allows
us to violate these assumptions without the IR getting angry at us.
It also allows us to perform a small optimization. We can shove the address of
the dispatch's basic block into the function context and not have to add code
around the setjmp to check for the return value and jump to the dispatch.
Neat, huh?
<rdar://problem/10116753>
llvm-svn: 142294
Some code wants to check that *any* call within a function has the 'returns
twice' attribute, not just that the current function has one.
llvm-svn: 142221
This isn't put into the 'clear()' method because the information needs to stick
around (at least for a little bit) after the selection DAG is built.
llvm-svn: 142032
When spilling around an instruction with a dead def, remember to add a
value number for the def.
The missing value number wouldn't normally create problems since there
would be an incoming live range as well. However, due to another bug
we could spill a dead V_SET0 instruction which doesn't read any values.
The missing value number caused an empty live range to be created which
is dangerous since it doesn't interfere with anything.
This fixes part of PR11125.
llvm-svn: 141923
Now that MI->getRegClassConstraint() can also handle inline assembly,
don't bail when recomputing the register class of a virtual register
used by inline asm.
This fixes PR11078.
llvm-svn: 141836
Most instructions have some requirements for their register operands.
Usually, this is expressed as register class constraints in the
MCInstrDesc, but for inline assembly the constraints are encoded in the
flag words.
llvm-svn: 141835
The inline asm operand constraint is initially encoded in the virtual
register for the operand, but that register class may change during
coalescing, and the original constraint is lost.
Encode the original register class as part of the flag word for each
inline asm operand. This makes it possible to recover the actual
constraint required by inline asm, just like we can for normal
instructions.
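A sketch of encoding and recovering the class (RC and TRI assumed in
scope; only the flag-word handling is shown):

  // Encode the constraint's register class into the operand's flag word.
  unsigned Flag = InlineAsm::getFlagWord(InlineAsm::Kind_RegUse, /*NumOps=*/1);
  Flag = InlineAsm::getFlagWordForRegClass(Flag, RC->getID());
  // Later, recover the original constraint even if coalescing has since
  // changed the virtual register's class.
  unsigned RCID;
  if (InlineAsm::hasRegClassConstraint(Flag, RCID)) {
    const TargetRegisterClass *OrigRC = TRI->getRegClass(RCID);
    (void)OrigRC;
  }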
llvm-svn: 141833
our current machine instruction defines a register with the same register class
as what's being replaced. This showed up in the SPEC 403.gcc benchmark, where it
would ICE because a tail call was expecting one register class but was given
another. (The machine instruction verifier catches this situation.)
<rdar://problem/10270968>
llvm-svn: 141830
rather than the previous index. If a block has a single instruction, the
previous index may be in a different basic block.
I have no clue how this used to work on all of test-suite, because now this
failure is seen quite often when trying to compile code with -strong-phi-elim.
This fixes PR10252.
llvm-svn: 141812
containing loop's header to see if that's a landing pad. If it is, then we don't
want to hoist instructions out of the loop and above the header.
llvm-svn: 141767
1. The speculation check may not have been performed if the BB hasn't had a load
LICM candidate.
2. If the candidate would be CSE'ed, then go ahead and speculatively LICM the
instruction even if it's in a high register pressure situation.
llvm-svn: 141747
file. Since it should only be used when necessary, propagate it through
the backend code generation and tweak testcases accordingly.
This helps with code like in clang's test/CodeGen/debug-info-line.c where
we have multiple #line directives within a single lexical block and want
to generate only a single block that contains each file change.
Part of rdar://10246360
llvm-svn: 141729
The blocks with invokes have branches to the dispatch block, because that more
correctly models the behavior of the CFG. The dispatch of course has edges to
the landing pads. Those landing pads could contain invokes, which then have
branches back to the dispatch. This creates a loop. The machine LICM pass looks
at this loop and thinks it can hoist elements out of it. But because the
dispatch is an alternate entry point into the program, the hoisted instructions
won't be executed.
I wasn't able to get a testcase which was small and could reproduce reliably.
The function_try_block.cpp test in llvm-test was where this showed up.
llvm-svn: 141726
Allow targets to expand COPY and other standard pseudo-instructions
before they are expanded with copyPhysReg().
This allows the target to examine the COPY instruction for extra
operands indicating it can be widened to a preferable super-register
copy. See the ARM -widen-vmovs option.
llvm-svn: 141578
PhysReg operands are not allowed to have sub-register indices at all.
For virtual registers with sub-reg indices, check that all registers in
the register class support the sub-reg index.
llvm-svn: 141220
EXTRACT_SUBREG is emitted as %dst = COPY %src:sub, so there is no need to
constrain the %dst register class. RegisterCoalescer will apply the
necessary constraints if it decides to eliminate the COPY.
The %src register class does need to be constrained to something with
the right sub-registers, though. This is currently done manually with
COPY_TO_REGCLASS nodes. They can possibly be removed after this patch.
llvm-svn: 141207
The register class created by INSERT_SUBREG and SUBREG_TO_REG must be
legal and support the SubIdx sub-registers.
The new getSubClassWithSubReg() hook can compute that.
This may create INSERT_SUBREG instructions defining a larger register
class than the sub-register being inserted. That is OK,
RegisterCoalescer will constrain the register class as needed when it
eliminates the INSERT_SUBREG instructions.
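A sketch of the constraint computation (DstReg, SubIdx, MRI, and TRI
assumed in scope):

  // Find the largest subclass whose registers all have a SubIdx
  // sub-register; getSubClassWithSubReg() returns null if none exists.
  const TargetRegisterClass *RC = MRI.getRegClass(DstReg);
  if (const TargetRegisterClass *SubRC = TRI->getSubClassWithSubReg(RC, SubIdx))
    MRI.setRegClass(DstReg, SubRC);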
llvm-svn: 141198
TwoAddressInstructionPass should annotate instructions with <undef>
flags when it lowers REG_SEQUENCE instructions. LiveIntervals should not
be in the business of modifying code (except for kill flags, perhaps).
llvm-svn: 141187
For example:
%vreg10:dsub_0<def,undef> = COPY %vreg1
%vreg10:dsub_1<def> = COPY %vreg2
is rewritten as:
%D2<def> = COPY %D0, %Q1<imp-def>
%D3<def> = COPY %D1, %Q1<imp-use,kill>, %Q1<imp-def>
The first COPY doesn't care about the previous value of %Q1, so it
doesn't read that register.
The second COPY is a partial redefinition of %Q1, so it implicitly kills
and redefines that register.
This makes it possible to recognize instructions that can harmlessly
clobber the full super-register: they write the super-register but don't
read it.
llvm-svn: 141139
RegisterCoalescer can create sub-register defs when it is joining a
register with a sub-register. Add <undef> flags to these new
sub-register defs where appropriate.
llvm-svn: 141138
The <undef> flag says that a MachineOperand doesn't read its register,
or doesn't depend on the previous value of its register.
A full register def never depends on the previous register value. A
partial register def may depend on the previous value if it is intended
to update part of a register.
For example:
%vreg10:dsub_0<def,undef> = COPY %vreg1
%vreg10:dsub_1<def> = COPY %vreg2
The first copy instruction defines the full %vreg10 register with the
bits not covered by dsub_0 defined as <undef>. It is not considered a
read of %vreg10.
The second copy modifies part of %vreg10 while preserving the rest. It
has an implicit read of %vreg10.
This patch adds a MachineOperand::readsReg() method to determine if an
operand reads its register.
Previously, this was modelled by adding a full-register <imp-def>
operand to the instruction. This approach makes it possible to
determine directly from a MachineOperand if it reads its register. No
scanning of MI operands is required.
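A sketch of the new query (MI and Reg assumed in scope):

  bool Reads = false;
  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
    const MachineOperand &MO = MI->getOperand(i);
    // A full <def,undef> operand answers false; a partial def without
    // <undef> answers true, since it preserves part of the old value.
    if (MO.isReg() && MO.getReg() == Reg && MO.readsReg())
      Reads = true;
  }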
llvm-svn: 141124
and the alignment is 0 (i.e., it's defined globally in one file and declared in
another file) it could get an alignment which is larger than the ABI allows for
that type, resulting in aligned moves being used for unaligned loads.
For instance, in file A.c:
  struct S s;
In file B.c:
  struct S {
    // something long
  };
  extern struct S s;
  void foo() {
    struct S p = s;
    // ...
  }
this copy is a 'memcpy' which is turned into a series of 'movaps' instructions
on X86. But this is wrong, because 'struct S' has alignment of 4, not 16.
llvm-svn: 140902
This helps with porting code from 2.9 to 3.0 as TargetSelect.h changed location,
and if you include the old one by accident you will trigger this assert.
llvm-svn: 140848
The function needs to scan the implicit operands anyway, so nothing is
gained by caching the number of implicit operands added to an
instruction.
This also fixes a bug when adding operands after an implicit operand has
been added manually. The NumImplicitOps count wasn't kept up to date.
MachineInstr::addOperand() will now consistently place all explicit
operands before all the implicit operands, regardless of the order they
are added. It is possible to change an MI opcode and add additional
explicit operands. They will be inserted before any existing implicit
operands.
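For example (a sketch; MI and PhysReg assumed in scope):

  MI->addOperand(MachineOperand::CreateReg(PhysReg, /*isDef=*/true,
                                           /*isImp=*/true)); // implicit def
  MI->addOperand(MachineOperand::CreateImm(42));             // explicit operand
  // The immediate is nevertheless placed before the implicit def.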
The only exception is inline asm instructions where operands are never
reordered. This is because of a hack that marks explicit clobber regs
on inline asm as <implicit-def> to please the fast register allocator.
This hack can go away when InstrEmitter and FastISel can add exact
<dead> flags to physreg defs.
llvm-svn: 140744
Upon further review, most of the EH code should remain written at the IR
level. The part which breaks SSA form is the dispatch table, so that part will
be moved to the back-end.
llvm-svn: 140730
This intrinsic is used to pass the index of the function context to the back-end
for further processing. The back-end is in charge of filling in the rest of the
entries.
llvm-svn: 140676
The DWARF exception pass uses the call site information, which is set up here. A
pre-RA pass is too late for it to use this information. So create and setup the
function context here, and then insert the call site values here (and map the
call sites for the DWARF EH pass). This is simpler than the original pass, and
doesn't make the CFG lose its SSA-ness.
It's a win-win-win-win-lose-win-win situation.
llvm-svn: 140675
current IR-level pass.
The old SjLj EH pass has some problems, especially with the new EH model. Most
significantly, it violates some of the new restrictions the new model has. For
instance, the 'dispatch' table wants to jump to the landing pad, but we cannot
allow that because only an invoke's unwind edge can jump to a landing pad. This
requires us to mangle the code something awful. In addition, we need to keep the
now dead landingpad instructions around instead of CSE'ing them because the
DWARF emitter uses that information (they are dead because no control flow edge
will execute them - the control flow edge from an invoke's unwind is superseded
by the edge coming from the dispatch).
Basically, this pass belongs not at the IR level where SSA is king, but at the
code-gen level, where we have more flexibility.
llvm-svn: 140646
Many targets use pseudo instructions to help register allocation. Like
the COPY instruction, these pseudos can be expanded after register
allocation. The early expansion can make life easier for PEI and the
post-ra scheduler.
This patch adds a hook that is called for all remaining pseudo
instructions from the ExpandPostRAPseudos pass.
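A sketch of a target override (the target and opcode names are made up,
and the signature follows the current TargetInstrInfo interface):

  bool MyTargetInstrInfo::expandPostRAPseudo(MachineInstr &MI) const {
    if (MI.getOpcode() != MyTarget::WIDE_COPY_PSEUDO)
      return false;                     // not ours; leave it alone
    // Replace the pseudo with the real instruction, then delete it.
    BuildMI(*MI.getParent(), MI, MI.getDebugLoc(), get(MyTarget::WIDE_COPY))
        .add(MI.getOperand(0))
        .add(MI.getOperand(1));
    MI.eraseFromParent();
    return true;                        // it was expanded
  }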
llvm-svn: 140472