specifies whether there is an output flag or not. Use this
instead of redundantly encoding the chain/flag results in the
output vtlist.
llvm-svn: 97419
even some that the old isel didn't. There are several parts of
this that make me feel dirty, but it's no worse than the
old isel. I'll clean up the parts I can do without ripping
out the old one next.
llvm-svn: 97415
It gets its own implementation totally divorced from the (presumably
performance-sensitive) routines which parse into a uint64_t.
Add APInt::operator|=(uint64_t), which is situationally much better than
using a full APInt.
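A minimal usage sketch of the new operator (values are purely illustrative,
not taken from the patch):
#include "llvm/ADT/APInt.h"
using llvm::APInt;
void orInChunk() {
  APInt Val(128, 0);   // a 128-bit value, initially zero
  Val <<= 7;           // make room for the next chunk of bits
  Val |= 0x7fULL;      // OR in a uint64_t directly, no temporary APInt needed
}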
llvm-svn: 97381
payloads. APFloat's internal folding routines always make QNaNs now,
instead of sometimes making QNaNs and sometimes SNaNs depending on the
type.
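A hedged sketch of the observable behavior through the public APFloat API
(an illustration, not code from the patch itself):
#include "llvm/ADT/APFloat.h"
using llvm::APFloat;
void foldWithNaN() {
  APFloat A = APFloat::getSNaN(APFloat::IEEEsingle);
  A.add(APFloat(1.0f), APFloat::rmNearestTiesToEven);
  // A is now a quiet NaN; previously some folds could leave a signaling NaN here.
}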
llvm-svn: 97364
and restore the entire matcher stack by value. This is because children
we're testing could do moveparent or other things besides just
scribbling additions onto the stack.
llvm-svn: 97212
instead of to have a chained series of scope nodes. This makes
the generated table smaller, improves the efficiency of the
interpreter, and makes the factoring optimization much more
reasonable to implement.
llvm-svn: 97160
terms of store and load, which means bitcasting between scalar
integer and vector has endian-specific results, which undermines
this whole approach.
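The endian dependence is the same one visible with an ordinary store/load
reinterpretation; a generic C++ illustration (not code from the tree):
#include <cstdint>
#include <cstring>
// Reinterpret a 32-bit scalar as four byte-sized "lanes" through memory,
// the way a store+load lowering of bitcast would.
void splitScalar(uint8_t Lanes[4]) {
  uint32_t Scalar = 0x01020304;
  std::memcpy(Lanes, &Scalar, sizeof(Scalar));
  // little-endian targets: Lanes == {0x04, 0x03, 0x02, 0x01}
  // big-endian targets:    Lanes == {0x01, 0x02, 0x03, 0x04}
}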
llvm-svn: 97137
the number of value bits, not the number of bits of allocation for in-memory
storage.
Make getTypeStoreSize and getTypeAllocSize work consistently for arrays and
vectors.
Fix several places in CodeGen which compute offsets into in-memory vectors
to use TargetData information.
This fixes PR1784.
llvm-svn: 97064
the new isel: fold movechild+record+moveparent into a
single recordchild N node. This shrinks the X86 table
from 125443 to 117502 bytes.
llvm-svn: 97031
necessary to swap the operands to handle NaN and negative zero properly.
Also, reintroduce logic for checking for NaN conditions when forming
SSE min and max instructions, fixed to take into consideration NaNs and
negative zeros. This allows forming min and max instructions in more
cases.
llvm-svn: 97025
internal nodes with flag results. Record these with a new
OPC_MarkFlagResults opcode and use this to update the interior
nodes' flag results properly. This fixes CodeGen/X86/i256-add.ll
with the new isel.
llvm-svn: 97021
argument is non-null, pass it along to PHITranslateSubExpr so that it can
prefer using existing values that dominate the PredBB, instead of just
blindly picking the first equivalent value that it finds on a uselist.
Also when the DominatorTree is specified, have PHITranslateValue filter
out any result that does not dominate the PredBB. This is basically just
refactoring the check that used to be in GetAvailablePHITranslatedSubExpr
and also in GVN.
Despite my initial expectations, this change does not affect the results
of GVN for any testcases that I could find, but it should help compile time.
Before this change, if PHITranslateSubExpr picked a value that does not
dominate, PHITranslateWithInsertion would then insert a new value, which GVN
would later determine to be redundant and would replace. By picking a good
value to begin with, we save GVN the extra work of inserting and then
replacing a new value.
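A hedged sketch of the dominance filter described above (the helper name and
structure are my own, not the committed code):
#include "llvm/Analysis/Dominators.h"
#include "llvm/Instruction.h"
using namespace llvm;
// Keep a translated value only if it is usable (dominating) in PredBB.
static Value *FilterByDominance(Value *Translated, BasicBlock *PredBB,
                                DominatorTree *DT) {
  if (DT)
    if (Instruction *Inst = dyn_cast<Instruction>(Translated))
      if (!DT->dominates(Inst->getParent(), PredBB))
        return 0;   // does not dominate PredBB; caller treats it as unavailable
  return Translated;
}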
llvm-svn: 97010
Previously, LiveIntervalAnalysis would infer phi joins by looking for multiply
defined registers. That doesn't work if the phi join is implicitly defined in
all but one of the predecessors.
llvm-svn: 96994
The MicroBlaze is a highly configurable 32-bit soft-microprocessor for
use on Xilinx FPGAs. For more information see:
http://www.xilinx.com/tools/microblaze.htm
http://en.wikipedia.org/wiki/MicroBlaze
The current LLVM MicroBlaze backend generates assembly which can be
compiled using an appropriate binutils assembler.
llvm-svn: 96969
to be aligned with optimal nops. This patch does not change any functionality
and when the compiler is changed to use EmitCodeAlignment() it should also not
change the resulting output. Once the compiler change is made and everything
looks good the next patch with the table of optimal X86 nops will be added to
WriteNopData() changing the output. There are many FIXMEs in this patch which
will be removed when we have better target hooks (coming soon I hear).
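Roughly, the distinction is that EmitValueToAlignment pads with a fill value,
while EmitCodeAlignment promises the padding may be executed, so it can be
filled with (eventually optimal) nops via WriteNopData. A usage sketch, not
code from this patch:
#include "llvm/MC/MCStreamer.h"
using namespace llvm;
void emitAlignment(MCStreamer &OutStreamer, bool IsCode) {
  if (IsCode)
    OutStreamer.EmitCodeAlignment(16);                 // code: pad with nops
  else
    OutStreamer.EmitValueToAlignment(16, /*Value=*/0); // data: pad with zero bytes
}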
llvm-svn: 96963
ridiculously ginormous patterns and need more than one byte
of displacement for encodings. This fixes CellSPU/fdiv.ll.
SPU is still doing something else ridiculous though.
llvm-svn: 96833
126.gcc nightly tests. These failures uncovered latent bugs where machine DCE
could remove one half of a stack adjust down/up pair, causing PEI to assert.
This update fixes that, and the tests now pass.
llvm-svn: 96822
result nodes correctly. Note that this includes a horrible hack
in DAGISelHeader which cannot be fixed reasonably without
eliminating (parallel) from input patterns. That, in turn,
can't be done until we support writing multiple result patterns
for the X86and_flag and related multiple-result nodes.
llvm-svn: 96767
<4 x i32> with <4 x float> values if they end up the same
register class. This gets us up to 231 passes on the ppc
tests (only 7 fails).
llvm-svn: 96750
the point where it is at the 95% feature-complete mark; it just
needs result updating to be done (then testing, optimization,
etc.).
More specifically, this adds support for chain and flag handling
on the result nodes, support for sdnodexforms, support for variadic
nodes, memrefs, pinned physreg inputs, and probably lots of other
stuff.
In the old DAGISelEmitter, this deletes the dead code related to
OperatorMap, cleans up a variety of dead stuff handling "implicit
remapping" from things like globaladdr -> targetglobaladdr (which
is no longer used because globaladdr always needs to be legalized),
and some minor formatting fixes.
llvm-svn: 96716
for ARM to just check if a function has a FP to determine if it's safe
to simplify the stack adjustment pseudo ops prior to eliminating frame
indices. Allow targets to override the default behavior, and do so for ARM
and Thumb2.
llvm-svn: 96634
a loop exit value, so that if a loop gets deleted, ScalarEvolution
isn't stuck holding on to dangling SCEVAddRecExprs for that loop. This
fixes PR6339.
llvm-svn: 96626
Moderate the weight given to very small intervals.
The spill weight given to new intervals created when spilling was not
normalized in the same way as the original spill weights calculated by
CalcSpillWeights. That meant that restored registers would tend to hang around
because they had a much higher spill weight than unspilled registers.
This improves the runtime of a few tests by up to 10%, and there are no
significant regressions.
llvm-svn: 96613
Also, have tools output -help-hidden rather than refer to --help-hidden,
for consistency, and likewise adjust documentation. This doesn't change
every mention of --help, only those which seemed clearly safe.
llvm-svn: 96578
CheckComplexPattern function. Though it is logically const,
I don't have the fortitude to clean up all the targets now,
and it not being const doesn't block anything.
llvm-svn: 96426
use and only call IsProfitableToFold/IsLegalToFold on the load
being folded, like the old dagiselemitter does. This
substantially simplifies the code and improves opportunities for
sharing.
llvm-svn: 96368
IsLegalToFold and IsProfitableToFold. The generic version of the latter
simply checks whether the folding candidate has a single use.
This allows the target isel routines more flexibility in deciding whether
folding makes sense. The specific case we are interested in is folding
constant pool loads with multiple uses.
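A sketch of the generic default described here (the exact signature may differ
slightly from what was committed):
#include "llvm/CodeGen/SelectionDAGISel.h"
using namespace llvm;
// By default, folding a node into its user is profitable only when the node
// has a single use; targets override this to apply their own heuristics.
bool SelectionDAGISel::IsProfitableToFold(SDValue N, SDNode *U,
                                          SDNode *Root) const {
  return N.hasOneUse();
}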
llvm-svn: 96255
produce a table based matcher instead of gobs of C++ Code.
Though it's not done yet, the shrinkage seems promising:
the table for the X86 ISel is 75K and still has a lot of
optimization to come (compared to the ~1.5M of .o generated
the old way, much of which will go away).
The code is currently disabled by default (the #if 0 in
DAGISelEmitter.cpp). When enabled it generates a dead
SelectCode2 function in the DAGISel Header which will
eventually replace SelectCode.
There is still a lot of stuff left to do, which is
documented with a trail of FIXMEs.
llvm-svn: 96215
insert location has become an "inserted" instruction since the time
it was saved. If so, advance to the first non-"inserted" instruction.
llvm-svn: 96203
current insertion point, advance the current insertion point.
This avoids a use-before-def situation in a testcase extracted
from clang which is difficult to reduce to a reasonable-sized
regression test.
llvm-svn: 96151
created. This ensures it's updated at all times. It means targets which perform
dynamic stack alignment will know whether it is required and whether the frame
pointer register cannot be made available for register allocation.
This is a fix for rdar://7625239. Sorry, I can't create a reasonably sized test
case.
llvm-svn: 96069
bug fixes, and with improved heuristics for analyzing foreign-loop
addrecs.
This change also flattens IVUsers, eliminating the stride-oriented
groupings, which makes it easier to work with.
llvm-svn: 95975
reduce down to a single value. InstCombine already does this transformation
but DAG legalization may introduce new opportunities. This has turned out to
be important for ARM where 64-bit values are split up during type legalization:
InstCombine is not able to remove the PHI cycles on the 64-bit values but
the separate 32-bit values can be optimized. I measured the compile time
impact of this (running llc on 176.gcc) and it was not significant.
llvm-svn: 95951
lowering and requires that certain types exist in ValueTypes.h. Modified widening to
check if an op can trap and, if so, the widening algorithm will apply the op only to
the defined elements. It is safer to do this in widening because the optimizer can't
guarantee removing unused ops in some cases.
llvm-svn: 95823
The major win of this is that the code is simpler and they
print on the same line as the instruction again:
movl %eax, 96(%esp) ## 4-byte Spill
movl 96(%esp), %eax ## 4-byte Reload
cmpl 92(%esp), %eax ## 4-byte Folded Reload
jl LBB7_86
llvm-svn: 95738
Enhance the x86 backend to show the hex values of immediates in
comments when they are large. For example:
movl $1072693248, 4(%esp) ## imm = 0x3FF00000
llvm-svn: 95728
into TargetOpcodes.h. #include the new TargetOpcodes.h
into MachineInstr. Add new inline accessors (like isPHI())
to MachineInstr, and start using them throughout the
codebase.
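For example (a sketch of the intended usage, not a particular call site from
the patch):
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;
// Assumes MBB is non-empty.
static bool startsWithPHI(const MachineBasicBlock &MBB) {
  const MachineInstr *MI = &MBB.front();
  return MI->isPHI();   // equivalent to comparing getOpcode() against TargetOpcode::PHI
}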
llvm-svn: 95687
Initial skeleton and SCEVUnknown lowering implemented,
the rest should come relatively quickly. Move testcase
to new directory.
Move pass to right before SimplifyLibCalls - which is
moved down a bit so we can take advantage of a few opts.
llvm-svn: 95628
Document that MCExpr::Div, Mod, and the comparison operators are all
signed operators.
Document that the comparison operators' results are target-dependent.
Document that the behavior of shr is target-dependent.
llvm-svn: 95619
cast needs to adjust for a vtable pointer when going from base to
derived type (when the base doesn't have a vtable but the
derived type does).
llvm-svn: 95585
are from debug info. Add an iterator to MachineRegisterInfo
to skip Debug operands when walking the use list. No
functional change yet.
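A hedged sketch of how the debug-skipping walk is intended to look (the
iterator names are my best guess at what was added):
#include "llvm/CodeGen/MachineRegisterInfo.h"
using namespace llvm;
// Count the non-debug uses of Reg, ignoring debug-info operands.
static unsigned countRealUses(MachineRegisterInfo &MRI, unsigned Reg) {
  unsigned N = 0;
  for (MachineRegisterInfo::use_nodbg_iterator
         UI = MRI.use_nodbg_begin(Reg), UE = MRI.use_nodbg_end();
       UI != UE; ++UI)
    ++N;
  return N;
}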
llvm-svn: 95473
This time it's for real! I am going to hook this up in the frontends as well.
The inliner has some experimental heuristics for dealing with the inline hint.
When given a -respect-inlinehint option, functions marked with the inline
keyword are given a threshold just above the default for -O3.
We need some experiments to determine if that is the right thing to do.
llvm-svn: 95466
llc.cpp also defined these flags, meaning that when I linked all of LLVM's
libraries into a single shared library, llc crashed on startup with duplicate
flag definitions. This patch passes them through the EngineBuilder into
JIT::selectTarget().
llvm-svn: 95390
Instruction selection for X86 now can choose an instruction
sequence that will fit any address of any symbol, no matter
the pointer width. X86-64 uses a mov+call-via-reg sequence
for this.
llvm-svn: 95323
1-argument ExecutionEngine::create(Module*) ambiguous with the signature that
used to be ExecutionEngine::create(ModuleProvider*, defaulted_params). Fixed
by removing the 1-argument create(). Fixes PR6221.
llvm-svn: 95236
$ cat t.ll
@g = global i32 42
$ llc t.ll -o t.o -filetype=obj
$ nm t.o
00000000 D _g
There is still a ton of work left. Instructions are not being encoded
yet, apparently.
llvm-svn: 95162
the one used by the JIT. Remove all forms of
addPassesToEmitFileFinish except the one used by the static
code generator. Inline the remaining version of
addPassesToEmitFileFinish into its only caller.
llvm-svn: 95109
cases, and implement target-independent folding rules for alignof and
offsetof. Also, reassociate reassociative operators when it leads to
more folding.
Generalize ScalarEvolution's isOffsetOf to recognize offsetof on
arrays. Rename getAllocSizeExpr to getSizeOfExpr, and getFieldOffsetExpr
to getOffsetOfExpr, for consistency with analogous ConstantExpr routines.
Make the target-dependent folder promote GEP array indices to
pointer-sized integers, to make implicit casting explicit and exposed
to subsequent folding.
And add a bunch of testcases for this new functionality, and a bunch
of related existing functionality.
llvm-svn: 94987
"visit*" method is called, take the newly created nodes, walk them in a DFS
fashion, and if they don't have an ordering set, then give it one.
llvm-svn: 94757
use plain SCEVUnknowns with ConstantExpr::getSizeOf and
ConstantExpr::getOffsetOf constants. This eliminates a bunch of
special-case code.
Also add code for pattern-matching these expressions, for clients that
want to recognize them.
Move the logic for expanding array and vector sizeof expressions into
an element count times the element size from ScalarEvolution into the
regular constant folder, exposing the multiplication to subsequent
folding.
llvm-svn: 94737
This allows code gen and the exception table writer to cooperate to make sure
landing pads are associated with the correct invoke locations.
llvm-svn: 94726
Move the X86 implementation of function body emission up to
AsmPrinter::EmitFunctionBody, which works by calling the virtual
EmitInstruction method.
llvm-svn: 94716
Modules and ModuleProviders. Because the "ModuleProvider" simply materializes
GlobalValues now, and doesn't provide modules, it's renamed to
"GVMaterializer". Code that used to need a ModuleProvider to materialize
Functions can now materialize the Functions directly. Functions no longer use a
magic linkage to record that they're materializable; they simply ask the
GVMaterializer.
Because the C ABI must never change, we can't remove LLVMModuleProviderRef or
the functions that refer to it. Instead, because Module now exposes the same
functionality ModuleProvider used to, we store a Module* in any
LLVMModuleProviderRef and translate in the wrapper methods. The bindings to
other languages still use the ModuleProvider concept. It would probably be
worth some time to update them to follow the C++ more closely, but I don't
intend to do it.
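A rough sketch of the compatibility shim described above (the real wrapper
code may differ in detail):
#include "llvm-c/Core.h"
// An LLVMModuleProviderRef now simply carries the Module* it used to provide.
LLVMModuleProviderRef
LLVMCreateModuleProviderForExistingModule(LLVMModuleRef M) {
  return reinterpret_cast<LLVMModuleProviderRef>(M);
}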
Fixes http://llvm.org/PR5737 and http://llvm.org/PR5735.
llvm-svn: 94686