This makes TargetRegisterClass slightly slower. The next step will be making contains() faster.
Eventually TargetRegisterClass will be killed entirely.
llvm-svn: 135835
function on the TargetRegistry. Also clean it up and use the modern LLVM
utility libraries available instead of rolling a few things manually.
llvm-svn: 135756
register extra version information to be printed. This is designed to
allow those tools which link in various targets to also print those
registered targets under --version.
Currently this printing logic is embedded into the Support library
directly, which is a huge layering violation. This is the first step to hoisting
it out into the tools without adding lots of duplicated code.
llvm-svn: 135755
- Introduce JITDefault code model. This tells targets to set a different default
code model for JIT. This eliminates the ugly hack in TargetMachine where the
code model is changed after construction.
llvm-svn: 135580
(including compilation, assembly). Move relocation model Reloc::Model from
TargetMachine to MCCodeGenInfo so it's accessible even without TargetMachine.
llvm-svn: 135468
to MCRegisterInfo. Also initialize the mapping at construction time.
This patch eliminates TargetRegisterInfo from TargetAsmInfo. It's another step
towards fixing the layering violation.
llvm-svn: 135424
They mostly mirror the ArrayRef constructors, with two exceptions:
* There's no function mirroring the default constructor because it wouldn't have any parameters to deduce the right ArrayRef<T> from.
* There's an explicit SmallVector<T> overload in addition to the SmallVectorImpl<T> overload. Without it, the single-element overload would try to create an ArrayRef<SmallVector<T> > because it's a better match according to the overloading rules. (And both overloads are used in the current tree, so neither is redundant.)
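For reference, a minimal sketch of what such deduction helpers might look like; the helper name makeArrayRef and the exact overload set are assumptions, not necessarily what landed in the tree:

#include <cstddef>
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"

// Single element.
template <typename T>
llvm::ArrayRef<T> makeArrayRef(const T &OneElt) { return OneElt; }

// Pointer and length.
template <typename T>
llvm::ArrayRef<T> makeArrayRef(const T *Data, size_t Length) {
  return llvm::ArrayRef<T>(Data, Length);
}

// Any SmallVector, via its SmallVectorImpl base.
template <typename T>
llvm::ArrayRef<T> makeArrayRef(const llvm::SmallVectorImpl<T> &Vec) {
  return Vec;
}

// Explicit SmallVector<T, N> overload so a concrete SmallVector binds here
// instead of the single-element overload above.
template <typename T, unsigned N>
llvm::ArrayRef<T> makeArrayRef(const llvm::SmallVector<T, N> &Vec) {
  return Vec;
}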
llvm-svn: 135389
related bug fixes and corresponding assertions for uninitialized data
and a missing NULL check. Test cases will be included with the new LFTR.
llvm-svn: 135333
This gets rid of some of the gory splitting details in RAGreedy and
makes them available to future SplitKit clients.
Slightly generalize the functionality to support multi-way splitting.
Specifically, SplitEditor::splitLiveThroughBlock() supports switching
between different register intervals in a block.
llvm-svn: 135307
- The actual values are from the MCOI::OperandType enum.
- Teach tblgen to read it from the instruction definition.
- This is a better implementation of the hacks in edis.
llvm-svn: 135197
TargetAsmInfo, which in turn pulls in TargetRegisterInfo, etc. :-( There are
other cases of violations, but this is probably the worst.
This patch is but one small step towards fixing this. 500 more steps to go. :-(
llvm-svn: 135131
Update the debug output interface for MCParsedAsmOperand to have a print()
method which takes an output stream argument, an << operator which invokes
the print method using the given stream, and a dump() method which prints
the operand to the dbgs() stream. This makes the interface more consistent
with the rest of LLVM, and more convenient to use at the debugger command
line.
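A rough sketch of the interface just described; only the member names come from the text above, the class layout is illustrative:

#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"

class MCParsedAsmOperand {
public:
  virtual ~MCParsedAsmOperand() {}
  // Print the operand to the given stream.
  virtual void print(llvm::raw_ostream &OS) const = 0;
  // Print the operand to the dbgs() stream; handy at a debugger prompt.
  void dump() const { print(llvm::dbgs()); }
};

inline llvm::raw_ostream &operator<<(llvm::raw_ostream &OS,
                                     const MCParsedAsmOperand &MO) {
  MO.print(OS);
  return OS;
}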
llvm-svn: 135043
ExtractValueInst APIs to use ArrayRef: a new constructor taking a
(begin, end) range, and operators == and != for element-wise comparison.
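A small usage sketch of the two ArrayRef additions described above (caller code illustrative):

#include "llvm/ADT/ArrayRef.h"
using namespace llvm;

bool sameIndices(const unsigned *Begin, const unsigned *End,
                 ArrayRef<unsigned> RHS) {
  ArrayRef<unsigned> LHS(Begin, End);   // the new (begin, end) constructor
  return LHS == RHS;                    // element-wise comparison via the new ==
}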
llvm-svn: 135039
an assert on Darwin llvm-gcc builds.
Assertion failed: (castIsValid(op, S, Ty) && "Invalid cast!"), function Create, file /Users/buildslave/zorg/buildbot/smooshlab/slave-0.8/build.llvm-gcc-i386-darwin9-RA/llvm.src/lib/VMCore/Instructions.cpp, line 2067.
etc.
http://smooshlab.apple.com:8013/builders/llvm-gcc-i386-darwin9-RA/builds/2354
--- Reverse-merging r134893 into '.':
U include/llvm/Target/TargetData.h
U include/llvm/DerivedTypes.h
U tools/bugpoint/ExtractFunction.cpp
U unittests/Support/TypeBuilderTest.cpp
U lib/Target/ARM/ARMGlobalMerge.cpp
U lib/Target/TargetData.cpp
U lib/VMCore/Constants.cpp
U lib/VMCore/Type.cpp
U lib/VMCore/Core.cpp
U lib/Transforms/Utils/CodeExtractor.cpp
U lib/Transforms/Instrumentation/ProfilingUtils.cpp
U lib/Transforms/IPO/DeadArgumentElimination.cpp
U lib/CodeGen/SjLjEHPrepare.cpp
--- Reverse-merging r134888 into '.':
G include/llvm/DerivedTypes.h
U include/llvm/Support/TypeBuilder.h
U include/llvm/Intrinsics.h
U unittests/Analysis/ScalarEvolutionTest.cpp
U unittests/ExecutionEngine/JIT/JITTest.cpp
U unittests/ExecutionEngine/JIT/JITMemoryManagerTest.cpp
U unittests/VMCore/PassManagerTest.cpp
G unittests/Support/TypeBuilderTest.cpp
U lib/Target/MBlaze/MBlazeIntrinsicInfo.cpp
U lib/Target/Blackfin/BlackfinIntrinsicInfo.cpp
U lib/VMCore/IRBuilder.cpp
G lib/VMCore/Type.cpp
U lib/VMCore/Function.cpp
G lib/VMCore/Core.cpp
U lib/VMCore/Module.cpp
U lib/AsmParser/LLParser.cpp
U lib/Transforms/Utils/CloneFunction.cpp
G lib/Transforms/Utils/CodeExtractor.cpp
U lib/Transforms/Utils/InlineFunction.cpp
U lib/Transforms/Instrumentation/GCOVProfiling.cpp
U lib/Transforms/Scalar/ObjCARC.cpp
U lib/Transforms/Scalar/SimplifyLibCalls.cpp
U lib/Transforms/Scalar/MemCpyOptimizer.cpp
G lib/Transforms/IPO/DeadArgumentElimination.cpp
U lib/Transforms/IPO/ArgumentPromotion.cpp
U lib/Transforms/InstCombine/InstCombineCompares.cpp
U lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
U lib/Transforms/InstCombine/InstCombineCalls.cpp
U lib/CodeGen/DwarfEHPrepare.cpp
U lib/CodeGen/IntrinsicLowering.cpp
U lib/Bitcode/Reader/BitcodeReader.cpp
llvm-svn: 134949
and MCSubtargetInfo.
- Added methods to update subtarget features (used when targets automatically
detect subtarget features or switch modes).
- Teach X86Subtarget to update MCSubtargetInfo features bits since the
MCSubtargetInfo layer can be shared with other modules.
- This fixes .code 16 / .code 32 support since the mode switch is updated in
MCSubtargetInfo, so the MC code emitter can do the right thing.
llvm-svn: 134884
patch brings numerous advantages to LLVM. One way to look at it
is through diffstat:
109 files changed, 3005 insertions(+), 5906 deletions(-)
Removing almost 3K lines of code is a good thing. Other advantages
include:
1. Value::getType() is a simple load that can be CSE'd, not a mutating
union-find operation.
2. Types are uniqued and never move once created, defining away PATypeHolder.
3. Structs can be "named" now, and their name is part of the identity that
uniques them. This means that the compiler doesn't merge them structurally,
which makes the IR much less confusing (a short sketch of this follows below).
4. Now that there is no way to get a cycle in a type graph without a named
struct type, "upreferences" go away.
5. Type refinement is completely gone, which should make LTO much MUCH faster
in some common cases with C++ code.
6. Types are now generally immutable, so we can use "Type *" instead of
"const Type *" everywhere.
Downsides of this patch are that it removes some functions from the C API,
so people using those will have to upgrade to (not yet added) new API.
"LLVM 3.0" is the right time to do this.
There are still some cleanups pending after this, this patch is large enough
as-is.
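To illustrate point 3, a hedged sketch of creating a named (identified) struct; the exact creation API and header paths of this revision are assumptions:

#include "llvm/DerivedTypes.h"   // header layout of this era is assumed
#include "llvm/LLVMContext.h"
using namespace llvm;

void makeNodeType(LLVMContext &Ctx) {
  // A named struct is unique by identity, not by structure, so two
  // structurally identical named structs are never merged by the compiler.
  StructType *Node = StructType::create(Ctx, "struct.Node");
  Type *Elts[] = { Type::getInt32Ty(Ctx), PointerType::getUnqual(Node) };
  Node->setBody(Elts);   // the body may even refer back to the type itself
}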
llvm-svn: 134829
CPU, and feature string. Parsing some asm directives can change
subtarget state (e.g. .code 16) and it must be reflected in other
modules (e.g. MCCodeEmitter). That is, the MCSubtargetInfo instance
must be shared.
llvm-svn: 134795
RAGreedy::tryAssign will now evict interference from the preferred
register even when another register is free.
To support this, add the EvictionCost struct that counts how many hints
are broken by an eviction. We don't want to break one hint just to
satisfy another.
Rename canEvict to shouldEvict, and add the first bit of eviction policy
that doesn't depend on spill weights: Always make room in the preferred
register as long as the evictees can be split and aren't already
assigned to their preferred register.
Also make the CSR avoidance more accurate. When looking for a cheaper
register it is OK to use a new volatile register. Only CSR aliases that
have never been used before should be avoided.
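A rough sketch of what such an eviction-cost record might look like; the field names and tie-breaking order are assumptions, not the committed definition:

struct EvictionCost {
  unsigned BrokenHints;  // total number of hints that would be broken
  float MaxWeight;       // maximum spill weight among the evictees
  EvictionCost() : BrokenHints(0), MaxWeight(0) {}
  // Breaking fewer hints is always preferred; spill weight breaks ties.
  bool operator<(const EvictionCost &O) const {
    if (BrokenHints != O.BrokenHints)
      return BrokenHints < O.BrokenHints;
    return MaxWeight < O.MaxWeight;
  }
};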
llvm-svn: 134735
This allows the (many) pseudo-instructions we have that map onto a single
real instruction to have their expansion during MC lowering handled
automatically instead of the current cumbersome manual expansion required.
These sorts of pseudos are common when an instruction is used in situations
that require different MachineInstr flags (isTerminator, isBranch, et al.)
than the generic instruction description has. For example, using a move
to the PC to implement a branch.
llvm-svn: 134704
We have to do this in DAGBuilder instead of DAGCombiner, because the exact bit is lost after building.
struct foo { char x[24]; };
long bar(struct foo *a, struct foo *b) { return a-b; }
is now compiled into
movl 4(%esp), %eax
subl 8(%esp), %eax
sarl $3, %eax
imull $-1431655765, %eax, %eax
instead of
movl 4(%esp), %eax
subl 8(%esp), %eax
movl $715827883, %ecx
imull %ecx
movl %edx, %eax
shrl $31, %eax
sarl $2, %edx
addl %eax, %edx
movl %edx, %eax
llvm-svn: 134695
- Each target asm parser now creates its own MCSubtargetInfo (if needed).
- Changed AssemblerPredicate to take subtarget features which tablegen uses
to generate asm matcher subtarget feature queries. e.g.
"ModeThumb,FeatureThumb2" is translated to
"(Bits & ModeThumb) != 0 && (Bits & FeatureThumb2) != 0".
llvm-svn: 134678
numbers should be printed instead of symbolic register names in
MCAsmStreamer::EmitRegisterName. This is necessary because some versions of
GNU assembler won't accept code in which symbolic register names are used in
cfi directives. There is no change in behavior unless the flag is explicitly
set to true by a backend.
llvm-svn: 134635
hasPredecessorHelper function allows predecessors to be cached to speed up
repeated invocations. This fixes PR10186.
X.isPredecessorOf(Y) now just calls Y.hasPredecessor(X)
Y.hasPredecessor(X) calls Y.hasPredecessorHelper(X, Visited, Worklist) with
empty Visited and Worklist sets (i.e. no caching over invocations).
Y.hasPredecessorHelper(X, Visited, Worklist) caches search state in Visited
and Worklist to speed up repeated calls. The Visited set is searched for X
before going to the worklist to further search the DAG if necessary.
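For example, repeated queries against the same node can share the cached state roughly like this (the exact container types expected by the helper are assumptions):

#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/SelectionDAGNodes.h"
using namespace llvm;

// Returns true if any candidate node is a predecessor of Y. The Visited set
// and Worklist persist across iterations, so DAG nodes already explored by an
// earlier query are not re-visited by later ones.
static bool anyIsPredecessorOf(SDNode *Y, ArrayRef<SDNode *> Candidates) {
  SmallPtrSet<const SDNode *, 32> Visited;
  SmallVector<const SDNode *, 16> Worklist;
  for (unsigned i = 0, e = Candidates.size(); i != e; ++i)
    if (Y->hasPredecessorHelper(Candidates[i], Visited, Worklist))
      return true;
  return false;
}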
llvm-svn: 134592
vec.insert(vec.begin(), vec[3]);
The issue was that vec[3] returns a reference into the vector, which is invalidated when insert() memmove's the elements down to make space. The method needs to specifically detect and handle this case to correctly match std::vector's semantics.
Thanks to Howard Hinnant for clarifying the correct behavior, and explaining how std::vector solves this problem.
llvm-svn: 134554
For now this is distinct from isCodeGenOnly, as code-gen-only
instructions can (and often do) still have encoding information
associated with them. Once we've migrated all of them over to true
pseudo-instructions that are lowered to real instructions prior to
the printer/emitter, we can remove isCodeGenOnly and just use isPseudo.
llvm-svn: 134539
Remove the assert that triggers if SuccIterator is constructed for a basic block
without a terminator instruction. Instead of triggering an assert, a succ_end()
iterator is returned. This models a basic block with zero successors and allows
us to use F->viewCFG() on incompletely constructed functions.
llvm-svn: 134398
Add a MI->emitError() method that the backend can use to report errors
related to inline assembly. Call it from X86FloatingPoint.cpp when the
constraints are wrong.
This enables proper clang diagnostics from the backend:
$ clang -c pr30848.c
pr30848.c:5:12: error: Inline asm output regs must be last on the x87 stack
__asm__ ("" : "=u" (d)); /* { dg-error "output regs" } */
^
1 error generated.
llvm-svn: 134307
itineraries.
- Refactor TargetSubtarget to be based on MCSubtargetInfo.
- Change tablegen generated subtarget info to initialize MCSubtargetInfo
and hide more details from targets.
llvm-svn: 134257
be the first encoded as the first feature. It then uses the CPU name to look up
features / scheduling itinerary even though clients know full well the CPU name
being used to query these properties.
The fix is to just have the clients explicitly pass the CPU name!
llvm-svn: 134127
sink them into MC layer.
- Added MCInstrInfo, which captures the tablegen generated static data. Changed
TargetInstrInfo so it's based off MCInstrInfo.
llvm-svn: 134021
Both become <earlyclobber> defs on the INLINEASM MachineInstr, but we
now use two different asm operand kinds.
The new Kind_Clobber is treated identically to the old
Kind_RegDefEarlyClobber for now, but x87 floating point stack inline
assembly does care about the difference.
This will pop a register off the stack:
asm("fstp %st" : : "t"(x) : "st");
While this will pop the input and push an output:
asm("fst %st" : "=&t"(r) : "t"(x));
We need to know if ST0 was a clobber or an output operand, and we can't
depend on <dead> flags for that.
llvm-svn: 133902
Move the target-specific RecordRelocation logic out of the generic MC
MachObjectWriter and into the target-specific object writers. This allows
nuking quite a bit of target knowledge from the supposedly target-independent
bits in lib/MC.
llvm-svn: 133844
target machine from those that are only needed by codegen. The goal is to
sink the essential target description into MC layer so we can start building
MC based tools without needing to link in the entire codegen.
First step is to refactor TargetRegisterInfo. This patch added a base class
MCRegisterInfo which TargetRegisterInfo is derived from. Changed TableGen to
separate register description from the rest of the stuff.
llvm-svn: 133782
It has only one user. This eliminates the last include of
config.h from the public headers -- ideally, config.h
shouldn't even be installed by `make install` anymore.
llvm-svn: 133713
Replace it with llvm-config.h, which defines a subset of
config.h's macros "so that they can be in exported headers
and won't override package specific directives", e.g.,
PACKAGE_NAME.
Endian.h wasn't using any macros at all though, so just delete
the include there instead.
llvm-svn: 133712
"Reinstate r133435 and r133449 (reverted in r133499) now that the clang
self-hosted build failure has been fixed (r133512)."
Due to some additional warnings.
llvm-svn: 133700
If the linker supports it, this will hold the CIE and FDE information in a
compact format. The implementation of the compact unwinding emission is coming
soon.
llvm-svn: 133658
representing a constant reference to ValType. Normally this is just
"const ValType &", but when ValType is a std::vector we want to use
ArrayRef as the reference type.
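An illustrative sketch of such a trait; the names here are hypothetical, not the committed ones:

#include <vector>
#include "llvm/ADT/ArrayRef.h"

// Default: a constant reference to ValType is simply const ValType &.
template <typename ValType>
struct ConstRefType {
  typedef const ValType &type;
};

// When ValType is a std::vector, use ArrayRef as the reference type instead.
template <typename T>
struct ConstRefType<std::vector<T> > {
  typedef llvm::ArrayRef<T> type;
};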
llvm-svn: 133611
Change PHINodes to store simple pointers to their incoming basic blocks,
instead of full-blown Uses.
Note that this loses an optimization in SplitCriticalEdge(), because we
can no longer walk the use list of a BasicBlock to find phi nodes. See
the comment I removed starting "However, the foreach loop is slow for
blocks with lots of predecessors".
Extend replaceAllUsesWith() on a BasicBlock to also update any phi
nodes in the block's successors. This mimics what would have happened
when PHINodes were proper Users of their incoming blocks. (Note that
this only works if OldBB->replaceAllUsesWith(NewBB) is called when
OldBB still has a terminator instruction, so it still has some
successors.)
llvm-svn: 133435
I don't think the AugmentedUse struct buys us much, either in
correctness or in ease of use. Ditch it, and simplify Use::getUser() and
User::allocHungoffUses().
llvm-svn: 133433
all over the place in different styles and variants. Standardize on two
preferred entrypoints: one that takes a StructType and ArrayRef, and one that
takes StructType and varargs.
In cases where there isn't a struct type convenient, we now add a
ConstantStruct::getAnon method (whose name will make more sense after a few
more patches land).
It would be "really really nice" if the ConstantStruct::get and
ConstantVector::get methods didn't make temporary std::vectors.
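A hedged usage sketch of the two preferred entrypoints; the exact parameter lists and header paths are assumptions based on the text above:

#include "llvm/Constants.h"      // header layout of this era is assumed
#include "llvm/DerivedTypes.h"
using namespace llvm;

Constant *buildPair(StructType *STy, Constant *A, Constant *B) {
  Constant *Elts[] = { A, B };
  return ConstantStruct::get(STy, Elts);   // StructType + ArrayRef form
}

Constant *buildAnonPair(Constant *A, Constant *B) {
  Constant *Elts[] = { A, B };
  return ConstantStruct::getAnon(Elts);    // anonymous (literal) struct
}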
llvm-svn: 133412
A RegisterTuples instance is used to synthesize super-registers by
zipping together lists of sub-registers. This is useful for generating
pseudo-registers representing register sequence constraints like 'two
consecutive GPRs', or 'an even-odd pair of floating point registers'.
The RegisterTuples def can be used in register set operations when
building register classes. That is the only way of accessing the
synthesized super-registers.
For example, the ARM QQ register class of pseudo-registers could have
been formed like this:
// Form pairs Q0_Q1, Q2_Q3, ...
def QQPairs : RegisterTuples<[qsub_0, qsub_1],
                             [(decimate QPR, 2),
                              (decimate (shl QPR, 1), 2)]>;
def QQ : RegisterClass<..., (add QQPairs)>;
Similarly, pseudo-registers representing '3 consecutive D-regs with
wraparound' look like:
// Form D0_D1_D2, D1_D2_D3, ..., D30_D31_D0, D31_D0_D1.
def DSeqTriples : RegisterTuples<[dsub_0, dsub_1, dsub_2],
                                 [(rotl DPR, 0),
                                  (rotl DPR, 1),
                                  (rotl DPR, 2)]>;
TableGen automatically computes aliasing information for the synthesized
registers.
Register tuples are still somewhat experimental. We still need to see
how they interact with MC.
llvm-svn: 133407
Targets that need to change the default allocation order should use the
AltOrders mechanism instead. See the X86 and ARM targets for examples.
The allocation_order_begin() and allocation_order_end() methods have been
replaced with getRawAllocationOrder(), and there are further support
functions in RegisterClassInfo.
It is no longer possible to insert arbitrary code into generated
register classes. This is a feature.
llvm-svn: 133332
A register class can define AltOrders and AltOrderSelect instead of
defining method protos and bodies. The AltOrders lists can be defined
with set operations, and TableGen can verify that the alternative
allocation orders only contain valid registers.
This is currently an opt-in feature, and it is still possible to
override allocation_order_begin/end. That will not be true for long.
llvm-svn: 133320
The LSDA is a bit difficult for the non-initiated to read. Even with comments,
it's not always clear what's going on. This wraps the ASM streamer in a class
that retains the LSDA and then emits a human-readable description of what's
going on in it.
So instead of having to make sense of:
Lexception1:
.byte 255
.byte 155
.byte 168
.space 1
.byte 3
.byte 26
Lset0 = Ltmp7-Leh_func_begin1
.long Lset0
Lset1 = Ltmp812-Ltmp7
.long Lset1
Lset2 = Ltmp913-Leh_func_begin1
.long Lset2
.byte 3
Lset3 = Ltmp812-Leh_func_begin1
.long Lset3
Lset4 = Leh_func_end1-Ltmp812
.long Lset4
.long 0
.byte 0
.byte 1
.byte 0
.byte 2
.byte 125
.long __ZTIi@GOTPCREL+4
.long __ZTIPKc@GOTPCREL+4
you can read this instead:
## Exception Handling Table: Lexception1
## @LPStart Encoding: omit
## @TType Encoding: indirect pcrel sdata4
## @TType Base: 40 bytes
## @CallSite Encoding: udata4
## @Action Table Size: 26 bytes
## Action 1:
## A throw between Ltmp7 and Ltmp812 jumps to Ltmp913 on an exception.
## For type(s): __ZTIi@GOTPCREL+4 __ZTIPKc@GOTPCREL+4
## Action 2:
## A throw between Ltmp812 and Leh_func_end1 does not have a landing pad.
llvm-svn: 133286
Also switch the return type to ArrayRef<unsigned> which works out nicely
for ARM's implementation of this function because of the clever ArrayRef
constructors.
The name change indicates that the returned allocation order may contain
reserved registers, as has been the case for a while.
llvm-svn: 133216
BranchProbabilityInfo (except setEdgeWeight, which is not available here).
Branch weights are kept in MachineBasicBlocks. To turn off this analysis,
set -use-mbpi=false.
llvm-svn: 133184
This is intended to support using REG_SEQUENCE SDNodes with type MVT::untyped, and is part of the long road to eliminating some of the hacks we currently use to support register pairs and other strange constraints, particularly on ARM NEON.
llvm-svn: 133178
This virtual function will replace allocation_order_begin/end as the one
to override when implementing custom allocation orders. It is simpler to
have one function return an ArrayRef than having two virtual functions
computing different ends of the same array.
Use getRawAllocationOrder() in place of allocation_order_begin() where
it makes sense, but leave some clients that look like they really want
the filtered allocation orders from RegisterClassInfo.
llvm-svn: 133170
This simplifies many of the target description files since it is common
for register classes to be related or contain sequences of numbered
registers.
I have verified that this doesn't change the files generated by TableGen
for ARM and X86. It alters the allocation order of MBlaze GPR and Mips
FGR32 registers, but I believe the change is benign.
llvm-svn: 133105
optimizations when emitting calls to the function; instead those calls may
use faster relocations which require the function to be immediately resolved
upon loading the dynamic object featuring the call. This is useful when it
is known that the function will be called frequently and pervasively and
therefore there is no merit in delaying binding of the function.
Currently only implemented for x86-64, where it turns into a call through
the global offset table.
Patch by Dan Gohman, who assures me that he's going to add LangRef documentation
for this once it's committed.
llvm-svn: 133080
Re-apply 133010, with fixes for inline assembler.
Original commit message:
"When an assembler local symbol is used but not defined in a module, a
Darwin assembler wants to issue a diagnostic to that effect."
Added fix to only perform the check when finalizing, as otherwise we're not
done and undefined symbols may simply not have been encountered yet.
Passes "make check" and a self-host check on Darwin.
llvm-svn: 133071
At the time I wrote this code (circa 2007), TargetRegisterInfo was using a std::set to perform these queries. Switching to the static hashtables was an obvious improvement, but in reality there's no reason to do anything other than a simple scan.
With this change, total LLC time on a whole-program 403.gcc is reduced by approximately 1.5%, almost all of which comes from a 15% reduction in LiveVariables time. It also reduces the binary size of LLC by 86KB, thanks to eliminating a bunch of very large static tables.
llvm-svn: 133051
toString() now takes an optional bool argument that,
depending on the radix, adds the appropriate prefix
to the integer's string representation that makes it into a
meaningful C literal, e.g.:
hexadecimal: '-f' becomes '-0xf'
octal: '77' becomes '077'
binary: '110' becomes '0b110'
Patch by nobled@dreamwidth.org!
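A minimal usage sketch, assuming the new behavior is exposed as a trailing bool on APInt::toString; the parameter name shown is illustrative:

#include "llvm/ADT/APInt.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

void printAsCLiteral(const APInt &Val, unsigned Radix) {
  SmallString<32> Str;
  // Signed form with the radix-appropriate prefix (0x, 0, 0b) prepended.
  Val.toString(Str, Radix, /*Signed=*/true, /*formatAsCLiteral=*/true);
  outs() << Str << "\n";
}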
llvm-svn: 133032
When an assembler local symbol is used but not defined in a module, a
Darwin assembler wants to issue a diagnostic to that effect.
rdar://9559714
llvm-svn: 133010
Make the hash tables as small as possible while ensuring that all
lookups can be done in less than 8 probes.
Cut the aliases hash table in half by storing only a < b pairs; aliasing
is a symmetric relation.
Use larger multipliers on the initial hash function to ensure that it
properly covers the whole table, and to resolve some clustering in the
very regular ARM register bank.
This reduces the size of most of these tables by 4x - 8x. For instance,
the ARM tables shrink from 48 KB to 8 KB.
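A toy illustration of the a < b trick for the symmetric aliases relation; the real implementation uses the generated hash tables, not a std::set:

#include <algorithm>
#include <set>
#include <utility>

static std::set<std::pair<unsigned, unsigned> > AliasPairs;  // a < b pairs only

bool regsAlias(unsigned A, unsigned B) {
  if (A == B)
    return true;
  if (A > B)
    std::swap(A, B);                       // canonicalize to the stored order
  return AliasPairs.count(std::make_pair(A, B)) != 0;
}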
llvm-svn: 132888
Besides moving structural computations to CodeGenRegisters.cpp, this
also well-defines the order of these lists:
- Sub-register lists come from a pre-order traversal of the graph
defined by the SubRegs lists in the .td files.
- Super-register lists are topologically ordered so no register comes
before any of its sub-registers. When the sub-register graph is not a
tree, independent super-registers appear in numerical order.
- Lists of overlapping registers are ordered according to register
number.
This reverses the order of the super-regs lists, but nobody was
depending on that. The previous order of the overlaps lists was odd, and
it may have depended on the precise behavior of std::stable_sort.
The old computations are still there, but will be removed shortly.
llvm-svn: 132881
Patch by: Jakub Staszak!
Introduces BranchProbability. Changes unsigned to uint32_t throughout, and to
uint64_t only where overflow is expected.
llvm-svn: 132867