floating point equality comparisons into integer ones with -ffast-math. The
issue is that the optimization caused +0.0 != -0.0.
Now the optimization is only done when one side is known to be 0.0. The other
side's sign bit is masked off for the comparison.
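In C terms, the fixed pattern looks roughly like this (an illustrative
sketch, not the actual DAG combine):

#include <stdint.h>
#include <string.h>

// Under -ffast-math (no NaNs), "x == 0.0f" can be lowered to an integer
// test, but the sign bit must be masked off so that -0.0 (bit pattern
// 0x80000000) still compares equal to +0.0.
static int isZeroFast(float x) {
  uint32_t Bits;
  memcpy(&Bits, &x, sizeof(Bits));
  return (Bits & 0x7fffffffu) == 0;
}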
rdar://10964603
llvm-svn: 151861
to do more invasive refactoring here to get FoldingSet to use size_t or
even hash_code directly, but for now this is a good first step to remove
Yet Another Hashing Algorithm from LLVM.
llvm-svn: 151859
This function could have r12 live across a function call when compiling
thumb1 code.
The test case for this is not included because it is very long. It must
provoke emergency spilling near a function call. The behavior is
provoked by MultiSource/Applications/JM/lencod, and it triggers an
assertion in the scavenger.
<rdar://problem/10963642>
llvm-svn: 151855
fixups that are being used to determine section offsets. Reduces
the total number of fixups by 50% for a non-trivial testcase.
Part of rdar://10413936
llvm-svn: 151852
and stores was added.
- SelectAddr should return false if Parent is an unaligned f32 load or store.
- Only aligned load and store nodes should be matched to select reg+imm
floating point instructions.
- MIPS does not have support for f64 unaligned load or store instructions.
llvm-svn: 151843
so that the test will not fail when run on an Intel Atom
processor, due to the Atom scheduler producing an instruction sequence that is
different from that which is normally expected.
llvm-svn: 151832
of the proposed standard hashing interfaces (N3333), and to use
a modified and tuned version of the CityHash algorithm.
Some of the highlights of this change:
-- Significantly higher quality hashing algorithm with very well
distributed results, and extremely few collisions. Should be close to
a checksum for up to 64-bit keys. Very little clustering or clumping of
hash codes, to better distribute load on probed hash tables.
-- Built-in support for reserved values.
-- Simplified API that composes cleanly with other C++ idioms and APIs.
-- Better scaling performance as keys grow. This is the fastest
algorithm I've found and measured for moderately sized keys (such as
those that show up in some of the uniquing and folding use cases).
-- Support for enabling per-execution seeds to prevent table ordering
or other artifacts of hashing algorithms from impacting the output of
LLVM. The seeding would make each run different and highlight these
problems during bootstrap.
This implementation was tested extensively using the SMHasher test
suite, and passed with flying colors, doing even better than the
original CityHash algorithm.
I've included a unittest, although it is somewhat minimal at the moment.
I've also added (or refactored into the proper location) type traits
necessary to implement this, and converted users of GeneralHash over.
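For reference, a minimal sketch of what a converted client looks like
(hash_value is the ADL extension point proposed in N3333, and
hash_combine mixes the fields):

#include "llvm/ADT/Hashing.h"

struct Key {
  unsigned Opcode;
  const void *Operand;
};

// Free function found by argument-dependent lookup.
llvm::hash_code hash_value(const Key &K) {
  return llvm::hash_combine(K.Opcode, K.Operand);
}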
My only immediate concern with this implementation is the performance
of hashing small keys. I've already started working to improve this, and
will continue to do so. Currently, the only faster algorithms produce
lower-quality results, but it is likely there is a better compromise
than the current one.
Many thanks to Jeffrey Yasskin who did most of the work on the N3333
paper, pair-programmed some of this code, and reviewed much of it. Many
thanks also go to Geoff Pike and Jyrki Alakuijala, the original
authors of CityHash on which this is heavily based, and Austin Appleby
who created MurmurHash and the SMHasher test suite.
Also thanks to Nadav, Tobias, Howard, Jay, Nick, Ahmed, and Duncan for
all of the review comments! If there are further comments or concerns,
please let me know and I'll jump on 'em.
llvm-svn: 151822
Allows us to de-virtualize the function and provides access to it in
the instruction printer, which is useful for handling composite
physical registers (e.g., ARM register lists).
llvm-svn: 151815
This reverts commit 151760.
We want to move getSubReg() from TargetRegisterInfo into MCRegisterInfo,
but to do that, the type of the lookup table needs to be the same for
all targets.
llvm-svn: 151814
This allows us to make TRC non-polymorphic and value-initializable, eliminating a huge static
initializer and a ton of cruft from the generated code.
Shrinks ARMBaseRegisterInfo.o by ~100k.
llvm-svn: 151806
Simply treat bundles as instructions. Spill code is inserted between
bundles, never inside a bundle. Rewrite all operands in a bundle at
once.
Don't attempt memory operand folding inside bundles.
llvm-svn: 151787
* Add begin_dynamic_table() / end_dynamic_table() private interface to ELFObjectFile.
* Add begin_libraries_needed() / end_libraries_needed() interface to ObjectFile, for grabbing the list of needed libraries for a shared object or dynamic executable.
* Implement this new interface completely for ELF, leave stubs for COFF and MachO.
* Add 'llvm-readobj' tool for dumping ObjectFile information.
llvm-svn: 151785
This allows the function to be inlined, and makes it suitable for use in
getInstructionIndex().
Also provide a const version. C++ is great for touch typing practice.
llvm-svn: 151782
- The search bounds are constant; in the worst case (ARM target) it will scan over 30 uint16_ts.
- This method isn't very hot; I had trouble finding a testcase where it's called more than a dozen times (no perf impact).
llvm-svn: 151773
Instead of nested switch statements, use a lookup table. On ARM, this replaces
a 23k (x86_64 release build) function with a 16k table. It's not unlikely to
be faster, as well.
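The shape of the change, as a generic sketch (names and table contents
are illustrative; the real table is generated):

#include <assert.h>
#include <stdint.h>

enum { NumOuter = 3, NumInner = 4 }; // hypothetical key spaces

// One flat table replaces the nested switches; each entry holds what
// the innermost case used to return.
static const uint16_t Table[NumOuter][NumInner] = {
  { 0, 1,  2,  3 },
  { 4, 5,  6,  7 },
  { 8, 9, 10, 11 },
};

static uint16_t lookup(unsigned Outer, unsigned Inner) {
  assert(Outer < NumOuter && Inner < NumInner && "key out of range");
  return Table[Outer][Inner];
}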
llvm-svn: 151751
std::vector.
- Good for 1-2% speedup on writing PCH for Cocoa.h.
- Clang-side API match to follow shortly; there wasn't an easy way to make this
non-breaking.
llvm-svn: 151750
Rename ST_External to ST_Unknown, and slightly change its semantics. It now only indicates that the symbol's type
is unknown, not that the symbol is undefined. (For that, use ST_Undefined).
llvm-svn: 151696
This function does more or less the same as
MI::readsWritesVirtualRegister(), but it supports bundles as well.
It also determines if any constraint requires reading and writing
operands to use the same register. Most clients want to know.
Use the more modern MO.readsReg() instead of trying to sort out undefs
and partial redefines. Stop supporting the extra full <imp-def> operand
as an alternative to <def,undef> sub-register defines.
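For reference, the per-instruction query this mirrors looks like the
following sketch (the bundle-aware variant additionally reports whether
a tied constraint forces a read and a write to share a register):

#include "llvm/CodeGen/MachineInstr.h"
#include <utility>

// Ask how MI uses the virtual register Reg.
static void inspect(const llvm::MachineInstr *MI, unsigned Reg) {
  std::pair<bool, bool> RW = MI->readsWritesVirtualRegister(Reg);
  if (RW.second && !RW.first) {
    // Reg is fully redefined here; no value flows in.
  }
}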
llvm-svn: 151690
Extract a base class and provide four specific sub-classes for iterating
over const/non-const bundles/instructions.
This eliminates the mystery bool constructor argument.
llvm-svn: 151684
find root names on Unix.
- This fixes make_absolute so that it doesn't effectively always call
current_path() on Unix systems.
- I think the API probably needs cleanup in this area, but I'll let Michael
handle that.
llvm-svn: 151681
Without this hook, functions w/ a completely empty body (including no
epilogue) will cause an MCEmitter assertion failure.
For example,
define internal fastcc void @empty_function() {
unreachable
}
rdar://10947471
llvm-svn: 151673
SlotIndexes are not assigned to instructions inside bundles, but it is
still valid to look up the index of those instructions.
The reverse getInstructionFromIndex() will return the first instruction
in the bundle.
llvm-svn: 151672
methods are no longer needed now that LinearScan has gone away.
(Contains tweaks to trivialSpillEverywhere to enable the removal of getNewVRegs.)
llvm-svn: 151658
debug info for assembly files. We were already doing the right thing when
producing debug info for C/C++.
ELF linkers don't know DWARF, so they depend on these relocations to produce
valid DWARF output.
llvm-svn: 151655
Clang builds. The detection logic for compilers that support the warning
isn't working. Rafael is going to investigate it, but didn't want people
to have to wade through build spam until then.
llvm-svn: 151649
To avoid problems with zero shifts when getting the bits that move between
words we use a trick: first shift by amount-1, then do another shift by one.
When the amount is 0 (and the size 32) we first shift by 31, then by one,
instead of by 32.
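A C-level sketch of the trick for the left-shift case (word size 32, Amt
in [0, 31]; illustrative code, not the actual lowering):

#include <stdint.h>

// The bits moving from Lo into Hi would need "Lo >> (32 - Amt)", which
// is undefined when Amt == 0. Shifting by 31 - Amt and then by one more
// computes the same value and is always defined.
static uint32_t shiftLeftHiWord(uint32_t Lo, uint32_t Hi, unsigned Amt) {
  uint32_t Carry = (Lo >> (31 - Amt)) >> 1; // 0 when Amt == 0
  return (Hi << Amt) | Carry;
}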
Also fix a latent bug that emitted the low and high words in the wrong order
when shifting right.
Fixes PR12113.
llvm-svn: 151637
When the GEP index is a vector of pointers, the code that calculated the size
of the element started from the vector type, and not the contained pointer type.
As a result, instead of looking at the data element pointed by the vector, this
code used the size of the vector. This works for 32-bit members (on 32-bit
systems), but not for other types. Added code to peel the vector type and
added a test.
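A hedged sketch of the peeling step against the typed-pointer API of
this era (the actual change sits in the GEP size computation and may
differ in detail):

#include "llvm/DerivedTypes.h"
#include "llvm/Target/TargetData.h"
using namespace llvm;

// Look through a vector of pointers to the pointee before asking for
// the element size.
static uint64_t gepElementSize(Type *Ty, const TargetData &TD) {
  if (VectorType *VTy = dyn_cast<VectorType>(Ty))
    Ty = VTy->getElementType(); // peel the vector, leaving the pointer
  return TD.getTypeAllocSize(cast<PointerType>(Ty)->getElementType());
}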
llvm-svn: 151626
the processor keeps a return address stack (RAS) which stores the address
and the instruction execution state of the instruction after a function-call
type branch instruction.
Calling a "noreturn" function with normal call instructions (e.g. bl) can
corrupt the RAS and cause 100% return misprediction, so LLVM should use an
unconditional branch instead, i.e.
mov lr, pc
b _foo
The "mov lr, pc" is issued in order to get proper backtrace.
rdar://8979299
llvm-svn: 151623
When an outgoing call takes more than 2k of arguments on the stack, we
don't allocate that call frame in the prolog, but adjust the stack
pointer immediately before the call instead.
This causes problems with the emergency spill slot because PEI can't
track stack pointer adjustments on the second pass, and if the outgoing
arguments are too big, SP can't be used to reach the emergency spill
slot at all.
Work around these problems by ensuring there is a base or frame pointer
that can be used to access the emergency spill slot.
<rdar://problem/10917166>
llvm-svn: 151604
We rely on the linker to resolve calls to the appropriate BL/BLX instruction
to make interworking function correctly. It uses the symbol in the
relocation to do that, so we need to be careful about being too clever.
To enable this for ARM mode, split the BL/BLX fixup kind off from the
unconditional-branch fixups.
rdar://10927209
llvm-svn: 151571
After the SlotIndex slot names were updated, it is possible to apply
stricter checks to live intervals.
Also treat bundles as bags of operands when checking live intervals.
llvm-svn: 151531
The MIOperands iterator can visit operands on a single instruction, or
all operands in a bundle. This simplifies code like the register
allocator that treats bundles as a set of operands.
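Client code then needs only one loop; a sketch (header and constructor
details hedged):

#include "llvm/CodeGen/MachineInstrBundle.h"
using namespace llvm;

// Visit every register operand MI has, bundled or not.
static void visitRegOperands(MachineInstr *MI) {
  for (MIOperands MO(MI); MO.isValid(); ++MO)
    if (MO->isReg()) {
      // inspect *MO
    }
}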
llvm-svn: 151529
value numbers to be assigned when calculating any particular value number.
Enhance the logic that detects new value numbers to take this into account,
for a tiny compile time speedup. Fix a comment typo while there.
llvm-svn: 151522
%cmp (eg: A==B) we already replace %cmp with "true" under the true edge, and
with "false" under the false edge. This change enhances this to replace the
negated compare (A!=B) with "false" under the true edge and "true" under the
false edge. Reported to improve perlbench results by 1%.
llvm-svn: 151517
verifier does. This correctly handles invoke.
Thanks to Duncan, Andrew and Chris for the comments.
Thanks to Joerg for the early testing.
llvm-svn: 151469
Reverting this because it breaks static linking on ppc64. Specifically, it may be linkonce_odr functions that are the problem.
With this patch, if you link statically, calls to some functions end up calling their descriptor addresses instead
of their entry points. This causes the execution to fail with SIGILL (b/c the descriptor address just
has some pointers, not code).
llvm-svn: 151433
[Joe Groff] Hi everyone. My previous patch applied as r151382 had a few problems:
Clang raised a warning, and X86 LowerOperation would assert out for
fptoui f64 to i32 because it improperly lowered to an illegal
BUILD_PAIR. Here's a patch that addresses these issues. Let me know if
any other changes are necessary. Thanks.
llvm-svn: 151432
are optimization hints, but at -O0 we're not optimizing. This becomes a problem
when the alwaysinline attribute is abused.
rdar://10921594
llvm-svn: 151429
uses of the vreg, since the old kills may no longer be valid. This was causing
-verify-machineinstrs to complain about uses after kills, and could potentially
have been causing subtle register allocation issues, but I haven't come across a
test case yet.
llvm-svn: 151425
reserving a physical register ($gp or $28) for that purpose.
This will completely eliminate loads that restore the value of $gp after every
function call, if the register allocator assigns a callee-saved register, or
eliminate unnecessary loads if it assigns a temporary register.
example:
.cpload $25 // set $gp.
...
.cprestore 16 // store $gp to stack slot 16($sp).
...
jalr $25 // function call. clobbers $gp.
lw $gp, 16($sp) // not emitted if callee-saved reg is chosen.
...
lw $2, 4($gp)
...
jalr $25 // function call.
lw $gp, 16($sp) // not emitted if $gp is not live after this instruction.
...
llvm-svn: 151402
Add support for a missed case when the symbols in a difference
expression are in the same section but not the same fragment.
rdar://10924681
llvm-svn: 151345
variable declaration as an argument because we want that address
anyhow for our debug information.
This seems to fix rdar://9965111; at least we have more debug
information than before and from reading the assembly it appears
to be the correct location.
llvm-svn: 151335
I'll let the buildbots determine the compile time improvements from this
change, but 464.h264ref has 5% faster codegen at -O2.
This patch does cause some assembly changes. Branch folding can make
different decisions about calls with dead return values.
CriticalAntiDepBreaker may choose different registers because its
liveness tracking is affected. MachineCopyPropagation may sometimes
leave a dead copy behind.
llvm-svn: 151331
The tied source operand of tMUL is the second source operand, not the
first like every other two-address thumb instruction. Special case it
in the size reduction pass to make sure we create the tMUL instruction
properly.
llvm-svn: 151315
For historical reasons, the Interpreter's external entries had the prefix "lle_X_" with C linkage, even for well-known entries in EE/Interpreter.
Now, at least on ToT, they are resolved via the FuncNames[] mapper.
Their symbols no longer need to be exported.
Clang r150128 has introduced the warning <"%0 has C-linkage specified, but returns user-defined type %1 which is incompatible with C">.
llvm-svn: 151312
Assuming that a single std::set node adds 3 control words, a bitvector
can store (3*8+4)*8=224 registers in the allocated memory of a single
element in the std::set (x86_64). Also we don't have to call malloc
for every register added.
llvm-svn: 151269
rdar://10873652
As part of this I updated the llvm-mc disassembler C API to always call the
SymbolLookUp callback even if there is no getOpInfo callback. If there is a
getOpInfo callback, it is tried first, and if it yields no information,
SymbolLookUp is then called. I also made the code more robust by
memset(3)'ing the LLVMOpInfo1 struct to zero before setting
SymbolicOp.Value for the call to getOpInfo. Also, don't use any values
from the LLVMOpInfo1 struct if getOpInfo returns 0, and don't use any of
the ReferenceType or ReferenceName values from SymbolLookUp if it
returns NULL. rdar://10873563 and rdar://10873683
For the X86 target also fixed bugs so the annotations get printed.
Also fixed a few places in the ARM target that were not producing symbolic
operands for some instructions. rdar://10878166
llvm-svn: 151267
Before register allocation, instructions can be moved across calls in
order to reduce register pressure. After register allocation, we don't
gain a lot by moving callee-saved defs across calls. In fact, since the
scheduler doesn't have a good idea how registers are used in the callee,
it can't really make good scheduling decisions.
This changes the schedule in two ways: 1. Latencies to call uses and
defs are no longer accounted for, causing some random shuffling around
calls. This isn't really a problem since those uses and defs are
inaccurate proxies for what happens inside the callee. They don't
represent registers used by the call instruction itself.
2. Instructions are no longer moved across calls. This didn't happen
very often, and the scheduling decision was made on dubious information
anyway.
As with any scheduling change, benchmark numbers shift around a bit,
but there is no positive or negative trend from this change.
This makes the post-ra scheduler 5% faster for ARM targets.
The secret motivation for this patch is the introduction of register
mask operands representing call clobbers. The most efficient way of
handling regmasks in ScheduleDAGInstrs is to model them as barriers for
physreg live ranges, but not for virtreg live ranges. That's fine
pre-ra, but post-ra it would have the same effect as this patch.
llvm-svn: 151265
it with memcpy. This also fixes a problem on big-endian hosts, where
addUnaligned would return different results depending on the alignment
of the data.
llvm-svn: 151247
value is zero. Instead of a cmov + op, issue a conditional op, e.g.:
cmp r9, r4
mov r4, #0
moveq r4, #1
orr lr, lr, r4
should be:
cmp r9, r4
orreq lr, lr, #1
That is, optimize (or x, (cmov 0, y, cond)) to (or.cond x, y). Similarly extend
this to xor as well as (and x, (cmov -1, y, cond)) => (and.cond x, y).
It's possible to extend this to ADD and SUB but I don't think they are common.
rdar://8659097
llvm-svn: 151224
The standard function epilog includes a .size directive, but ppc64 uses
an alternate local symbol to tag the actual start of each function.
Until recently, binutils accepted the .size directive as:
.size test1, .Ltmp0-test1
however, using this directive with recent binutils will result in the error:
.size expression for XXX does not evaluate to a constant
so we must use the label which actually tags the start of the function.
llvm-svn: 151200
Add some data structures to represent for loops. These will be
referenced during object processing to do any needed iteration and
instantiation.
Add foreach keyword support to the lexer.
Add a mode to indicate that we're parsing a foreach loop. This allows
the value parser to early-out when processing the foreach value list.
Add a routine to parse foreach iteration declarations. This is
separate from ParseDeclaration because the type of the named value
(the iterator) doesn't match the type of the initializer value (the
value list). It also needs to add two values to the foreach record:
the iterator and the value list.
Add parsing support for foreach.
Add the code to process foreach loops and create defs based
on iterator values.
Allow foreach loops to be matched at the top level.
When parsing an IDValue check if it is a foreach loop iterator for one
of the active loops. If so, return a VarInit for it.
Add Emacs keyword support for foreach.
Add VIM keyword support for foreach.
Add tests to check foreach operation.
Add TableGen documentation for foreach.
Support foreach with multiple objects.
Support non-braced foreach body with one object.
Do not require types for the foreach declaration. Assume the iterator
type from the iteration list element type.
llvm-svn: 151164
chip in r139383, and the PSP components of the triple are really
annoying to parse. Let's leave this chapter behind. There is no reason
to expect LLVM to see a PSP-related triple these days, and so no
reasonable motivation to support them.
It might be reasonable to prune a few of the older MIPS triple forms in
general, but as those at least cause no burden on parsing (they aren't
both a chip and an OS!), I'm happy to leave them in for now.
llvm-svn: 151156
Effect on SD scheduling and postRA scheduling:
Printing the DAG will display the nodes in top-down topological order.
This matches the order within the MBB and makes my life much easier in general.
Effect on misched:
We don't need to track virtual register uses at all. This is awesome.
I also intend to rely on the SUnit ID as a topo-sort index. So if A < B then we cannot have an edge B -> A.
llvm-svn: 151135
This makes RAFast 4% faster, and it gets rid of the dodgy DenseMap
iteration.
This also revealed that RAFast would sometimes dereference DenseMap
iterators after erasing other elements from the map. That does seem to
work in the current DenseMap implementation, but SparseSet doesn't allow
it.
llvm-svn: 151111
For objects that can be identified by small unsigned keys, SparseSet
provides constant time clear() and fast deterministic iteration. Insert,
erase, and find operations are typically faster than hash tables.
SparseSet is useful for keeping information about physical registers,
virtual registers, or numbered basic blocks.
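A minimal usage sketch (assuming keys are small dense unsigned integers,
as with physical register numbers):

#include "llvm/ADT/SparseSet.h"
using namespace llvm;

static void trackLiveRegs(unsigned NumPhysRegs) {
  SparseSet<unsigned> LiveRegs;
  LiveRegs.setUniverse(NumPhysRegs); // one allocation for the key space
  LiveRegs.insert(5u);
  if (LiveRegs.count(5u)) {
    // register 5 is live
  }
  LiveRegs.clear(); // constant time, independent of how much was inserted
}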
llvm-svn: 151110
This test case was way too strict, matching the entire assembly output.
Every non-trivial change to the ppc backend or -O0 pipeline required
the test to be updated.
It should be replaced with a test of the specific vaarg feature.
llvm-svn: 151105
bundles. This method takes a bundle start and an MI being bundled, and makes
the intervals for the MI's operands appear to start/end on the bundle start.
Also fixes some minor cosmetic issues (whitespace, naming convention) in the
HMEditor code.
llvm-svn: 151099
they'll be simple enough to simulate, and to reduce the chance we'll encounter
equal but different simple pointer constants.
This removes the symptoms from PR11352 but is not a full fix. A proper fix would
either require a guarantee that two constant objects we simulate are folded
when equal, or a different way of handling equal pointers (i.e., trying a
constantexpr icmp on them to see whether we know they're equal or non-equal or
unsure).
llvm-svn: 151093
This transformation is not safe in some pathological cases (signed icmp of pointers should be an
extremely rare thing, but it's valid IR!). Add an explanatory comment.
Kudos to Duncan for pointing out this edge case (and not giving up explaining it until I finally got it).
llvm-svn: 151055
They're private static methods but we can just make them static
functions in the implementation. It makes the implementations a touch
more wordy, but takes another chunk out of the header file.
Also, take the opportunity to switch the names to the new coding
conventions.
No functionality changed here.
llvm-svn: 151047
Passes after RegAlloc should be able to rely on MRI->getNumVirtRegs() == 0.
This makes sharing code for pre/postRA passes more robust.
Now, to check if a pass is running before the RA pipeline begins, use MRI->isSSA().
To check if a pass is running after the RA pipeline ends, use !MRI->getNumVirtRegs().
PEI resets virtual regs when it's done scavenging.
PTX will either have to provide its own PEI pass or assign physregs.
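In code, the two checks look like this small sketch:

#include "llvm/CodeGen/MachineRegisterInfo.h"
using namespace llvm;

// Before the RA pipeline begins, the function is still in SSA form.
static bool isPreRegAlloc(const MachineRegisterInfo &MRI) {
  return MRI.isSSA();
}

// After the RA pipeline ends, no virtual registers remain.
static bool isPostRegAlloc(const MachineRegisterInfo &MRI) {
  return MRI.getNumVirtRegs() == 0;
}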
llvm-svn: 151032
construction. Simplify its interface, implementation, and users
accordingly as there is no longer an 'uninitialized' state to check for.
Also fixes a bug lurking in the interface, as there was one method that
didn't correctly check for initialization.
llvm-svn: 151024
know where users will be added. Because of this, it cannot use
Builder.GetInsertPoint at all.
This patch
* removes the FIXME about adding the assert.
* adds a comment explaining why we don't have one.
* removes broken logic that only works for some callers and is not needed
since r150884.
* adds an assert to caller that would have caught the bug fixed by r150884.
llvm-svn: 151015
ecx = mov eax
al = mov ch
The second copy is not a nop because the sub-register index of ch in ecx
is not the same as that of al in eax.
Re-enabled machine-cp.
PR11940
llvm-svn: 151002
- Ignore pointer casts.
- Also expand GEPs that aren't constantexprs when they have one use or only constant indices.
- We now compile "&foo[i] - &foo[j]" into "i - j".
llvm-svn: 150961
the cast. If we do, we can end up with
inst1
--------------- < Insertion point
dbg inst
new inst
instead of the desired
inst1
new inst
--------------- < Insertion point
dbg inst
Another option would be for InsertNoopCastOfTo (or its callers) to move the
insertion point and we would end up with
inst1
dbg inst
new inst
--------------- < Insertion point
but that complicates the callers. This fixes PR12018 (and firefox's build).
llvm-svn: 150884
MRI keeps track of which physregs have been used. Make sure it gets
updated with all the regmask-clobbered registers.
Delete the closePhysRegsUsed() function which isn't necessary.
llvm-svn: 150830
metadata may still unwind, but only in ways that the ARC
optimizer doesn't need to consider. This permits more
aggressive optimization.
llvm-svn: 150829
any changes.
Internally this adds a private inner class, HMEditor, to LiveIntervals. HMEditor provides
an API for updating live intervals when code is moved or bundled.
llvm-svn: 150826
This caused miscompilations on out-of-tree targets, and possibly i386 as
well.
I'll find some other way of hoisting %rip-relative loads from loops
containing calls.
llvm-svn: 150816
Fix the type of eh_frame on Solaris so that Sun ld doesn't fail to combine them (thus making it impossible for the unwind library to find them and breaking exceptions).
llvm-svn: 150814
useful to represent a variable that is const in the source but can't be constant
in the IR because of a non-trivial constructor. If globalopt evaluates the
constructor, and there was an invariant.start with no matching invariant.end
possible, it will mark the global as constant afterwards.
llvm-svn: 150794
processor, due to the Atom scheduler producing an instruction sequence that is
different from that which is expected.
Patch by Michael Spencer!
llvm-svn: 150736
Call clobbers are now represented with register mask operands. The
regmask can easily represent the fact that xmm6 is call-preserved while
ymm6 isn't. This is automatically computed by TableGen from the
CalleeSavedRegs containing xmm6.
llvm-svn: 150709
Call instructions no longer have a list of 43 call-clobbered registers.
Instead, they get a single register mask operand with a bit vector of
call-preserved registers.
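A pass can query the mask directly; a minimal sketch:

#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Does any register mask operand on this call clobber PhysReg?
static bool callClobbers(const MachineInstr *MI, unsigned PhysReg) {
  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
    const MachineOperand &MO = MI->getOperand(i);
    if (MO.isRegMask() && MO.clobbersPhysReg(PhysReg))
      return true;
  }
  return false;
}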
This saves a lot of memory, 42 x 32 bytes = 1344 bytes per call
instruction, and it speeds up building call instructions because those
43 imp-def operands no longer need to be added to use-def lists. (And
removed and shifted and re-added for every explicit call operand).
Passes like LiveVariables, LiveIntervals, RAGreedy, PEI, and
BranchFolding are significantly faster because they can deal with call
clobbers in bulk.
Overall, clang -O2 is between 0% and 8% faster, uniformly distributed
depending on call density in the compiled code. Debug builds using
clang -O0 are 0% - 3% faster.
I have verified that this patch doesn't change the assembly generated
for the LLVM nightly test suite when building with -disable-copyprop
and -disable-branch-fold.
Branch folding behaves slightly differently in a few cases because call
instructions have different hash values now.
Copy propagation flushes its data structures when it crosses a register
mask operand. This causes it to leave a few dead copies behind, on the
order of 20 instructions across the entire nightly test suite, including
SPEC. Fixing this properly would require the pass to use different data
structures.
llvm-svn: 150638
The existing framework for postra scheduling is library local. We want to keep it that way. Soon we will have a more general MachineScheduler interface. At that time, various bits will be exposed to targets. In the meantime, the VLIWPacketizer wants to use ScheduleDAGInstrs directly, so it needs to be wrapped in a PIMPL to avoid exposing it to the target interface.
llvm-svn: 150633
method. This allows the target lowering code to not have to deal with MDNodes.
Also, avoid leaking memory like a sieve by not creating a global variable for
the image info section, but just emitting the code directly.
llvm-svn: 150624
Accomplished by moving the body of StringRef::edit_distance into
a separate function that accepts two ArrayRefs, and making
StringRef::edit_distance a wrapper around the new function.
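The resulting shape, as a sketch (assuming the generic helper is named
ComputeEditDistance and lives in ADT/edit_distance.h):

#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/edit_distance.h"
using namespace llvm;

// The generic core now works on any element type, not just char.
static unsigned distance(ArrayRef<int> A, ArrayRef<int> B) {
  return ComputeEditDistance(A, B);
}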
llvm-svn: 150621
The c'tor list is stored as a list of 'void ()*'s, so all of the functions are
bitcast to that. However, the dyn_cast doesn't automagically look through
bitcasts. Do that for it.
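A sketch of the idea (the actual fix may walk the ConstantExpr by hand):

#include "llvm/Function.h"
using namespace llvm;

// Each ctor-list entry is a 'void ()*' bitcast of the real function,
// so strip pointer casts before the dyn_cast.
static Function *getCtorFunction(Value *Entry) {
  return dyn_cast<Function>(Entry->stripPointerCasts());
}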
<rdar://problem/10813350>
llvm-svn: 150572
The llc command line options for enabling/disabling passes are local to CodeGen/Passes.cpp. This patch associates those options with standard pass IDs so they work regardless of how the target configures the passes.
A target has two ways of overriding standard passes:
1) Redefine the pass pipeline (override TargetPassConfig::add%Stage)
2) Replace or suppress individual passes with TargetPassConfig::substitutePass.
In both cases, the command line options associated with the pass override the target default.
For example, say a target wants to disable machine instruction scheduling by default:
- The target calls disablePass(MachineSchedulerID) but otherwise does not override any TargetPassConfig methods.
- Without any llc options, no scheduler is run.
- With -enable-misched, the standard machine scheduler is run and honors the -misched=... flag to select the scheduler variant, which may be used for performance evaluation or testing.
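A sketch of that scenario (class and constructor shape are hedged; only
disablePass and MachineSchedulerID come from the description above):

#include "llvm/CodeGen/Passes.h"
using namespace llvm;

class MyTargetPassConfig : public TargetPassConfig {
public:
  MyTargetPassConfig(TargetMachine *TM, PassManagerBase &PM)
    : TargetPassConfig(TM, PM) {
    // Off by default; -enable-misched still turns it back on.
    disablePass(MachineSchedulerID);
  }
};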
Sorry overridePass is ugly. I haven't thought of a better way without replacing the cl::opt framework. I hope to do that one day...
I haven't figured out why CodeGen uses char& for pass IDs. AnalysisID is much easier to use and less bug prone. I'm using it wherever I can for internal implementation. Maybe later we can change the global pass ID definitions as well.
llvm-svn: 150563
Pretend that regmask interference ends at the 'dead' slot, even when
there is other interference ending at the 'reg' slot of the same
instruction.
llvm-svn: 150531