Move the complex address expression out of DIVariable and into an extra
argument of the llvm.dbg.declare/llvm.dbg.value intrinsics.
Previously, DIVariable was a variable-length field that had an optional
reference to a Metadata array consisting of a variable number of
complex address expressions. In the case of OpPiece expressions this
wastes a lot of storage in the IR, because when an aggregate type is, e.g.,
SROA'd into all of its n individual members, the IR will contain n copies
of the DIVariable, all alike, differing only in the complex address
reference at the end.
By making the complex address into an extra argument of the
dbg.value/dbg.declare intrinsics, all of the pieces can reference the
same variable and the complex address expressions can be uniqued across
the CU, too.
Down the road, this will allow us to move other flags, such as
"indirection" out of the DIVariable, too.
The new intrinsics look like this:
declare void @llvm.dbg.declare(metadata %storage, metadata %var, metadata %expr)
declare void @llvm.dbg.value(metadata %storage, i64 %offset, metadata %var, metadata %expr)
This patch adds a new LLVM-local tag to DIExpressions, so we can detect
and pretty-print DIExpression metadata nodes.
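For illustration, a hedged sketch of how a pass might consume the new form;
DbgValueInst is LLVM's existing intrinsic wrapper, while the getExpression()
accessor is an assumption matching the %expr argument:

  #include "llvm/IR/Function.h"
  #include "llvm/IR/IntrinsicInst.h"
  using namespace llvm;

  static void visitDebugValues(Function &F) {
    for (BasicBlock &BB : F)
      for (Instruction &I : BB)
        if (auto *DVI = dyn_cast<DbgValueInst>(&I)) {
          Value *Storage = DVI->getValue();  // %storage
          auto *Var = DVI->getVariable();    // %var
          auto *Expr = DVI->getExpression(); // %expr (assumed accessor)
          (void)Storage; (void)Var; (void)Expr;
        }
  }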
What this patch doesn't do:
This patch does not touch the "Indirect" field in DIVariable; moving
that into the expression would be a natural next step.
http://reviews.llvm.org/D4919
rdar://problem/17994491
Thanks to dblaikie and dexonsmith for reviewing this patch!
llvm-svn: 218778
New function to erase a machine instruction and mark DBG_VALUE
for removal. A DBG_VALUE is marked for removal when it references
an operand defined in the instruction.
Use the new function to clean up code in the dead machine instruction
removal pass.
llvm-svn: 215580
This commit adds scoped noalias metadata. The primary motivations for this
feature are:
1. To preserve noalias function attribute information when inlining
2. To provide the ability to model block-scope C99 restrict pointers
Neither of these two abilities is added here, only the necessary
infrastructure. In fact, there should be no change to existing functionality,
only the addition of new features. The logic that converts noalias function
parameters into this metadata during inlining will come in a follow-up commit.
What is added here is the ability to generally specify noalias memory-access
sets. Regarding the metadata, alias-analysis scopes are defined similarly to
TBAA nodes:
!scope0 = metadata !{ metadata !"scope of foo()" }
!scope1 = metadata !{ metadata !"scope 1", metadata !scope0 }
!scope2 = metadata !{ metadata !"scope 2", metadata !scope0 }
!scope3 = metadata !{ metadata !"scope 2.1", metadata !scope2 }
!scope4 = metadata !{ metadata !"scope 2.2", metadata !scope2 }
Loads and stores can be tagged with an alias-analysis scope, and also, with a
noalias tag for a specific scope:
... = load %ptr1, !alias.scope !{ !scope1 }
... = load %ptr2, !alias.scope !{ !scope1, !scope2 }, !noalias !{ !scope1 }
When evaluating an aliasing query, if one of the instructions is associated
with an alias.scope id that is identical to the noalias scope associated with
the other instruction, or is a descendant (in the scope hierarchy) of the
noalias scope associated with the other instruction, then the two memory
accesses are assumed not to alias.
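A minimal sketch of the descent test implied by this rule (helper name
hypothetical; assumes the parent scope is the second operand of a scope
node, as in !scope3 -> !scope2 above):

  #include "llvm/IR/Metadata.h"

  // Returns true if Scope is identical to Ancestor or is a descendant of
  // it in the scope hierarchy. Operand 0 holds the name string; operand 1,
  // when present, is taken to be the parent scope.
  static bool scopeDescendsFrom(const llvm::MDNode *Scope,
                                const llvm::MDNode *Ancestor) {
    while (Scope) {
      if (Scope == Ancestor)
        return true;
      Scope = Scope->getNumOperands() > 1
                  ? llvm::dyn_cast<llvm::MDNode>(Scope->getOperand(1))
                  : nullptr;
    }
    return false;
  }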
Note that if the first element of the scope metadata is a string, then it can
be combined across functions and translation units. The string can be replaced
by a self-reference to create globally unique scope identifiers.
[Note: This overview is slightly stylized, since the metadata nodes really need
to just be numbers (!0 instead of !scope0), and the scope lists are also global
unnamed metadata.]
Existing noalias metadata in a callee is "cloned" for use by the inlined code.
This is necessary because the aliasing scopes are unique to each call site
(because of possible control dependencies on the aliasing properties). For
example, consider a function foo(noalias a, noalias b) { *a = *b; } that gets
inlined into bar() { ... if (...) foo(a1, b1); ... if (...) foo(a2, b2); } --
just because we know that a1 does not alias with b1 at the first call site,
and a2 does not alias with b2 at the second call site, we cannot let the
metadata from inlining these functions imply that a1 does not alias with b2.
llvm-svn: 213864
In order to enable the preservation of noalias function parameter information
after inlining, and the representation of block-level __restrict__ pointer
information (etc.), additional kinds of aliasing metadata will be introduced.
This metadata needs to be carried around in AliasAnalysis::Location objects
(and MMOs at the SDAG level), and so we need to generalize the current scheme
(which is hard-coded to just one TBAA MDNode*).
This commit introduces only the necessary refactoring to allow for the
introduction of other aliasing metadata types, but does not actually introduce
any (that will come in a follow-up commit). What it does introduce is a new
AAMDNodes structure to hold all of the aliasing metadata nodes associated with
a particular memory-accessing instruction, and uses that structure instead of
the raw MDNode* in AliasAnalysis::Location, etc.
No functionality change intended.
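For orientation, a minimal sketch of the shape of such a structure
(illustrative only; this commit carries just the TBAA node, the
scoped-noalias fields arrive with the follow-up):

  #include "llvm/IR/Metadata.h"

  // Aggregates the per-access aliasing metadata that used to be a single
  // raw TBAA MDNode*.
  struct AAMDNodesSketch {
    llvm::MDNode *TBAA = nullptr;    // the existing !tbaa tag
    llvm::MDNode *Scope = nullptr;   // !alias.scope (follow-up commit)
    llvm::MDNode *NoAlias = nullptr; // !noalias (follow-up commit)
  };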
llvm-svn: 213859
1) Fix for printing debug locations for absolute paths.
2) Location printing is moved into public method DebugLoc::print() to avoid re-inventing the wheel.
Differential Revision: http://reviews.llvm.org/D3513
llvm-svn: 208177
- Adds support for inserting vzerouppers before tail-calls.
This is enabled implicitly by having MachineInstr::copyImplicitOps preserve
regmask operands, which allows VZeroUpperInserter to see where tail-calls use
vector registers.
- Fixes a bug that caused the previous version of this optimization to miss some
vzeroupper insertion points in loops. (Loops-with-vector-code that followed
loops-without-vector-code were mistakenly overlooked by the previous version).
- New algorithm never revisits instructions.
Fixes <rdar://problem/16228798>
llvm-svn: 204021
The old system was fairly convoluted:
* A temporary label was created.
* A single PROLOG_LABEL was created with it.
* A few MCCFIInstructions were created with the same label.
The semantics were that the cfi instructions were mapped to the PROLOG_LABEL
via the temporary label. The output position was that of the PROLOG_LABEL.
The temporary label itself was used only for doing the mapping.
The new CFI_INSTRUCTION has a 1:1 mapping to MCCFIInstructions and points to
one by holding an index into the CFI instructions of this function.
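Emitting a frame directive then looks roughly like this (a sketch; MMI, MBB,
MBBI, DL and TII are assumed to be in scope, and the particular directive is
arbitrary):

  // Register the MCCFIInstruction with the function, then point a
  // CFI_INSTRUCTION at it by index.
  unsigned CFIIndex =
      MMI.addFrameInst(MCCFIInstruction::createDefCfaOffset(nullptr, -8));
  BuildMI(MBB, MBBI, DL, TII.get(TargetOpcode::CFI_INSTRUCTION))
      .addCFIIndex(CFIIndex);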
I did consider removing MMI.getFrameInstructions completely and having
CFI_INSTRUCTION own a MCCFIInstruction, but MCCFIInstructions have
non-trivial constructors and destructors and are somewhat big, so this
setup is probably better.
The net result is that we don't create temporary labels that are never used.
llvm-svn: 203204
operand into the Value interface just like the core print method is.
That gives a more consistent organization to the IR printing interfaces
-- they are all attached to the IR objects themselves. Also, update all
the users.
This removes the 'Writer.h' header which contained only a single function
declaration.
llvm-svn: 198836
are part of the core IR library in order to support dumping and other
basic functionality.
Rename the 'Assembly' include directory to 'AsmParser' to match the
library name and the only functionality left there -- printing has been
in the core IR library for quite some time.
Update all of the #includes to match.
All of this started because I wanted to have the layering in good shape
before I started adding support for printing LLVM IR using the new pass
infrastructure, and commandline support for the new pass infrastructure.
llvm-svn: 198688
The greedy register allocator tries to split a live-range around each
instruction where it is used or defined to relax the constraints on the entire
live-range (this is a last chance split before falling back to spill).
The goal is to have a big live-range that is unconstrained (i.e., that can use
the largest legal register class) and several small local live-ranges that
carry the constraints implied by each instruction.
E.g.,
Let csti be the constraints on operation i.
V1=
op1 V1(cst1)
op2 V1(cst2)
V1's live-range is constrained to the intersection of cst1 and cst2.
tryInstructionSplit relaxes those constraints by aggressively splitting each
def/use point:
V1=
V2 = V1
V3 = V2
op1 V3(cst1)
V4 = V2
op2 V4(cst2)
Because of how the coalescer infrastructure works, each new variable (V3, V4)
that is alive at the same time as V1 (or its copy, here V2) interferes with V1.
Thus, we end up with an uncoalescable copy for each split point.
To make tryInstructionSplit less aggressive, we check if the split point
actually relaxes the constraints on the whole live-range. If it does not, we do
not insert it.
Indeed, it will not help the global allocation problem:
- V1 will have the same constraints.
- V1 will have the same interference, plus possibly the newly added split
variable VS.
- VS will produce an uncoalescable copy if alive at the same time as V1.
<rdar://problem/15570057>
llvm-svn: 198369
This optional register liveness analysis pass can be enabled with either
-enable-stackmap-liveness, -enable-patchpoint-liveness, or both. The pass
traverses each basic block in a machine function. For each basic block the
instructions are processed in reverse order, and if a patchpoint or stackmap
instruction is encountered the current live-out register set is encoded as a
register mask and attached to the instruction.
Later on during stackmap generation the live-out register mask is processed and
also emitted as part of the stackmap.
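In outline, the scan looks like this (a hypothetical sketch, not the pass
itself; it assumes LLVM's LivePhysRegs utility, and recordLiveOutMask stands
in for the regmask-encoding step):

  // Walk the block bottom-up, keeping the live set current, and snapshot
  // it whenever a stackmap or patchpoint is reached.
  LivePhysRegs LiveRegs(*TRI);
  LiveRegs.addLiveOuts(MBB);
  for (MachineInstr &MI : llvm::reverse(MBB)) {
    if (MI.getOpcode() == TargetOpcode::STACKMAP ||
        MI.getOpcode() == TargetOpcode::PATCHPOINT)
      recordLiveOutMask(MI, LiveRegs); // hypothetical: attach as regmask
    LiveRegs.stepBackward(MI);
  }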
This information is optional and intended for optimization purposes only. It
will enable a client of the stackmap to reason about which registers it can
use and which registers need to be preserved.
Reviewed by Andy
llvm-svn: 197317
This reverts commit r197254.
This was an accidental merge of Juergen's patch. It will be checked in
shortly, but wasn't meant to go in quite yet.
Conflicts:
include/llvm/CodeGen/StackMaps.h
lib/CodeGen/StackMaps.cpp
test/CodeGen/X86/stackmap-liveness.ll
llvm-svn: 197260
No functionality change.
It should suffice to check the type of a debug info metadata, instead of
calling Verify. For cases where we know the type of a DI metadata, use
assert.
Also update testing cases to make them conform to the format of DI classes.
llvm-svn: 185135
All targets are now adding return value registers as implicit uses on
return instructions, and there is no longer a need for the live out
lists.
llvm-svn: 174417
- recognize the string "{memory}" in MI generation
- mark the instruction as mayLoad/mayStore when there's a memory clobber
constraint.
PR14859.
Patch by Krzysztof Parzyszek
llvm-svn: 172228
When calling hasProperty() on an instruction inside a bundle, it should
always behave as if IgnoreBundle was passed, and just return properties
for the current instruction.
Only attempt to aggregate bundle properties when asked about the bundle
header.
The assertion fires on existing ARM test cases without this fix.
llvm-svn: 172082
It is possible to build MI bundles that don't begin with a BUNDLE
header. Add support for such bundles, counting all instructions inside
the bundle.
llvm-svn: 171985
The series of patches leading up to this one makes llc -O0 run 8% faster.
When deallocating a MachineFunction, there is no need to visit all
MachineInstr and MachineOperand objects to deallocate them. All their
memory come from a BumpPtrAllocator that is about to be purged, and they
have empty destructors anyway.
This only applies when deallocating the MachineFunction.
DeleteMachineInstr() should still be used to recycle MI memory during
the codegen passes.
Remove the LeakDetector support for MachineInstr. I've never seen it
used before, and now it definitely doesn't work. With this patch, leaked
MachineInstrs would be much less of a problem since all of their memory
will be reclaimed by ~MachineFunction().
llvm-svn: 171599
Instead of an std::vector<MachineOperand>, use MachineOperand arrays
from an ArrayRecycler living in MachineFunction.
This has several advantages:
- MachineInstr now has a trivial destructor, making it possible to
delete them in batches when destroying MachineFunction. This will be
enabled in a later patch.
- Bypassing malloc() and free() can be faster, depending on the system
library.
- MachineInstr objects and their operands are allocated from the same
BumpPtrAllocator, so they will usually be next to each other in
memory, providing better locality of reference.
- Reduce MachineInstr footprint. A std::vector is 24 bytes; the new
operand array representation uses only 8+4+1 bytes in MachineInstr.
- Better control over operand array reallocations. In the old
representation, the use-def chains would be reordered whenever a
std::vector reached its capacity. The new implementation never changes
the use-def chain order.
Note that some decisions in the code generator depend on the use-def
chain orders, so this patch may cause different assembly to be produced
in a few cases.
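The allocation pattern, compressed into a sketch (assuming LLVM's
ArrayRecycler interface; capacity growth and the MachineFunction plumbing
are elided):

  #include "llvm/CodeGen/MachineOperand.h"
  #include "llvm/Support/Allocator.h"
  #include "llvm/Support/ArrayRecycler.h"
  using namespace llvm;

  typedef ArrayRecycler<MachineOperand>::Capacity OpCapacity;

  MachineOperand *allocateOps(ArrayRecycler<MachineOperand> &Recycler,
                              BumpPtrAllocator &Alloc, unsigned NumOps) {
    // Rounds NumOps up to a recyclable size class.
    OpCapacity Cap = OpCapacity::get(NumOps);
    return Recycler.allocate(Cap, Alloc);
  }
  // Later: Recycler.deallocate(OpCapacity::get(NumOps), Ops) puts the
  // array on a free list; ~BumpPtrAllocator reclaims everything at once.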
llvm-svn: 171598
into their new header subdirectory: include/llvm/IR. This matches the
directory structure of lib, and begins to correct a long standing point
of file layout clutter in LLVM.
There are still more header files to move here, but I wanted to handle
them in separate commits to make tracking what files make sense at each
layer easier.
The only really questionable files here are the target intrinsic
tablegen files. But that's a battle I'd rather not fight today.
I've updated both CMake and Makefile build systems (I think, and my
tests think, but I may have missed something).
I've also re-sorted the includes throughout the project. I'll be
committing updates to Clang, DragonEgg, and Polly momentarily.
llvm-svn: 171366
Instructions that are inserted in a basic block can still be decorated
with addOperand(MO).
Make the two-argument addOperand() function contain the actual
implementation. This function will now always have a valid MF reference
that it can use for memory allocation.
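A sketch of the resulting usage (MI, MF and Reg assumed in scope; the
operand itself is arbitrary):

  // Explicit MF reference: works even for a dangling instruction.
  MI->addOperand(MF, MachineOperand::CreateReg(Reg, /*isDef=*/false));
  // Convenience form: valid once MI is inserted in a basic block, since
  // the parent MachineFunction can then be found through the parent MBB.
  MI->addOperand(MachineOperand::CreateReg(Reg, /*isDef=*/false));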
llvm-svn: 170798
This function is often used to decorate dangling instructions, so a
context reference is required to allocate memory for the operands.
Also add a corresponding MachineInstrBuilder method.
llvm-svn: 170797
The bundle_iterator::operator++ function now doesn't need to dig out the
basic block and check against end(). It can use the isBundledWithSucc()
flag to find the last bundled instruction safely.
Similarly, MachineInstr::isBundled() no longer needs to look at
iterators etc. It only has to look at flags.
llvm-svn: 170473
The bundle-related MI flags need to be kept in sync with the neighboring
instructions. Don't allow the bulk flag-setting setFlags() function to
change them.
Also don't copy MI flags when cloning an instruction. The clone's bundle
flags will be set when it is explicitly inserted into a bundle.
llvm-svn: 170459
Most code is oblivious to bundles and uses the MBB::iterator which only
visits whole bundles. MBB::erase() operates on whole bundles at a time
as before.
MBB::remove() now refuses to remove bundled instructions. It is not safe
to remove all instructions in a bundle without deleting them since there
is no way of returning pointers to all the removed instructions.
MBB::remove_instr() and MBB::erase_instr() will now update bundle flags
correctly, lifting individual instructions out of bundles while leaving
the remaining bundle intact.
The MachineInstr convenience functions are updated so that:
- eraseFromParent() erases a whole bundle, as before.
- eraseFromBundle() erases a single instruction, leaving the rest of its bundle.
- removeFromParent() refuses to operate on bundled instructions.
- removeFromBundle() lifts a single instruction out of its bundle.
These functions will no longer accidentally split or coalesce bundles -
bundle flags are updated to preserve the existing bundling, and explicit
bundleWith* / unbundleFrom* functions should be used to change the
instruction bundling.
This API update is still a work in progress. I am going to update APIs
first so they maintain bundle flags automatically when possible. Then
I'll add stricter verification of the bundle flags.
llvm-svn: 170384
This is still a work in progress. The purpose is to make bundling and
unbundling operations explicit, and to catch errors where bundles are
broken or created inadvertently.
The old IsInsideBundle flag is replaced by two MI flags: BundledPred
which has the same meaning as IsInsideBundle, and BundledSucc which is
set on instructions that are bundled with a successor. Having two flags
provides redundancy to detect when a bundle is inadvertently torn by a
splice() or insert(), and it makes it possible to write bundle iterators
that don't need to peek at adjacent instructions.
The new flags can't be manipulated directly (once setIsInsideBundle is
gone). Instead there are MI functions to make and break bundle bonds.
The setIsInsideBundle function will be removed in a future commit. It
should be replaced by bundleWithPred().
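A sketch of the intended replacement API (names as described above):

  // Make and break bundle bonds explicitly instead of poking flags:
  MI->bundleWithPred();              // set BundledPred here, BundledSucc
                                     // on the preceding instruction
  MI->unbundleFromSucc();            // tear the bond to the successor
  bool Inside = MI->isBundledWithPred(); // the old IsInsideBundle meaning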
llvm-svn: 169583
A MachineInstr can only ever be constructed by CreateMachineInstr() and
CloneMachineInstr(), and those factories don't use the removed
constructors.
llvm-svn: 169395
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.
Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]
llvm-svn: 169131
- BlockAddress has no support for the BA + offset form and there is no way to
propagate that offset into a machine operand;
- Add BA + offset support and a new interface 'getTargetBlockAddress' to
simplify target block address forming;
- All targets are modified to use the new interface and the X86 backend is
enhanced to support BA + offset addressing.
llvm-svn: 163743
The MachineOperand::TiedTo field was maintained, but not used.
This patch enables it in isRegTiedToDefOperand() and
isRegTiedToUseOperand(), which are the actual functions used by the
register allocator.
llvm-svn: 163153
After much agonizing, use a full 4 bits of precious MachineOperand space
to encode this. This uses existing padding, and doesn't grow
MachineOperand beyond its current 32 bytes.
This allows tied defs among the first 15 operands on a normal
instruction, just like the current MCInstrDesc constraint encoding.
Inline assembly needs to be able to tie more than the first 15 operands,
and gets special treatment.
Tied uses can appear beyond 15 operands, as long as they are tied to a
def that's in range.
llvm-svn: 163151
Manage tied operands entirely internally to MachineInstr. This makes it
possible to change the representation of tied operands, as I will do
shortly.
The constraint that tied uses and defs must be in the same order was too
restrictive.
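Clients now go through accessors instead of reading the encoding; e.g.
(a sketch, with MI and UseIdx assumed in scope):

  if (MI->getOperand(UseIdx).isTied()) {
    // Representation-independent lookup of the operand tied to UseIdx.
    unsigned DefIdx = MI->findTiedOperandIdx(UseIdx);
    MachineOperand &DefMO = MI->getOperand(DefIdx);
    (void)DefMO;
  }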
llvm-svn: 163021
When a MachineInstr is constructed, its implicit operands are added
first, then the explicit operands are inserted before the implicits.
MCInstrDesc has operand flags like early clobber and operand ties that
apply to the explicit operands.
Don't look at those flags when the implicit operands are first added in
the explicit operands' positions.
llvm-svn: 162910
Ordered memory operations are more constrained than volatile loads and
stores because they must be ordered with respect to all other memory
operations.
llvm-svn: 162861
It is technically allowed to move a normal load across a volatile load,
but probably not a good idea.
It is not allowed to move a load across an atomic load with
Ordering > Monotonic, and we model those with MOVolatile as well.
I recently removed the mayStore flag from atomic load instructions, so
they don't need a pseudo-opcode. This patch makes up for the difference.
llvm-svn: 162857
The isTied bit is set automatically when a tied use is added and
MCInstrDesc indicates a tied operand. The tie is broken when one of the
tied operands is removed.
llvm-svn: 162814
While in SSA form, a MachineInstr can have pairs of tied defs and uses.
The tied operands are used to represent read-modify-write operands that
must be assigned the same physical register.
Previously, tied operand pairs were computed from fixed MCInstrDesc
fields, or by using black magic on inline assembly instructions.
The isTied flag makes it possible to add tied operands to any
instruction while getting rid of (some of) the inlineasm magic.
Tied operands on normal instructions are needed to represent predicated
individual instructions in SSA form. An extra <tied,imp-use> operand is
required to represent the output value when the instruction predicate is
false.
Adding a predicate to:
%vreg0<def> = ADD %vreg1, %vreg2
Will look like:
%vreg0<tied,def> = ADD %vreg1, %vreg2, pred:3, %vreg7<tied,imp-use>
The virtual register %vreg7 is the value given to %vreg0 when the
predicate is false. It will be assigned the same physreg as %vreg0.
This commit adds the isTied flag and sets it based on MCInstrDesc when
building an instruction. The flag is not used for anything yet.
llvm-svn: 162774
Register operands are manipulated by a lot of target-independent code,
and it is not always possible to preserve target flags. That means it is
not safe to use target flags on register operands.
None of the targets in the tree are using register operand target flags.
External targets should be using immediate operands to annotate
instructions with operand modifiers.
llvm-svn: 162770
Register MachineOperands are kept in linked lists accessible via MRI's
reg_iterator interfaces. The linked list management was handled partly
by MachineOperand methods, partly by MRI methods.
Move all of the list management into MRI, delete
MO::AddRegOperandToRegInfo() and MO::RemoveRegOperandFromRegInfo().
Be more explicit about handling the cases where an MRI pointer isn't
available.
llvm-svn: 161632
A target index operand looks a lot like a constant pool reference, but
it is completely target-defined. It contains the 8-bit TargetFlags, a
32-bit index, and a 64-bit offset. It is preserved by all code generator
passes.
TargetIndex operands can be used to carry target-specific information in
cases where immediate operands won't suffice.
llvm-svn: 161441
hash_value overload for MachineOperands. This addresses a FIXME
sufficient for me to remove it, and cleans up the code nicely too.
The important changes to the hashing logic:
- TargetFlags are now included in all of the hashes. These were completely
missed.
- Register operands have their subregisters and whether they are a def
included in the hash.
- We now actually hash all of the operand types. Previously, many
operand types were simply *dropped on the floor*. For example:
- Floating point immediates
- Large integer immediates (>64-bit)
- External globals!
- Register masks
- Metadata operands
- It removes the offset from the block-address hash; I'm a bit
suspicious of this, but isIdenticalTo doesn't consider the offset for
block addresses.
Any patterns involving these entities could have triggered extreme
slowdowns in MachineCSE or PHIElimination. Let me know if there are PRs
you think might be closed now... I'm looking myself, but I may miss
them.
llvm-svn: 159743
broken. This patch fixes the superficial problems which led to the
intractably slow compile times reported in PR13225.
The specific issue is that we were failing to include the *offset* of
a global variable in the hash code. Oops. This would in turn cause all
MIs which were only distinguishable by operating on different
offsets of a global variable to produce identical hash values. In
some of the test cases attached to the PR I saw hash table activity
where there were O(1000) probes-per-lookup *on average*. A very few
entries were responsible for most of these probes.
There is still quite a bit more to do here. The ad-hoc layering of data
in MachineOperands makes them *extremely* brittle to hash correctly.
We're missing quite a few other cases; the only ones I've fixed here are
the specific MO types which were allowed through the assert() in
getOffset().
llvm-svn: 159741
Also allow trailing register mask operands on both non-variadic
MachineSDNodes and MachineInstrs.
The extra physreg RegisterSDNode operands are added to the MI as
<imp-use> operands. This makes it possible to have non-variadic call
instructions.
Call and return instructions really are non-variadic, the argument
registers should only be used implicitly - they are not part of the
encoding.
llvm-svn: 159727
include/llvm/Analysis/DebugInfo.h to include/llvm/DebugInfo.h.
The reasoning is because the DebugInfo module is simply an interface to the
debug info MDNodes and has nothing to do with analysis.
llvm-svn: 159312
No functional change intended.
Sorry for the churn. The iterator classes are supposed to help avoid
giant commits like this one in the future. The TableGen-produced
register lists are getting quite large, and it may be necessary to
change the table representation.
This makes it possible to do so without changing all clients (again).
llvm-svn: 157854
The getPointerRegClass() hook can return register classes that depend on
the calling convention of the current function (ptr_rc_tailcall).
So far, we have been able to infer the calling convention from the
subtarget alone, but as we add support for multiple calling conventions
per target, that no longer works.
Patch by Yiannis Tsiouris!
llvm-svn: 156328
The <undef> flag on a def operand only applies to partial register
redefinitions. Only print the flag when relevant, and print it as
<def,read-undef> to make it clearer what it means.
llvm-svn: 155239
This is the CodeGen equivalent of r153747. I tested that there is no
noticeable performance difference with any combination of -O0/-O2/-g when
compiling gcc as a single compilation unit.
llvm-svn: 153817
This one is particularly annoying because the hashing algorithm is
highly specialized, with a strange "equivalence" definition that subsets
the fields involved.
Still, this looks at the exact same set of data as the old code, but
without bitwise or-ing over parts of it and other mixing badness. No
functionality changed here. I've left a substantial fixme about the fact
that there is a cleaner and more principled way to do this, but it
requires making the equality definition actual stable for particular
types...
llvm-svn: 152218
This allows the function to be inlined, and makes it suitable for use in
getInstructionIndex().
Also provide a const version. C++ is great for touch typing practice.
llvm-svn: 151782
Only accept register masks when looking for an 'overlapping' def. When
Overlap is not set, the function searches for a proper definition of
Reg.
This means MI->modifiesRegister() considers register masks, but
MI->definesRegister() doesn't.
llvm-svn: 150529
Make them accessible through MCInstrInfo. They are only used for debugging purposes so this doesn't
have an impact on performance. X86MCTargetDesc.o goes from 630K to 461K on x86_64.
llvm-svn: 150245
Calls that use register mask operands don't have implicit defs for
returned values. The register mask operand handles the call clobber,
but it always behaves like a set of dead defs.
Add live implicit defs for any implicitly defined physregs that are
actually used.
llvm-svn: 149715
SelectionDAG has 4 different ways of passing physreg defs to users.
Collect all of the uses at the same time, and pass all of them to
MI->setPhysRegsDeadExcept() to mark the remaining defs dead.
The setPhysRegsDeadExcept() function will soon add the required
implicit-defs to instructions with register mask operands.
llvm-svn: 149708
The live range of the source register may be extended when a redundant
copy is eliminated. Make sure any kill flags between the two copies are
cleared.
This fixes PR11765.
llvm-svn: 149069
Register masks will be used as a compact representation of large clobber
lists. Currently, an x86 call instruction has some 40 operands
representing call-clobbered registers. That's more than 1kB of useless
operands per call site.
A register mask operand references a bit mask of call-preserved
registers, everything else is clobbered. The bit mask will typically
come from TargetRegisterInfo::getCallPreservedMask().
By abandoning ImplicitDefs for call-clobbered registers, it also becomes
possible to share call instruction descriptions between calling
conventions, and we can get rid of the WINCALL* instructions.
This patch introduces the new operand kind. Future patches will add
RegMask support to target-independent passes before finally the fixed
clobber lists can be removed from call instruction descriptions.
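Interpreting the new operand kind might look like this (a sketch; the
operands() range accessor and surrounding variables are assumed, and
handleClobber is a hypothetical follow-up action):

  // Register mask semantics: a set bit preserves the register, a clear
  // bit clobbers it; clobbersPhysReg() tests the bit for Reg.
  for (const MachineOperand &MO : MI->operands())
    if (MO.isRegMask() &&
        MachineOperand::clobbersPhysReg(MO.getRegMask(), Reg))
      handleClobber(Reg);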
llvm-svn: 148250
of several newly un-defaulted switches. This also helps optimizers
(including LLVM's) recognize that every case is covered, and we should
assume as much.
llvm-svn: 147861
to finalize MI bundles (i.e., adding a BUNDLE instruction and computing the
register def and use lists of the BUNDLE instruction) and a pass to unpack
bundles.
- Teach more of the MachineBasicBlock and MachineInstr methods to be bundle
aware.
- Switch the Thumb2 IT block to MI bundles and delete the hazard recognizer
hack to prevent IT blocks from being broken apart.
llvm-svn: 146542
clients to decide whether to look inside bundled instructions and whether
the query should return true if any / all bundled instructions have the
queried property.
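For example, a sketch of the three query modes (enum names as in
MachineInstr::QueryType; MI assumed in scope):

  bool Self = MI->mayLoad(MachineInstr::IgnoreBundle); // this MI only
  bool Any  = MI->mayLoad(MachineInstr::AnyInBundle);  // any MI in bundle
  bool All  = MI->isPredicable(MachineInstr::AllInBundle); // all must agree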
llvm-svn: 146168
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.
For properties like mayLoad / mayStore, look into the bundle: if any of the
bundled instructions has the property, it returns true.
For properties like isPredicable, only return true if *all* of the bundled
instructions have the property.
For properties like canFoldAsLoad, isCompare, conservatively return false for
bundles.
llvm-svn: 146026
This flag is used when bundling machine instructions. It indicates
whether the operand reads a value defined inside or outside its bundle.
llvm-svn: 145997
1. Added opcode BUNDLE
2. Taught MachineInstr class to deal with bundled MIs
3. Changed MachineBasicBlock iterator to skip over bundled MIs; added an iterator to walk all the MIs
4. Taught MachineBasicBlock methods about bundled MIs
llvm-svn: 145975
When this field is true it means that the load is from constant (run-time or
compile-time) memory, and so can be hoisted from loops or moved around other
memory accesses.
llvm-svn: 144100
Most instructions have some requirements for their register operands.
Usually, this is expressed as register class constraints in the
MCInstrDesc, but for inline assembly the constraints are encoded in the
flag words.
llvm-svn: 141835
The inline asm operand constraint is initially encoded in the virtual
register for the operand, but that register class may change during
coalescing, and the original constraint is lost.
Encode the original register class as part of the flag word for each
inline asm operand. This makes it possible to recover the actual
constraint required by inline asm, just like we can for normal
instructions.
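Recovering the constraint later might look like this (a sketch; the
InlineAsm flag helpers exist in the tree, while Flag and TRI are assumed
to be in scope):

  unsigned RCID = 0;
  if (InlineAsm::hasRegClassConstraint(Flag, RCID)) {
    // The flag word now pins the operand to its original register class,
    // surviving any coalescing of the virtual register.
    const TargetRegisterClass *RC = TRI->getRegClass(RCID);
    (void)RC;
  }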
llvm-svn: 141833
The function needs to scan the implicit operands anyway, so no
performance is won by caching the number of implicit operands added to
an instruction.
This also fixes a bug when adding operands after an implicit operand has
been added manually. The NumImplicitOps count wasn't kept up to date.
MachineInstr::addOperand() will now consistently place all explicit
operands before all the implicit operands, regardless of the order they
are added. It is possible to change an MI opcode and add additional
explicit operands. They will be inserted before any existing implicit
operands.
The only exception is inline asm instructions where operands are never
reordered. This is because of a hack that marks explicit clobber regs
on inline asm as <implicit-def> to please the fast register allocator.
This hack can go away when InstrEmitter and FastIsel can add exact
<dead> flags to physreg defs.
llvm-svn: 140744
Normally, a partial register def is treated as reading the
super-register unless it also defines the full register like this:
%vreg110:sub_32bit<def> = COPY %vreg77:sub_32bit, %vreg110<imp-def>
This patch also uses the <undef> flag on partial defs to recognize
non-reading operands:
%vreg110:sub_32bit<def,undef> = COPY %vreg77:sub_32bit
This fixes a subtle bug in RegisterCoalescer where LIS->shrinkToUses
would treat a coalesced copy as still reading the register, extending
the live range artificially.
My test case only works when I disable DCE so a dead copy is left for
RegisterCoalescer, so I am not including it.
<rdar://problem/9967101>
llvm-svn: 138018
DBG_VALUE 3.310000e+02, 0, !"ds"; dbg:sse.stepfft.c:138:18 @[ sse.stepfft.c:32:10 ]
DBG_VALUE 3.310000e+02, 0, !"ds"; dbg:sse.stepfft.c:138:18 @[ sse.stepfft.c:31:10 ]
These two MIs represent the identical value, 3.31..., for one variable, ds,
but they are not identical because they represent two separate instances of
the inlined variable "ds".
llvm-svn: 134620
Add a MI->emitError() method that the backend can use to report errors
related to inline assembly. Call it from X86FloatingPoint.cpp when the
constraints are wrong.
This enables proper clang diagnostics from the backend:
$ clang -c pr30848.c
pr30848.c:5:12: error: Inline asm output regs must be last on the x87 stack
__asm__ ("" : "=u" (d)); /* { dg-error "output regs" } */
^
1 error generated.
llvm-svn: 134307
sink them into MC layer.
- Added MCInstrInfo, which captures the tablegen generated static data. Changed
TargetInstrInfo so it's based off MCInstrInfo.
llvm-svn: 134021
Both become <earlyclobber> defs on the INLINEASM MachineInstr, but we
now use two different asm operand kinds.
The new Kind_Clobber is treated identically to the old
Kind_RegDefEarlyClobber for now, but x87 floating point stack inline
assembly does care about the difference.
This will pop a register off the stack:
asm("fstp %st" : : "t"(x) : "st");
While this will pop the input and push an output:
asm("fst %st" : "=&t"(r) : "t"(x));
We need to know if ST0 was a clobber or an output operand, and we can't
depend on <dead> flags for that.
llvm-svn: 133902
The INLINEASM MachineInstrs have an immediate operand describing each
original inline asm operand. Decode the bits in MachineInstr::print() so
it is easier to read:
INLINEASM <es:rorq $1,$0>, $0:[regdef], %vreg0<def>, %vreg1<def>, $1:[imm], 1, $2:[reguse] [tiedto:$0], %vreg2, %vreg3, $3:[regdef-ec], %EFLAGS<earlyclobber,imp-def>
llvm-svn: 133901
It can happen that a live debug variable is the last use of a sub-register, and
the register allocator will pick a larger register class for the virtual
register. If the allocated register doesn't support the sub-register index,
just use %noreg for the debug variables instead of asserting.
In PR9872, a debug variable ends up in the sub_8bit_hi part of a GR32_ABCD
register. The register is split and one part is inflated to GR32 and assigned
%ESI because there are no more normal uses of sub_8bit_hi.
Since %ESI doesn't have that sub-register, substPhysReg asserted. Now it will
simply insert a %noreg instead, and the debug variable will be marked
unavailable in that range.
We don't currently have a way of saying that !"value" is in bits 8-15 of
%ESI; I don't know if DWARF even supports that.
llvm-svn: 131073
There can be multiple defs for a single virtual register when they are defining
sub-registers.
The missing <dead> flag was stopping the inline spiller from eliminating dead
code after rematerialization.
llvm-svn: 128888
Print virtual registers numbered from 0 instead of the arbitrary
FirstVirtualRegister. The first virtual register is printed as %vreg0.
TRI::NoRegister is printed as %noreg.
llvm-svn: 123107
Instead encode the llvm IR level property "HasSideEffects" in an operand
(shared with IsAlignStack). Added MachineInstr::hasUnmodeledSideEffects() to
check the operand when the instruction is an INLINEASM.
This allows memory instructions to be moved around INLINEASM instructions.
llvm-svn: 123044
Pull an unsigned out of the Contents union such that it has the same size as two
pointers and no padding.
Arrange members such that the Contents union and all pointers can be 8-byte
aligned without padding.
This speeds up code generation by 0.8% on a 64-bit host. 32-bit hosts should be
unaffected.
llvm-svn: 116857
"getFixedStack" on the MachinePointerInfo class. While
this isn't the problem I'm setting out to solve, it is the
right way to eliminate PseudoSourceValue, so lets go with it.
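Usage then becomes something like this (a sketch; the flag, size and
alignment values are arbitrary, and the exact signatures have shifted over
time):

  MachineMemOperand *MMO = MF.getMachineMemOperand(
      MachinePointerInfo::getFixedStack(FrameIdx),
      MachineMemOperand::MOLoad, Size, Align);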
llvm-svn: 114406
MachinePointerInfo struct, no functionality change.
This also adds an assert to MachineMemOperand::MachineMemOperand
that verifies that the Value* is either null or is an IR pointer type.
llvm-svn: 114389