target machine from those that are only needed by codegen. The goal is to
sink the essential target description into the MC layer so we can start
building MC-based tools without needing to link in the entire codegen.
The first step is to refactor TargetRegisterInfo. This patch adds a base
class, MCRegisterInfo, from which TargetRegisterInfo is derived, and changes
TableGen to separate the register description from the codegen-only parts.
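A rough sketch of the intended layering (simplified, not the exact
declarations in the patch): the MC-level class carries only the static
register description, and the codegen-level class derives from it.

// Hypothetical outline of the split described above.
class MCRegisterInfo {
  // TableGen-emitted register descriptions (names, aliases, etc.) live here.
};

class TargetRegisterInfo : public MCRegisterInfo {
  // Codegen-only queries (reserved registers, allocation orders, frame
  // register, etc.) stay at this level.
};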
llvm-svn: 133782
parameters if SM >= 2.0
- Update test cases to be more robust against register allocation changes
- Bump up the number of registers to 128 per type
- Include Python script to re-generate register file with any number of
registers
llvm-svn: 133736
"Reinstate r133435 and r133449 (reverted in r133499) now that the clang
self-hosted build failure has been fixed (r133512)."
Due to some additional warnings.
llvm-svn: 133700
If the linker supports it, this will hold the CIE and FDE information in a
compact format. The implementation of the compact unwinding emission is coming
soon.
llvm-svn: 133658
1. (((x) & 0xFF00) >> 8) | (((x) & 0x00FF) << 8)
=> (bswap x) >> 16
2. ((x&0xff)<<8)|((x&0xff00)>>8)|((x&0xff000000)>>8)|((x&0x00ff0000)<<8)
=> (rotl (bswap x) 16)
This allows us to eliminate most of the def : Pat patterns for the ARM rev16
and revsh instructions. It catches many more cases for ARM and x86.
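For reference, a hedged example of the kind of C-level source that now maps
onto these patterns (illustrative only; the function names are made up, and
unsigned is assumed to be 32 bits):

// Pattern 1: swap the two low bytes; recognized as (bswap x) >> 16.
unsigned swapLowBytes(unsigned x) {
  return ((x & 0xFF00u) >> 8) | ((x & 0x00FFu) << 8);
}

// Pattern 2: swap the bytes within each halfword; recognized as
// (rotl (bswap x) 16), i.e. the rev16 operation on ARM.
unsigned swapBytesInHalfwords(unsigned x) {
  return ((x & 0x000000FFu) << 8) | ((x & 0x0000FF00u) >> 8) |
         ((x & 0xFF000000u) >> 8) | ((x & 0x00FF0000u) << 8);
}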
rdar://9609108
llvm-svn: 133503
The current implementation generates stack loads/stores, which are
really just mov instructions from/to "special" registers. This may
not be the most efficient implementation, compared to an approach where
the stack registers are directly folded into instructions, but this is
easier to implement and I have yet to see a case where ptxas is unable
to see through this kind of register usage and know what is really
going on.
llvm-svn: 133443
Change PHINodes to store simple pointers to their incoming basic blocks,
instead of full-blown Uses.
Note that this loses an optimization in SplitCriticalEdge(), because we
can no longer walk the use list of a BasicBlock to find phi nodes. See
the comment I removed starting "However, the foreach loop is slow for
blocks with lots of predecessors".
Extend replaceAllUsesWith() on a BasicBlock to also update any phi
nodes in the block's successors. This mimics what would have happened
when PHINodes were proper Users of their incoming blocks. (Note that
this only works if OldBB->replaceAllUsesWith(NewBB) is called when
OldBB still has a terminator instruction, so it still has some
successors.)
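A hedged sketch of the usage pattern this preserves (the helper name is
illustrative, not code from the patch):

#include "llvm/BasicBlock.h"

// Redirect everything that referred to OldBB so it refers to NewBB instead.
// With this change the call also rewrites the incoming-block entries of phi
// nodes in OldBB's successors; it therefore has to run while OldBB still has
// its terminator, so those successors can be found.
static void redirectBlock(llvm::BasicBlock *OldBB, llvm::BasicBlock *NewBB) {
  OldBB->replaceAllUsesWith(NewBB);
}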
llvm-svn: 133435
Change various bits of code to make better use of the existing PHINode
API, to insulate them from forthcoming changes in how PHINodes store
their operands.
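Concretely, this means going through the PHINode accessors rather than the
raw operand list; a hedged illustration (not code from the patch):

#include "llvm/Instructions.h"

// Walk the incoming (value, block) pairs of a phi through the official API,
// so the code is unaffected by how the operands are laid out internally.
static void visitIncoming(llvm::PHINode *PN) {
  for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
    llvm::Value *V = PN->getIncomingValue(i);
    llvm::BasicBlock *BB = PN->getIncomingBlock(i);
    (void)V;
    (void)BB;
  }
}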
llvm-svn: 133434
The LSDA is a bit difficult for the uninitiated to read. Even with comments,
it's not always clear what's going on. This wraps the ASM streamer in a class
that retains the LSDA and then emits a human-readable description of what's
going on in it.
So instead of having to make sense of:
Lexception1:
.byte 255
.byte 155
.byte 168
.space 1
.byte 3
.byte 26
Lset0 = Ltmp7-Leh_func_begin1
.long Lset0
Lset1 = Ltmp812-Ltmp7
.long Lset1
Lset2 = Ltmp913-Leh_func_begin1
.long Lset2
.byte 3
Lset3 = Ltmp812-Leh_func_begin1
.long Lset3
Lset4 = Leh_func_end1-Ltmp812
.long Lset4
.long 0
.byte 0
.byte 1
.byte 0
.byte 2
.byte 125
.long __ZTIi@GOTPCREL+4
.long __ZTIPKc@GOTPCREL+4
you can read this instead:
## Exception Handling Table: Lexception1
## @LPStart Encoding: omit
## @TType Encoding: indirect pcrel sdata4
## @TType Base: 40 bytes
## @CallSite Encoding: udata4
## @Action Table Size: 26 bytes
## Action 1:
## A throw between Ltmp7 and Ltmp812 jumps to Ltmp913 on an exception.
## For type(s): __ZTIi@GOTPCREL+4 __ZTIPKc@GOTPCREL+4
## Action 2:
## A throw between Ltmp812 and Leh_func_end1 does not have a landing pad.
llvm-svn: 133286
* rounding modes for fp add, mul, sub now use .rn
* float -> int rounding correctly uses .rzi not .rni
* 32bit fdiv for sm13 uses div.rn (instead of div.approx)
* 32bit fdiv for sm10 now uses div (instead of div.approx)
Approx is not IEEE 754 compatible (and should be optionally set by a flag to the backend instead). The .rn rounding modifier is the PTX default anyway, but it's better to be explicit.
All these modifiers should be available by using __fmul_rz functions for example, but support will need to be added for this in the backend.
Patch by Dan Bailey
llvm-svn: 133253
The reserved R14-R15 are always saved in the prolog, and using CSRs
starting from R13 allows them to be saved in one instruction.
Thanks to Anton for explaining this.
llvm-svn: 133233
Also switch the return type to ArrayRef<unsigned>, which works out nicely
for ARM's implementation of this function because of the clever ArrayRef
constructors.
The name change indicates that the returned allocation order may contain
reserved registers as has been the case for a while.
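A hedged sketch of why ArrayRef<unsigned> is convenient here (the names are
illustrative, not the actual ARM code): an override can return a statically
sized table and let the ArrayRef constructors deduce the size.

#include "llvm/ADT/ArrayRef.h"

static llvm::ArrayRef<unsigned> exampleRawAllocationOrder(bool PreferLow) {
  // Allocation-order tables; they may include reserved registers.
  static const unsigned LowFirst[]  = { 1, 2, 3, 4, 5, 6 };
  static const unsigned HighFirst[] = { 6, 5, 4, 3, 2, 1 };
  if (PreferLow)
    return LowFirst;   // implicit ArrayRef construction from a C array
  return HighFirst;
}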
llvm-svn: 133216
This is intended to support using REG_SEQUENCE SDNode's with type MVT::untyped, and is part of the long road to eliminating some of the hacks we currently use to support register pairs and other strange constraints, particularly on ARM NEON.
llvm-svn: 133178
accumulator forwarding. Specifically (from SVN log entry):
Distribute (A + B) * C to (A * C) + (B * C) to make use of NEON multiplier
accumulator forwarding:
vadd d3, d0, d1
vmul d3, d3, d2
=>
vmul d3, d0, d2
vmla d3, d1, d2
Make sure it catches cases where operand 1 is add/fadd/sub/fsub, which was
intended in the original revision.
llvm-svn: 133127
This simplifies many of the target description files since it is common
for register classes to be related or contain sequences of numbered
registers.
I have verified that this doesn't change the files generated by TableGen
for ARM and X86. It alters the allocation order of MBlaze GPR and Mips
FGR32 registers, but I believe the change is benign.
llvm-svn: 133105
optimizations when emitting calls to the function; instead those calls may
use faster relocations which require the function to be immediately resolved
upon loading the dynamic object featuring the call. This is useful when it
is known that the function will be called frequently and pervasively and
therefore there is no merit in delaying binding of the function.
Currently only implemented for x86-64, where it turns into a call through
the global offset table.
Patch by Dan Gohman, who assures me that he's going to add LangRef documentation
for this once it's committed.
llvm-svn: 133080
Note that this actually changes code generation, and someone who
understands this target better should check the changes.
- R12Q is now allocatable. I think it was omitted from the allocation
order by mistake since it isn't reserved. It is apparently used as a
GOT pointer sometimes, and it should probably be reserved if that is
the case.
- The GR64 registers are allocated in a different order now. The
register allocator will automatically put the CSRs last. There were
other changes to the order that may have been significant.
The test fix is because r0 and r1 swapped places in the allocation order.
llvm-svn: 133067
At the time I wrote this code (circa 2007), TargetRegisterInfo was using a std::set to perform these queries. Switching to the static hashtables was an obvious improvement, but in reality there's no reason to do anything other than scan.
With this change, total LLC time on a whole-program 403.gcc is reduced by approximately 1.5%, almost all of which comes from a 15% reduction in LiveVariables time. It also reduces the binary size of LLC by 86KB, thanks to eliminating a bunch of very large static tables.
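A hedged sketch of the idea (hypothetical helper, not the actual code): the
per-register lists emitted by TableGen are short, so a straight linear scan
answers the query without any static hash tables.

#include <cstddef>

// Return true if Reg appears in the (typically very short) NumRegs-element
// list Regs.
static bool listContains(const unsigned *Regs, std::size_t NumRegs,
                         unsigned Reg) {
  for (std::size_t i = 0; i != NumRegs; ++i)
    if (Regs[i] == Reg)
      return true;
  return false;
}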
llvm-svn: 133051
the bits being cleared by the AND are not demanded by the BFI.
The previous BFI dag combine rule was actually incorrect (or used to be
correct until BFI representation changed).
rdar://9609030
llvm-svn: 133034
The logic for reserving R4 for use as a scratch needs to match that for
actually using it. Also, it's not necessary for immediates <= 508, so
adjust the value checked.
llvm-svn: 132934
we try to branch to them.
Before we were creating successor lists with duplicated entries. Fixing that
found a bug in isBlockOnlyReachableByFallthrough that would cause it to
return the wrong answer for
-----------
...
jne foo
jmp bar
foo:
----------
llvm-svn: 132882
memcpy/memset symbol doesn't get marked up correctly in PIC modes otherwise.
Should fix llvm-x86_64-linux-checks buildbot. Followup to r132864.
llvm-svn: 132869
causing an assertion failure downstream. This fixes <rdar://problem/9562908>.
This really seems like it should always be set at CCState creation time, so mistakes like
this can never happen. I'll take a look at doing that.
llvm-svn: 132811
The register allocators automatically filter out reserved registers and
place the callee saved registers last in the allocation order, so custom
methods are no longer necessary just for that.
Some targets still use custom allocation orders:
ARM/Thumb: The high registers are removed from GPR in thumb mode. The
NEON allocation orders prefer to use non-VFP2 registers first.
X86: The GR8 classes omit AH-DH in x86-64 mode to avoid REX trouble.
SystemZ: Some of the allocation orders are omitting R12 aliases without
explanation. I don't understand this target well enough to fix that. It
looks like all the boilerplate could be removed by reserving the right
registers.
llvm-svn: 132781