rather than deriving the StringRef from the Start and End SMLocs.
Using the Start and End SMLocs works fine for operands such as [Symbol], but
not for operands such as [Symbol + ImmDisp]. All existing test cases that
reference a variable exercise this patch.
rdar://13602265
llvm-svn: 179109
On PowerPC, non-vector loads and stores have r+i forms; however, in functions
with large stack frames these were not being used to access slots far from the
stack pointer because such slots were out of range for the signed 16-bit
immediate offset field. This increases register pressure because we need a
separate register for each offset (when the r+r form is used). By enabling
virtual base registers, we can deal with large stack frames without unduly
increasing register pressure.
llvm-svn: 179105
The save area is twice as big and there is no struct return slot. The
stack pointer is always 16-byte aligned (after adding the bias).
Also eliminate the stack adjustment instructions around calls when the
function has a reserved stack frame.
llvm-svn: 179083
The costs are overfitted so that I can still use the legalization factor.
For example, the following kernel has about half the throughput when vectorized
compared to unvectorized when compiled with SSE2. Before this patch we would vectorize it.
unsigned short A[1024];
double B[1024];
void f() {
  int i;
  for (i = 0; i < 1024; ++i) {
    B[i] = (double) A[i];
  }
}
radar://13599001
llvm-svn: 179033
PowerPC has a conditional branch to the link register (return) instruction: BCLR.
This should be used any time we'd otherwise have a conditional branch to a
return. This adds a small pass, PPCEarlyReturn, which runs just prior to the
branch selection pass (and, importantly, after block placement) to generate
these conditional returns when possible. It will also eliminate unconditional
branches to returns (these happen rarely; most of the time these have already
been tail duplicated by the time PPCEarlyReturn is invoked). This is a nice
optimization for small functions that do not maintain a stack frame.
llvm-svn: 179026
I've managed to convince myself that AArch64's acquire/release
instructions are sufficient to guarantee C++11's required semantics,
even in the sequentially-consistent case.
llvm-svn: 179005
First, we should not cheat: fsel-based lowering of select_cc is a
finite-math-only optimization (the ISA manual, section F.3 of v2.06, makes
this clear, as does a note in our own README).
This also adds fsel-based lowering of EQ and NE condition codes. As it turned
out, fsel generation was covered by a grand total of zero regression test
cases. I've added some test cases to cover the existing behavior (which is now
finite-math only), as well as the new EQ cases.
llvm-svn: 179000
Integer return values are sign or zero extended by the callee, and
structs up to 32 bytes in size can be returned in registers.
The CC_Sparc64 CallingConv definition is shared between
LowerFormalArguments_64 and LowerReturn_64. Function arguments and
return values are passed in the same registers.
The inreg flag is also used for return values. This is required to handle
C functions returning structs containing floats and ints:
struct ifp {
  int i;
  float f;
};
struct ifp f(void);
LLVM IR:
define inreg { i32, float } @f() {
  ...
  ret { i32, float } %retval
}
The ABI requires that %retval.i is returned in the high bits of %i0
while %retval.f goes in %f1.
Without the inreg return value attribute, %retval.i would go in %i0 and
%retval.f would go in %f3, which is a more efficient way of returning
multiple values, but it is not ABI compliant for returning C structs.
llvm-svn: 178966
64-bit SPARC v9 processes use biased stack and frame pointers, so the
current function's stack frame is located at %sp+BIAS .. %fp+BIAS where
BIAS = 2047.
This makes more local variables directly accessible via [%fp+simm13]
addressing.
llvm-svn: 178965
There are certain PPC instructions into which we can fold a zero immediate
operand. We can detect such cases by looking at the register class required
by the using operand (so long as it is not otherwise constrained).
llvm-svn: 178961
All arguments are formally assigned to stack positions and then promoted
to floating point and integer registers. Since there are more floating
point registers than integer registers, this can cause situations where
floating point arguments are assigned to registers after integer
arguments that were assigned to the stack.
Use the inreg flag to indicate 32-bit fragments of structs containing
both float and int members.
The three-way shadowing between stack, integer, and floating point
registers requires custom argument lowering. The good news is that
return values are passed in the exact same way, and we can share the
code.
Still missing:
- Update LowerReturn to handle structs returned in registers.
- LowerCall.
- Variadic functions.
llvm-svn: 178958
The code emitter knows how to encode operands whose name matches one of
the encoding fields. If there is no match, the code emitter relies on
the order of the operand and field definitions to determine how operands
should be encoded. Matching by order makes it easy to accidentally break
the instruction encodings, so we prefer to match by name.
Reviewed-by: Christian König <christian.koenig@amd.com>
llvm-svn: 178930
SITargetLowering::analyzeImmediate() was converting the 64-bit values
to 32-bit and then checking if they were an inline immediate. Some
of these conversions caused this check to succeed and produced
S_MOV instructions with 64-bit immediates, which are illegal.
v2:
- Clean up logic
Reviewed-by: Christian König <christian.koenig@amd.com>
llvm-svn: 178927
On cores for which we know the misprediction penalty, and we have
the isel instruction, we can profitably perform early if conversion.
This enables us to replace some small branch sequences with selects
and avoid the potential stalls from mispredicting the branches.
Enabling this feature required implementing canInsertSelect and
insertSelect in PPCInstrInfo; isel code in PPCISelLowering was
refactored to use these functions as well.
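For reference, a rough sketch of the two hooks as they appear in
TargetInstrInfo (signatures recalled from that era; treat the details as
assumptions rather than the exact patch):
// Members of TargetInstrInfo overridden by PPCInstrInfo (sketch):
virtual bool canInsertSelect(const MachineBasicBlock &MBB,
                             const SmallVectorImpl<MachineOperand> &Cond,
                             unsigned TrueReg, unsigned FalseReg,
                             int &CondCycles,
                             int &TrueCycles, int &FalseCycles) const;
virtual void insertSelect(MachineBasicBlock &MBB,
                          MachineBasicBlock::iterator MI, DebugLoc DL,
                          unsigned DstReg,
                          const SmallVectorImpl<MachineOperand> &Cond,
                          unsigned TrueReg, unsigned FalseReg) const;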
llvm-svn: 178926
The manual states that there is a minimum of 13 cycles from when the
mispredicted branch is issued to when the correct branch target is
issued.
llvm-svn: 178925
During LTO, the target options on functions within the same Module may
change. This would necessitate resetting some of the back-end. Do this for X86,
because it's a Friday afternoon.
llvm-svn: 178917
memory operands.
Essentially, this layers an infix calculator on top of the parsing state
machine. The scale on the index register is still expected to be an immediate:
__asm mov eax, [eax + ebx*4]
and will not work with more complex expressions such as:
__asm mov eax, [eax + ebx*(2*2)]
The plus and minus binary operators assume the numeric value of a register is
zero so as to not change the displacement. Register operands should never
be an operand for a multiply or divide operation; the scale*indexreg
expression is always replaced with a zero on the operand stack to prevent
such a case.
rdar://13521380
llvm-svn: 178881
SSE2 has efficient support for shifts by a scalar. My previous change of making
shifts expensive did not take this into account, marking all shifts as expensive.
This would prevent vectorization from happening where it is actually beneficial.
With this change we differentiate between shifts of constants and other shifts.
radar://13576547
llvm-svn: 178808
On certain architectures we can support an efficient vectorized version of
instructions if the operand value is uniform (splat) or a constant scalar.
An example of this is a vector shift on x86.
We can efficiently support
for (i = 0; i < n; i += 4)
  w[0:3] = v[0:3] << <2, 2, 2, 2>
but not
for (i = 0; i < n; i += 4)
  w[0:3] = v[0:3] << x[0:3]
This patch adds a parameter to getArithmeticInstrCost to further qualify operand
values as uniform or uniform constant.
Targets can then choose to return a different cost for instructions with such
operand values.
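For illustration, a sketch of the extended hook (the OK_* names and the
defaulted parameters are recalled from memory and should be treated as
assumptions):
class Type;
class TargetTransformInfo {
public:
  enum OperandValueKind {
    OK_AnyValue,             // operand may take any value
    OK_UniformValue,         // operand is a uniform (splat) value
    OK_UniformConstantValue  // operand is a uniform constant
  };
  virtual unsigned getArithmeticInstrCost(
      unsigned Opcode, Type *Ty,
      OperandValueKind Opd1Info = OK_AnyValue,
      OperandValueKind Opd2Info = OK_AnyValue) const;
};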
A follow-up commit will test this feature on x86.
radar://13576547
llvm-svn: 178807
BCL is normally a conditional branch-and-link instruction, but has
an unconditional form (which is used in the SjLj code, for example).
To make clear that this BCL instruction definition is specifically
the special unconditional form (which does not meaningfully take
a condition-register input), rename it to BCLalways.
No functionality change intended.
llvm-svn: 178803
The DAGCombine logic that recognized a/sqrt(b) and transformed it into
a multiplication by the reciprocal sqrt did not handle cases where the
sqrt and the division were separated by an fpext or fptrunc.
llvm-svn: 178801
It fixes following tests for Hexagon:
CodeGen/Generic/2003-07-29-BadConstSbyte.ll
CodeGen/Generic/2005-10-21-longlonggtu.ll
CodeGen/Generic/2009-04-28-i128-cmp-crash.ll
CodeGen/Generic/MachineBranchProb.ll
CodeGen/Generic/builtin-expect.ll
CodeGen/Generic/pr12507.ll
llvm-svn: 178794
At the time when the XCore backend was added there were some issues with
overlapping register classes, but these all seem to be fixed now.
Describing the register classes correctly allows us to get rid of a
codegen-only instruction (LDAWSP_lru6_RRegs) and it means we can
disassemble ru6 instructions that use registers above r11.
llvm-svn: 178782
The Thumb2SizeReduction pass avoids false CPSR dependencies, except it
still aggressively creates tMOVi8 instructions because they are so
common.
Avoid creating false CPSR dependencies even for tMOVi8 instructions when
the CPSR flags are known to have high latency. This allows integer
computation to overlap floating point computations.
Also process blocks in a reverse post-order and propagate high-latency
flags to successors.
<rdar://problem/13468102>
llvm-svn: 178773
This requires v9 cmov instructions using the %xcc flags instead of the
%icc flags.
Still missing:
- Select floats on %xcc flags.
- Select i64 on %fcc flags.
llvm-svn: 178737
The default logic does not correctly identify costs of casts because they are
marked as custom on x86.
In some cases, where the shift amount is a scalar, we would be able to generate
better code. Unfortunately, when this is the case the value (the splat) will get
hoisted out of the loop, thereby making it invisible to ISel.
radar://13130673
radar://13537826
llvm-svn: 178703
This patch follows up on work done by Bill Schmidt in r178277,
and replaces most of the remaining uses of VRRC in ISEL DAG patterns.
The resulting .inc files are identical except for comments, so
no change in code generation is expected.
llvm-svn: 178656
For this we need to use a libcall. Previously LLVM didn't implement
libcall support for frem, so I've added it in the usual
straightforward manner. A test case from the bug report is included.
llvm-svn: 178639
The same compare instruction is used for 32-bit and 64-bit compares. It
sets two different sets of flags: icc and xcc.
This patch adds a conditional branch instruction using the xcc flags for
64-bit compares.
llvm-svn: 178621
When unsafe FP math operations are enabled, we can use the fre[s] and
frsqrte[s] instructions, which generate reciprocal (sqrt) estimates, together
with some Newton iteration, in order to quickly generate floating-point
division and sqrt results. All of these instructions are separately optional,
and so each has its own feature flag (except for the Altivec instructions,
which are covered under the existing Altivec flag). Doing this is not only
faster than using the IEEE-compliant fdiv/fsqrt instructions, but allows these
computations to be pipelined with other computations in order to hide their
overall latency.
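For reference, one step of the standard Newton-Raphson refinement that such
estimates feed into (the number of iterations actually emitted depends on the
required precision):
// One refinement step per estimate (standard formulas; each maps nicely
// onto fused multiply-subtract instructions):
double refineRecip(double a, double x0) { // x1 = x0*(2 - a*x0)
  return x0 * (2.0 - a * x0);
}
double refineRsqrt(double a, double x0) { // x1 = x0*(1.5 - 0.5*a*x0*x0)
  return x0 * (1.5 - 0.5 * a * x0 * x0);
}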
I've also added a couple of missing fnmsub patterns which turned out to be
missing (but are necessary for good code generation of the Newton iterations).
Altivec needs a similar fix, but that will probably be more complicated because
fneg is expanded for Altivec's v4f32.
llvm-svn: 178617
This patch initializes t9 to the handler address, but only if the relocation
model is pic. This handles the case where the handler to which eh.return jumps
points to the start of the function.
Patch by Sasa Stankovic.
llvm-svn: 178588
This patch fixes the following two tests which have been failing on
llvm-mips-linux builder since r178403:
LLVM :: Analysis/Profiling/load-branch-weights-ifs.ll
LLVM :: Analysis/Profiling/load-branch-weights-loops.ll
llvm-svn: 178584
qualifiers.
This patch only adds support for parsing these identifiers in the
X86AsmParser. The front-end interface isn't capable of looking up
these identifiers at this point in time. The end result is the
compiler now errors during object file emission, rather than at
parse time. Test case coming shortly.
Part of rdar://13499009 and PR13340
llvm-svn: 178566
When doing a partword atomic operation, a lwarx was being paired with
a stdcx. instead of a stwcx. when compiling for a 64-bit target. The
target has nothing to do with it in this case; we always need a stwcx.
Thanks to Kai Nacke for reporting the problem.
llvm-svn: 178559
The last resort pattern produces 6 instructions, and there are still
opportunities for materializing some immediates in fewer instructions.
llvm-svn: 178526
SPARC v9 defines new 64-bit shift instructions. The 32-bit shift right
instructions are still usable as zero and sign extensions.
This adds new F3_Sr and F3_Si instruction formats that probably should
be used for the 32-bit shifts as well. They don't really encode a
simm13 field.
llvm-svn: 178525
The 'sparc' architecture produces 32-bit code while 'sparcv9' produces
64-bit code.
It is also possible to run 32-bit code using SPARC v9 instructions with:
llc -march=sparc -mattr=+v9
llvm-svn: 178524
This is far from complete, but it is enough to make it possible to write
test cases using i64 arguments.
Missing features:
- Floating point arguments.
- Receiving arguments on the stack.
- Calls.
llvm-svn: 178523
We are going to use the same registers for 32-bit and 64-bit values, but
in two different register classes. The I64Regs register class has a
larger spill size and alignment.
The addition of an i64 register class confuses TableGen's type
inference, so it is necessary to clarify the type of some immediates and
the G0 register.
In 64-bit mode, pointers are i64 and should use the I64Regs register
class. Implement getPointerRegClass() to dynamically provide the pointer
register class depending on the subtarget. Use ptr_rc and iPTR for
memory operands.
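A minimal sketch of what that override might look like (the SP:: class names
are assumptions based on the generated register-class names):
// Sketch; not necessarily the exact implementation.
const TargetRegisterClass *
SparcRegisterInfo::getPointerRegClass(const MachineFunction &MF,
                                      unsigned Kind) const {
  return Subtarget.is64Bit() ? &SP::I64RegsRegClass : &SP::IntRegsRegClass;
}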
Finally, add the i64 type to the IntRegs register class. This register
class is not used to hold i64 values; I64Regs is for that. The type is
required to appease TableGen's type checking in output patterns like this:
def : Pat<(add i64:$a, i64:$b), (ADDrr $a, $b)>;
SPARC v9 uses the same ADDrr instruction for i32 and i64 additions, and
TableGen doesn't know to check the type of register sub-classes.
llvm-svn: 178522
Buffered means a later divide may be executed out-of-order while a
prior divide is sitting (buffered) in a reservation station.
You can tell it's not pipelined, because operations that use it
reserve it for more than one cycle:
def : WriteRes<WriteIDiv, [HWPort0, HWDivider]> {
  let Latency = 25;
  let ResourceCycles = [1, 10];
}
We don't currently distinguish between an unpipelined operation and one
that is split into multiple micro-ops requiring the same unit, except
that the latter may have NumMicroOps > 1 if they also consume
issue/dispatch resources.
llvm-svn: 178519
The P7 and A2 have additional floating-point conversion instructions which
allow a direct two-instruction sequence (plus load/store) to convert from all
combinations (signed/unsigned i32/i64) <--> (float/double) (on previous cores,
only some combinations were directly available).
llvm-svn: 178480
The popcntw instruction is available whenever the popcntd instruction is
available, and performs a separate popcnt on the lower and upper 32-bits.
Ignoring the high-order count, this can be used for the 32-bit input case
(saving on the explicit zero extension otherwise required to use popcntd).
llvm-svn: 178470
PPCISD::STFIWX is really a memory opcode, and so it should come after
FIRST_TARGET_MEMORY_OPCODE, and we should use DAG.getMemIntrinsicNode to create
nodes using it.
No functionality change intended (although there could be optimization benefits
from preserving the MMO information).
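A hedged sketch of creating the node that way (the operand names here are
assumptions):
// Sketch: STFIWX built as a target memory intrinsic node so that the
// MachineMemOperand (MMO) is preserved.
SDValue Ops[] = { Chain, Value, Ptr };
SDValue Store =
    DAG.getMemIntrinsicNode(PPCISD::STFIWX, dl, DAG.getVTList(MVT::Other),
                            Ops, 3, MVT::i32, MMO);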
llvm-svn: 178468
Reapply r177968:
After commit 178074 we can now have undefined scheduler variants.
Move the CortexA9 resources into the CortexA9 SchedModel namespace. Define
resource mappings under the CortexA9 SchedModel. Define resources and mappings
for the SwiftModel.
Incorporate Andrew's feedback.
llvm-svn: 178460
ImmToIdxMap should be a DenseMap (not a std::map) because there
is no ordering requirement. Also, we don't need a separate list
of instructions for noImmForm in eliminateFrameIndex, because this
list is essentially the complement of the keys in ImmToIdxMap.
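A small sketch of the resulting shape (the example entry is illustrative):
// Immediate-form opcode -> indexed-form opcode; there is no iteration-order
// requirement, so a DenseMap suffices, and "has no immediate form" is
// simply the complement of the key set.
static const DenseMap<unsigned, unsigned> &getImmToIdxMap() {
  static DenseMap<unsigned, unsigned> Map;
  if (Map.empty())
    Map[PPC::LD] = PPC::LDX; // illustrative entry
  return Map;
}
static bool noImmForm(unsigned Opc) { return !getImmToIdxMap().count(Opc); }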
No functionality change intended.
llvm-svn: 178450
This instruction is available on modern PPC64 CPUs, and is now used
to improve the SINT_TO_FP lowering (by eliminating the need for the
separate sign extension instruction and decreasing the amount of
needed stack space).
llvm-svn: 178446
The existing SINT_TO_FP code for i32 -> float/double conversion was disabled
because it relied on broken EXTSW_32/STD_32 instruction definitions. The
original intent had been to enable these 64-bit instructions to be used on CPUs
that support them even in 32-bit mode. Unfortunately, this form of lying to
the infrastructure was buggy (as explained in the FIXME comment) and had
therefore been disabled.
This re-enables this functionality, using regular DAG nodes, but only when
compiling in 64-bit mode. The old STD_32/EXTSW_32 definitions (which were dead)
are removed.
llvm-svn: 178438
'@SECREL' is what is used by the Microsoft assembler, but GNU as expects '@SECREL32'.
With the patch, the MC-generated code works fine in combination with a recent GNU as (2.23.51.20120920 here).
Patch by David Nadlinger!
Differential Revision: http://llvm-reviews.chandlerc.com/D429
llvm-svn: 178427
specific code paths.
This allows us to write code like:
if (__nvvm_reflect("FOO"))
  // Do something
else
  // Do something else
and compile into a library, then give "FOO" a value at kernel
compile-time so the check becomes a no-op.
llvm-svn: 178416
derived class MipsSETargetLowering.
We shouldn't be generating madd/msub nodes if the target is Mips16, since
Mips16 doesn't have support for multiply-add/sub instructions.
llvm-svn: 178404
The new instructions have explicit register output operands and use table-gen
patterns instead of C++ code to do instruction selection.
Mips16's instructions are unaffected by this change.
llvm-svn: 178403
Like nearbyint, rint can be implemented on PPC using the frin instruction. The
complication comes from the fact that rint needs to set the FE_INEXACT flag
when the result does not equal the input value (and frin does not do that). As
a result, we use a custom inserter which, after the rounding, compares the
rounded value with the original, and if they differ, explicitly sets the XX bit
in the FPSCR register (which corresponds to FE_INEXACT).
Once LLVM has better modeling of the floating-point environment we should be
able to (often) eliminate this extra complexity.
llvm-svn: 178362
These instructions are available on the P5x (and later) and on the A2. They
implement the standard floating-point rounding operations (floor, trunc, etc.).
One caveat: frin (round to nearest) does not implement "ties to even", and so
is only enabled in fast-math mode.
llvm-svn: 178337
The Mips assembler supports macros that allow the OR instruction
to take an immediate parameter. This patch adds an instruction
alias that converts this macro into a Mips ORI instruction.
Contributor: Vladimir Medic
llvm-svn: 178316
- RDRAND always clears the destination value when a random value is not
available (i.e. CF == 0). This value is truncated or zero-extended as
the false boolean value to be returned. Boolean simplification needs
to skip this 'zext' or 'trunc' node.
llvm-svn: 178312
To enable a load of a call address to be folded with that call, this
load is moved from outside of the callseq into the callseq. Such moving
adds a non-glued node (the load) into a glued sequence. This non-glued
load is only removed when DAG selection folds it into a memory-form
call instruction. When such instruction selection is disabled, it breaks
the DAG schedule.
To prevent that, such moving is disabled when the target favors register
indirect calls.
The previous workaround disabling CALL32m/CALL64m instruction selection is removed.
llvm-svn: 178308
The Mips assembler allows the following to be used as aliased instructions:
jal $rs for jalr $rs
jal $rd,$rs for jalr $rd,$rs
This patch provides alias definitions in td files and test cases to show the usage.
Contributor: Vladimir Medic
llvm-svn: 178304
Compiling in 32-bit mode on a P7 would assert after 64-bit DAG combines were
added for bswap with load/store. This is because these combines are really only
valid in 64-bit mode, regardless of the CPU (and this was not being checked).
llvm-svn: 178286
This follows up Ulrich Weigand's work in PPCInstrInfo.td and
PPCInstr64Bit.td by doing the corresponding work for most of the
Altivec patterns. I have not been able to do anything for the
following classes of instructions:
(1) Vector logicals. These don't have corresponding intrinsics and
don't have a single obvious vector type. So far as I can tell I need
to leave these as VRRC. Affected instructions are: VAND, VANDC,
VNOR, VOR, VXOR, V_SET0.
(2) Instructions that make use of vector shuffle. The selection code
promotes all shuffles to v16i8, so any pattern that matches on a
shuffle is constrained. I haven't found any way to make the patterns
match on their natural types, so I plan to leave these as VRRC.
Affected instructions are: VMRG*, VSPLTB, VSPLTH, VSPLTW, VPKUHUM,
VPKUWUM.
No change in behavior is anticipated.
llvm-svn: 178277
These are 64-bit load/store with byte-swap, and available on the P7 and the A2.
Like the similar instructions for 16- and 32-bit words, these are matched in the
target DAG-combine phase against load/store-bswap pairs.
llvm-svn: 178276
PPC ISA 2.06 (P7, A2, etc.) has a popcntd instruction. Add this instruction and
tell TTI about it so that popcount-loop recognition will know about it.
llvm-svn: 178233
There were a few places where kill flags were not being set correctly, and
where 32-bit instruction variants were being used with 64-bit registers. After
r178180, this code was being triggered causing llc to assert.
llvm-svn: 178220
form of call in preference to memory indirect on Atom.
In this case, the patch applies the optimization to the code for reloading
spilled registers.
The patch also includes changes to sibcall.ll and movgs.ll, which were
failing on the Atom buildbot after the first patch was applied.
This patch by Sriram Murali.
llvm-svn: 178193
These functions should have the same list of load/store instructions. Now that
all load/store forms have been normalized (to single instructions or pseudos)
they can be resynchronized.
Found by inspection, although hopefully this will improve optimization. I've
also added some comments.
llvm-svn: 178180
indirect through a memory address is to load the memory address into
a register and then call indirect through the register.
This patch implements this improvement by modifying SelectionDAG to
force a function address which is a memory reference to be loaded
into a virtual register.
Patch by Sriram Murali.
llvm-svn: 178171
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 178127
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 178126
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 178125
The register parameter in these instructions becomes the base register in an
r+i ld instruction (and, thus, cannot be r0).
This is not yet testable because we don't yet allocate r0 (and even then any
test would be very fragile).
llvm-svn: 178121
Either operand of these pseudo instructions can be transformed into the first
operand of an isel instruction (and this operand cannot be r0).
This is not yet testable because we don't yet allocate r0 (and even when we do,
any test would be very fragile).
llvm-svn: 178119
Like the addi/addis instructions themselves, these pseudo instructions also
cannot have r0 as their register parameter (because it will be interpreted as
the value 0).
This is not yet testable because we don't yet allocate r0 (and even when we do,
any regression test would be very fragile because it would depend on the
register allocator heuristics).
llvm-svn: 178118
Some implementation detail in the forgotten past required the link
register to be placed in the GPRC and G8RC register classes. This is
just wrong on the face of it, and causes several extra intersection
register classes to be generated. I found this was having evil
effects on instruction scheduling, by causing the wrong register class
to be consulted for register pressure decisions.
No code generation changes are expected, other than some minor changes
in instruction order. Seven tests in the test bucket required minor
tweaks to adjust to the new normal.
llvm-svn: 178114
As Bill Schmidt pointed out to me, only on Darwin do we need to spill/restore
VRSAVE in the SjLj code. For non-Darwin, don't spill/restore VRSAVE (and I've
added some asserts to make sure that we're not).
As it turns out, we're not currently handling the Darwin case correctly (I've
added a FIXME in the test case). I've tried adding various implied register
definitions/uses to force the spill without success, so I'll need to address
this later.
llvm-svn: 178096
All Intel CPUs since Yonah look a lot alike, at least at the granularity
of the scheduling models. We can add more accurate models for
processors that aren't Sandy Bridge if required. Haswell will probably
need its own.
The Atom processor and anything based on NetBurst is completely
different. So are the non-Intel chips.
llvm-svn: 178080
As suggested by Bill Schmidt (in reviewing r178067), use the real register
number bit lengths (which is self-documenting, and prevents using illegal
numbers), and set only the relevant bits in HWEncoding (which defaults to 0).
No functionality change intended.
llvm-svn: 178077
As pointed out by Jakob, we don't need to maintain a separate
register-numbering table. Instead we should let TableGen generate the table for
us from the information (already present) in PPCRegisterInfo.td.
TRI->getEncodingValue is now used to access register-encoding values.
No functionality change intended.
llvm-svn: 178067
Now that the register scavenger can support multiple spill slots, and PEI can
use virtual-register-based scavenging for multiple simultaneous registers, we
can use a virtual register for the transfer register in the CR spilling code.
This should eliminate the last place (outside of the prologue/epilogue) where
we depend on the unconditional availability of the r0 register. We will soon be
able to allocate it (in a somewhat restricted sense) as a GPR.
llvm-svn: 178060
PPC's use of PEI's virtual-register-based scavenging functionality had
redefined the virtual registers (it was non-SSA). Now that PEI supports
dealing with instructions with multiple virtual registers, this can be
cleaned up to use multiple virtual registers and keep SSA form.
No functionality change intended.
llvm-svn: 178059
Now all x86 instructions that have itinerary classes also have SchedRW
lists. This is required before the new scheduling models can be used.
There are still unannotated instructions remaining, but they don't have
itinerary classes either.
llvm-svn: 178051
This reverts commit r177968. It is causing failures in a local build bot.
"fatal error: error in backend: Expected a variant SchedClass"
Original commit message:
Move the CortexA9 resources into the CortexA9 SchedModel namespace. Define
resource mappings under the CortexA9 SchedModel. Define resources and mappings
for the SwiftModel.
llvm-svn: 178028
Restore the EXEC mask early, otherwise a copy might end up not being executed.
Candidate for the mesa stable branch.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 178018
If PC or SP is the destination, the disassembler erroneously failed with the
invalid encoding, despite the manual saying that both are fine.
This patch addresses failure to decode encoding T4 of LDR (A8.8.62) which is a
postindexed load, where the offset 0xc is applied to SP after the load occurs.
llvm-svn: 178017
There remain a number of patterns that cannot (and should not)
be handled by the asm parser, in particular all the Pseudo patterns.
This commit marks those patterns as isCodeGenOnly.
No change in generated code.
llvm-svn: 178008
MCTargetDesc/PPCMCCodeEmitter.cpp currently has code like:
if (isSVR4ABI() && is64BitMode())
  Fixups.push_back(MCFixup::Create(0, MO.getExpr(),
                                   (MCFixupKind)PPC::fixup_ppc_toc16));
else
  Fixups.push_back(MCFixup::Create(0, MO.getExpr(),
                                   (MCFixupKind)PPC::fixup_ppc_lo16));
This is a problem for the asm parser, since it requires knowledge of
the ABI / 64-bit mode to be set up. However, more fundamentally,
at this point we shouldn't make such distinctions anyway; in an assembler
file, it always ought to be possible to e.g. generate TOC relocations even
when the main ABI is one that doesn't use TOC.
Fortunately, this is actually completely unnecessary; that code was added
to decide whether to generate TOC relocations, but that information is in
fact already encoded in the VariantKind of the underlying symbol.
This commit therefore merges those fixup types into one, and then decides
which relocation to use based on the VariantKind.
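Roughly, the object writer can then select the relocation like this (a
sketch; the variant-kind and relocation names are my assumptions, not
necessarily the patch's):
// Sketch: choose the relocation from the symbol's VariantKind instead
// of from a dedicated fixup kind.
static unsigned relocForLo16(MCSymbolRefExpr::VariantKind Modifier) {
  return Modifier == MCSymbolRefExpr::VK_PPC_TOC_ENTRY
             ? ELF::R_PPC64_TOC16    // @toc operand
             : ELF::R_PPC_ADDR16_LO; // plain @l operand
}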
No changes in generated code.
llvm-svn: 178007
As part of the sequence generated to implement long double -> int
conversions, we need to perform an FADD in round-to-zero mode. This is
problematical since the FPSCR is not at all modeled at the SelectionDAG
level, and thus there is a risk of getting floating point instructions
generated out of sequence with the instructions to modify FPSCR.
The current code handles this by somewhat "special" patterns that in part
have dummy operands, and/or duplicate existing instructions, making them
awkward to handle in the asm parser.
This commit changes this by leaving the "FADD in round-to-zero mode"
as an atomic operation at the SelectionDAG level, and only splitting it up into
real instructions at the MI level (via custom inserter). Since at *this*
level the FPSCR *is* modeled (via the "RM" hard register), much of the
"special" stuff can just go away, and the resulting patterns can be used by
the asm parser.
No significant change in generated code expected.
llvm-svn: 178006
The LDrs pattern is a duplicate of LD, except that it accepts memory
addresses where the displacement is a symbolLo64. An operand type
"memrs" is defined for just that purpose.
However, this wouldn't be necessary if the default "memrix" operand
type were to simply accept 64-bit symbolic addresses directly.
The only problem with that is that it uses "symbolLo", which is
hardcoded to 32-bit.
To fix this, this commit changes "memri" and "memrix" to use new
operand types for the memory displacement, which allow iPTR
instead of i32. This will also make address parsing easier to
implement in the asm parser.
No change in generated code.
llvm-svn: 178005
The ADDI/ADDI8 patterns are currently duplicated into ADDIL/ADDI8L,
which describe the same instruction, except that they accept a
symbolLo[64] operand instead of a s16imm[64] operand.
This duplication confuses the asm parser, and it is not actually
needed, since symbolLo[64] already accepts immediate operands anyway.
So this commit removes the duplicate patterns.
No change in generated code.
llvm-svn: 178004
This commit changes the ISEL patterns to use a CCBITRC operand
instead of a "pred" operand. This matches the actual instruction
text more directly, and simplifies use of ISEL with the asm parser.
In addition, this change allows some simplification of handling
the "pred" operand, as this is now only used by BCC.
No change in generated code.
llvm-svn: 178003
The BLR pattern cannot be recognized by the asm parser in its current form.
This complexity is due to an apparent attempt to enable conditional BLR
variants. However, none of those can ever be generated by current code;
the pattern is only ever created using the default "pred" operand.
To simplify the pattern and allow it to be recognized by the parser,
this commit removes those attempts at conditional BLR support.
When we later come back to actually add real conditional BLR, this
should probably be done via a fully generic conditional branch pattern.
No change in generated code.
llvm-svn: 178002
In PPCInstr64Bit.td, some branch patterns appear in a different sequence
than the corresponding 32-bit patterns in PPCInstrInfo.td.
To simplify future changes that affect both files, this commit moves
those patterns to rearrange them into a similar sequence.
No effect on generated code.
llvm-svn: 178001
Use a MapVector on types where the iteration order matters.
Otherwise we don't always produce deterministic output.
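A tiny illustration (contents made up):
#include "llvm/ADT/MapVector.h"
// MapVector iterates in insertion order, so the loop below always visits
// (42,1) then (7,2); a DenseMap makes no such guarantee.
void example() {
  llvm::MapVector<unsigned, unsigned> Order;
  Order[42] = 1;
  Order[7] = 2;
  for (llvm::MapVector<unsigned, unsigned>::iterator I = Order.begin(),
                                                     E = Order.end();
       I != E; ++I)
    (void)I->first; // visited in insertion order
}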
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 177999
Move the CortexA9 resources into the CortexA9 SchedModel namespace. Define
resource mappings under the CortexA9 SchedModel. Define resources and mappings
for the SwiftModel.
llvm-svn: 177968
This is very much work in progress. Please send me a note if you start to depend
on the added abstract read/write resources. They are subject to change until
further notice.
The old itinerary is still the default.
llvm-svn: 177967
- It's still considered aligned when the specified alignment is larger
than the natural alignment;
- The new alignment for the high 128-bit vector should be min(16,
alignment) as the pointer is advanced by 16, a power-of-2 offset.
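In code, the second point is simply (a one-line illustration):
#include <algorithm>
// Alignment provable for (ptr + 16), given ptr's alignment; both are
// powers of 2, so min() is correct.
unsigned alignOfAdvancedPtr(unsigned OrigAlign) {
  return std::min(16u, OrigAlign);
}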
llvm-svn: 177947
All the instructions tagged with IIC_DEFAULT had nothing in common, and
we already have a NoItineraries class to represent untagged
instructions.
llvm-svn: 177937
This commit updates the PowerPC back-end (PPCInstrInfo.td and
PPCInstr64Bit.td) to use types instead of register classes in
instruction patterns, along the lines of Jakob Stoklund Olesen's
changes in r177835 for Sparc.
llvm-svn: 177890
This commit updates the PowerPC back-end (PPCInstrInfo.td and
PPCInstr64Bit.td) to use types instead of register classes in
Pat patterns, along the lines of Jakob Stoklund Olesen's
changes in r177829 for Sparc.
llvm-svn: 177889
sure the base register and would-be writeback register don't conflict for
stores. This was already being done for loads.
Unfortunately, it is rather difficult to create a test case for this issue. It
was exposed in 450.soplex at LTO and requires unlucky register allocation.
<rdar://13394908>
llvm-svn: 177874
In order for the new ZERO register to be used with MC, etc. we need to specify
its register number (0).
Thanks to Kai for reporting the problem!
llvm-svn: 177833
In preparation for using the new register scavenger capability for providing
more than one register simultaneously, specifically note functions that have
spilled VRSAVE (currently, this can happen only in functions that use the
setjmp intrinsic). As with CR spilling, such functions will need to provide two
emergency spill slots to the scavenger.
No functionality change intended.
llvm-svn: 177832
I recently added a BCL instruction definition as part of implementing SjLj
support. This can also be used to MCize bcl emission in the asm printer.
No functionality change intended.
llvm-svn: 177830
The SelectionDAG graph has MVT type labels, not register classes, so
this makes it clearer what is happening.
This notation is also robust against adding more types to the IntRegs
register class.
llvm-svn: 177829
These spilling functions will eventually make use of the register scavenger,
however, they'll do so by taking advantage of PEI's virtual-register-based
delayed scavenging mechanism. As a result, these function parameters will not
be used, and can be removed.
No functionality change intended.
llvm-svn: 177827
The LR register is unconditionally reserved, and its spilling and restoration
is handled by the prologue/epilogue code. As a result, it is never explicitly
spilled by the register allocator.
No functionality change intended.
llvm-svn: 177823
This patch lets the register scavenger make use of multiple spill slots in
order to guarantee that it will be able to provide multiple registers
simultaneously.
To support this, the RS's API has changed slightly: setScavengingFrameIndex /
getScavengingFrameIndex have been replaced by addScavengingFrameIndex /
isScavengingFrameIndex / getScavengingFrameIndices.
In forthcoming commits, the PowerPC backend will use this capability in order
to implement the spilling of condition registers, and some special-purpose
registers, without relying on r0 being reserved. In some cases, spilling these
registers requires two GPRs: one for addressing and one to hold the value being
transferred.
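A minimal sketch of a backend registering two such slots with the new API
(the register class and hook placement are assumptions):
// Sketch: reserve two emergency spill slots and hand them to the
// register scavenger, e.g. from processFunctionBeforeFrameFinalized.
void reserveScavengingSlots(MachineFunction &MF, RegScavenger *RS) {
  MachineFrameInfo *MFI = MF.getFrameInfo();
  const TargetRegisterClass *RC = &PPC::GPRCRegClass;
  for (unsigned i = 0; i != 2; ++i)
    RS->addScavengingFrameIndex(
        MFI->CreateStackObject(RC->getSize(), RC->getAlignment(), false));
}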
llvm-svn: 177774
We currently have a duplicated set of call instruction patterns depending
on the ABI to be followed (Darwin vs. Linux). This is a bit odd; while the
different ABIs will result in different instruction sequences, the actual
instructions themselves ought to be independent of the ABI. And in fact it
turns out that the only nontrivial difference between the two sets of
patterns is that in the PPC64 Linux ABI, the instruction used for indirect
calls is marked to take X11 as extra input register (which is indeed used
only with that ABI to hold an incoming environment pointer for nested
functions). However, this does not need to be hard-coded at the .td
pattern level; instead, the C++ code expanding calls can simply add that
use, just like it adds uses for argument registers anyway.
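Roughly (a sketch; the helper and variable names are assumptions):
// Sketch: when expanding an indirect call under the 64-bit SVR4 ABI,
// add the X11 use in C++, just like the argument-register uses.
static void addIndirectCallUses(SelectionDAG &DAG, EVT PtrVT,
                                SmallVectorImpl<SDValue> &Ops) {
  Ops.push_back(DAG.getRegister(PPC::X11, PtrVT));
}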
No change in generated code expected.
llvm-svn: 177735
Currently, the sub-operand of a memrr address that corresponds to what
hardware considers the base register is called "offreg", while the
sub-operand that corresponds to the offset is called "ptrreg".
To avoid confusion, this patch simply swaps the names of those two
sub-operands and updates all uses. No functional change is intended.
llvm-svn: 177734
PPCTargetLowering::getPreIndexedAddressParts currently provides
the base part of a memory address in the offset result, and the
offset part in the base result. That swap is then undone again
when an MI instruction is generated (in PPCDAGToDAGISel::Select
for loads, and using .td Pat patterns for stores).
This patch reverts this double swap, to make common code and
back-end be in sync as to which part of the address is base
and which is offset.
To avoid performance regressions in certain cases, target code
now checks whether the choice of base register would be rejected
for pre-inc accesses by common code, and attempts to swap base
and offset again in such cases. (Overall, this means that now
pre-inc accesses are generated *more* frequently than before.)
llvm-svn: 177733
The iaddroff ComplexPattern is supposed to recognize displacement
expressions that have been processed by a SelectAddressRegImm,
which means it needs to accept TargetConstant and TargetGlobalAddress
nodes. Currently, it erroneously also accepts some other nodes,
in particular Constant and PPCISD::Lo.
While this problem is currently latent, it would cause wrong-code
bugs with a follow-on patch I'm about to commit, so this patch
tightens the ComplexPattern. The equivalent change is made in
PPCDAGToDAGISel::Select, where pre-inc load patterns are handled
(as opposed to store patterns, the loads are handled in C++ code
without making use of the .td ComplexPattern).
llvm-svn: 177732
The xaddroff pattern is currently (mistakenly) used to recognize
the *base* register in pre-inc store patterns. This patch replaces
those uses by ptr_rc_nor0 (as is elsewhere done to match the base
register of an address), and removes the now unused ComplexPattern.
llvm-svn: 177731
Fixes wrong lighting in some corner cases with r600g and radeonsi, e.g.
manifested by failure of two piglit/glean tests and intermittent black
patches in many apps.
Tested on SI and RS880.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=62012 [radeonsi]
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58150 [r600g]
NOTE: This is a candidate for the Mesa stable branch.
Reviewed-by: Christian König <christian.koenig@amd.com>
llvm-svn: 177730
For a Mips branch, an 18-bit signed offset (the 16-bit
offset field shifted left 2 bits) is added to the
address of the instruction following the branch
(not the branch itself, i.e., the instruction in the
branch delay slot) to form a PC-relative effective
target address.
Previously, the code generator did not perform the
shift of the immediate branch offset, which resulted
in a wrong instruction opcode. This patch fixes the issue.
Contributor: Vladimir Medic
llvm-svn: 177687
This patch uses the generated instruction info tables to
identify memory load/store instructions.
After successful matching, and based on the operand type
and size, it generates additional instructions to the output.
llvm-svn: 177685
As Jakob pointed out in his review of r177423, having a shared ZERO
register between the 32- and 64-bit register classes causes this
odd G8RC_NOX0_and_GPRC_NOR0 class to be created. As recommended,
this adds a ZERO8 register which differentiates the 32- and 64-bit
zeros.
No functionality change intended.
llvm-svn: 177683
Thanks to Jakob for isolating the underlying problem from the
test case in r177423. The original commit had introduced
asymmetric copy operations, but these turned out to be a work-around
to the real problem (the use of == instead of hasSubClassEq in PPCCTRLoops).
llvm-svn: 177679
The .set directive in the Mips assembler can be
used to set the value of a symbol to an expression.
This changes the symbol's value and type to conform
to the expression's.
Syntax: .set symbol, expression
This patch implements the parsing of the above syntax
and enables the parser to use defined symbols when
parsing operands.
Contributor: Vladimir Medic
llvm-svn: 177667
This implements SJLJ lowering on PPC, making the Clang functions
__builtin_{setjmp/longjmp} functional on PPC platforms. The implementation
strategy is similar to that on X86, with the exception that a branch-and-link
variant is used to get the right jump address. Credit goes to Bill Schmidt for
suggesting the use of the unconditional bcl form (instead of the regular bl
instruction) to limit return-address-cache pollution.
Benchmarking the speed at -O3 of:
static jmp_buf env_sigill;
void foo() {
  __builtin_longjmp(env_sigill, 1);
}
int main() {
  ...
  for (int i = 0; i < c; ++i) {
    if (__builtin_setjmp(env_sigill)) {
      goto done;
    } else {
      foo();
    }
    done:;
  }
  ...
}
vs. the same code using the libc setjmp/longjmp functions on a P7 shows that
this builtin implementation is ~4x faster with Altivec enabled and ~7.25x
faster with Altivec disabled. This comparison is somewhat unfair because the
libc version must also save/restore the VSX registers which we don't yet
support.
llvm-svn: 177666
Although there is only one Altivec VRSAVE register, it is a member of
a register class, and we need the ability to spill it. Because this
register is normally callee-preserved and handled by special code this
has never before been necessary. However, this capability will be required by
a forthcoming commit adding SjLj support.
llvm-svn: 177654
The old code used to lower FRAMEADDR tried to replicate the logic in the real
frame-lowering code that determines whether or not the frame pointer (r31) will
be used. When it seemed as though the frame pointer would not be used, the
stack pointer (r1) was used instead. Unfortunately, because the stack size is
not yet known, this does not work. Instead, this change introduces new
always-reserved pseudo-registers (FP and FP8) that are replaced during prologue
insertion with the real frame-pointer register (either r1 or r31).
It is important that this intrinsic always return a valid frame address because
it is used by Clang to store the frame address as part of code generation for
__builtin_setjmp.
llvm-svn: 177653
NEON is not IEEE 754 compliant, so we should avoid lowering single-precision
floating point operations with NEON unless unsafe-math is turned on. The
equivalent VFP instructions are IEEE 754 compliant, but in some cores they're
much slower, so some archs/OSs might still request it to be on by default,
such as Swift and Darwin.
llvm-svn: 177651