new VariantKind to the MCSymbolExpr seems like overkill, but I'm not sure
there's a more straightforward way to get the printing difference captured.
(i.e., x86 uses @PLT, ARM uses (PLT)).
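As an illustration, a minimal C++ sketch of the approach; the names here
follow the current MCSymbolRefExpr API, which may not match the 2010 one
exactly:

  #include "llvm/MC/MCContext.h"
  #include "llvm/MC/MCExpr.h"
  using namespace llvm;

  // Attach a PLT variant to a symbol reference; the target asm printer
  // decides how to render it (x86: "foo@PLT", ARM: "foo(PLT)").
  const MCExpr *makePLTRef(MCSymbol *Sym, MCContext &Ctx) {
    return MCSymbolRefExpr::create(Sym, MCSymbolRefExpr::VK_PLT, Ctx);
  }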
llvm-svn: 114613
CombineTo to avoid putting the result on the worklist. I don't think it makes
much difference for now, but it might help someday as we add more DAG
combine optimizations.
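For the curious, a hedged sketch (a hypothetical combine, not the actual
patch) of the mechanism: DAGCombinerInfo::CombineTo takes an AddTo flag
that controls whether the replacement goes back on the worklist.

  #include "llvm/CodeGen/SelectionDAG.h"
  #include "llvm/CodeGen/TargetLowering.h"
  using namespace llvm;

  // Returning the replacement through CombineTo with AddTo=false keeps
  // it off the DAG combiner's worklist; a plain 'return NewVal;' from a
  // target combine would put it back on.
  SDValue combineSketch(SDNode *N, TargetLowering::DAGCombinerInfo &DCI,
                        SDValue NewVal) {
    return DCI.CombineTo(N, NewVal, /*AddTo=*/false);
  }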
llvm-svn: 114595
of those. Refactor to share code for handling BUILD_VECTOR(VMOVRRD).
I don't have a testcase that exercises this, but it seems like an obvious
good thing to do.
llvm-svn: 114589
value should be in GPRs when it's going to be used as a scalar, and we use
VMOVRRD to make that happen, but if the value is converted back to a vector
we need to fold to a simple bit_convert. Radar 8407927.
llvm-svn: 114233
take multiple cycles to decode.
For the current if-converter clients (actually only ARM), the instructions that
are predicated on false are not nops. They would still take machine cycles to
decode. Micro-coded instructions such as LDM / STM can potentially take multiple
cycles to decode. The if-converter should treat them as non-micro-coded,
simple instructions.
llvm-svn: 113570
vabd intrinsic and add and/or zext operations. In the case of vaba, this
also avoids the need for a DAG combine pattern to combine vabd with add.
Update tests. Auto-upgrade the old intrinsics.
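In source terms the equivalence looks roughly like this (a sketch with
public arm_neon.h intrinsics, not the IR-level auto-upgrade itself):

  #include <arm_neon.h>

  // vabal (absolute difference and accumulate, long) decomposes into
  // vabd + zero-extend + add; equivalent to vabal_u8(acc, a, b).
  uint16x8_t vabal_expanded(uint16x8_t acc, uint8x8_t a, uint8x8_t b) {
    return vaddq_u16(acc, vmovl_u8(vabd_u8(a, b)));
  }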
llvm-svn: 112941
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests. Add auto-upgrade support for the old intrinsics.
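Analogous to the vabal case above: a widening add is just an add of
extended operands (again a sketch with arm_neon.h intrinsics):

  #include <arm_neon.h>

  // vaddl (widening add) == add of zero-extended operands;
  // equivalent to vaddl_u8(a, b).
  uint16x8_t vaddl_expanded(uint8x8_t a, uint8x8_t b) {
    return vaddq_u16(vmovl_u8(a), vmovl_u8(b));
  }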
llvm-svn: 112773
comparison that would overflow.
- The other under/overflow cases can't actually happen because the immediates
which would trigger them are legal (so we don't enter this code), but I
adjusted the style to make it clear the transform is always valid.
llvm-svn: 112053
float t1(int argc) {
return (argc == 1123) ? 1.234f : 2.38213f;
}
We would generate truly awful code on ARM (those with a weak stomach should look
away):
_t1:
movw r1, #1123
movs r2, #1
movs r3, #0
cmp r0, r1
mov.w r0, #0
it eq
moveq r0, r2
movs r1, #4
cmp r0, #0
it ne
movne r3, r1
adr r0, #LCPI1_0
ldr r0, [r0, r3]
bx lr
The problem was that legalization was creating a cascade of SELECT_CC nodes
for the comparison "argc == 1123"; that fed into a SELECT node for the ?:
expression, which was itself converted to a SELECT_CC node. This is because the
ARM back-end doesn't have custom lowering for SELECT nodes, so it used the
default "Expand".
I added a fairly simple "LowerSELECT" to the ARM back-end. It takes care of this
testcase, but can obviously be expanded to include more cases.
Now we generate this, which looks optimal to me:
_t1:
movw r1, #1123
movs r2, #0
cmp r0, r1
adr r0, #LCPI0_0
it eq
moveq r2, #4
ldr r0, [r0, r2]
bx lr
.align 2
LCPI0_0:
.long 1075344593 @ float 2.382130e+00
.long 1067316150 @ float 1.234000e+00
llvm-svn: 110799
memory and synchronization barrier dmb and dsb instructions.
- Change instruction names to something more sensible (matching name of actual
instructions).
- Added tests for memory barrier codegen.
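For reference, the sort of source that exercises this codegen (a sketch;
__sync_synchronize is the GCC-style builtin, not something added by this
patch):

  // A full memory barrier; on ARMv7 this can be lowered to "dmb".
  void full_barrier() {
    __sync_synchronize();
  }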
llvm-svn: 110785
Also added a test case to check for the added benefit of this patch: it's optimizing away the unnecessary restore of sp from fp for some non-leaf functions.
llvm-svn: 110707
reserved, not available for general allocation. This eliminates all the
extra checks for Darwin.
This change also fixes the use of FP to access frame indices in leaf
functions and cleans up some confusing code in epilogue emission.
llvm-svn: 110655
Add support for using the FPSCR in conjunction with the vcvtr instruction, for controlling fp to int rounding.
Add support for the FLT_ROUNDS_ node now that the FPSCR is exposed.
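At the C level, FLT_ROUNDS is what the FLT_ROUNDS_ node implements; a
minimal sketch:

  #include <cfloat>

  // FLT_ROUNDS reports the current rounding mode; with the FPSCR
  // exposed, ARM can lower this to a read of that register.
  // 0 = toward zero, 1 = to nearest, 2 = toward +inf, 3 = toward -inf.
  int current_rounding_mode() {
    return FLT_ROUNDS;
  }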
llvm-svn: 110152
integers with mov + vdup. Radar 8003375. This is
currently disabled by default because LICM will
not hoist a VDUP, so it pessimizes the code if
the construct occurs inside a loop (Radar 8248029).
llvm-svn: 109799
it's too late to start backing off aggressive latency scheduling when most
of the registers are in use, so the threshold should be a bit tighter.
- Correctly handle live-outs, extract_subreg, etc.
- Enable register pressure aware scheduling by default for hybrid scheduler.
For ARM, this is almost always a win on # of instructions. It's runtime
neutral for most of the tests. But for some kernels with high register
pressure it can be a huge win. e.g. 464.h264ref reduced number of spills by
54 and sped up by 20%.
llvm-svn: 109279
it should set the jump table encoding to EK_Inline. This prevents
a second, unused copy of the table from being emitted after the function
body. PR6581.
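A hedged sketch of what a target does here, assuming the present-day
MachineJumpTableInfo API:

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/MachineJumpTableInfo.h"
  using namespace llvm;

  // EK_Inline tells the asm printer the table bytes are already emitted
  // inline with the code, so it must not emit a separate copy.
  void markJumpTablesInline(MachineFunction &MF) {
    MF.getOrCreateJumpTableInfo(MachineJumpTableInfo::EK_Inline);
  }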
llvm-svn: 108730
it should set the jump table encoding to EK_Inline. This prevents
a second, unused copy of the table from being emitted after the function
body. PR7499.
llvm-svn: 108722
-enable-no-nans-fp-math and -enable-no-infs-fp-math. All of the current codegen FP math optimizations only care whether the arguments and results of FP arithmetic can never be NaN.
llvm-svn: 108465
instructions already have implicit defs of LR. The comment suggests that
this is intended to fix something like pr6111, but it doesn't really do
that either.
llvm-svn: 108186
correct alignment information, which simplifies ExpandRes_VAARG a bit.
The patch introduces a new piece of alignment information in TargetLoweringInfo. This is
needed since the two natural candidates cannot be used:
* The 's' in target data: If this is set to the minimal alignment of any
argument, getCallFrameTypeAlignment would return 4 for doubles on ARM for
example.
* The getTransientStackAlignment method. It is possible for an architecture to
have arguments that are less aligned than the alignment we maintain for the stack pointer.
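A sketch of the intended use; the setter name below is my assumption
about the hook this patch introduced, not a confirmed signature:

  // In an ARM-like target's TargetLowering constructor: stack arguments
  // are only guaranteed 4-byte alignment, even for 8-byte types such as
  // double, and ExpandRes_VAARG can query this instead of the target
  // data 's' field or the transient stack alignment.
  setMinStackArgumentAlignment(4);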
llvm-svn: 108072
1. The arguments are f32.
2. The arguments are loads and they have no uses other than the comparison.
3. The comparison code is EQ or NE.
e.g.
vldr.32 s0, [r1]
vldr.32 s1, [r0]
vcmpe.f32 s1, s0
vmrs apsr_nzcv, fpscr
beq LBB0_2
=>
ldr r1, [r1]
ldr r0, [r0]
cmp r0, r1
beq LBB0_2
More complicated cases will be implemented in subsequent patches.
llvm-svn: 107852
Add explicit testcases for tail calls within the same module.
Duplicate some code to humor those who think .w doesn't apply on ARM.
Leave this disabled on Thumb1, and add some comments explaining why it's hard
and won't gain much.
llvm-svn: 107851
getFunctionAlignment and the corresponding use of that value in the ARM
asm printer, but now we're using the standard asm printer. The result of
this was that function alignments were dropped completely for Thumb functions.
Radar 8143571.
llvm-svn: 107435
for an "i" constraint should get lowered; PR 6309. While
this argument was passed around a lot, this is the only
place it was used, so it goes away from a lot of other
places.
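For context, the kind of inline asm that feeds an "i" constraint through
this path (an illustrative example, not taken from the PR):

  // An "i" constraint requires a compile-time constant; the %c0 modifier
  // prints it without target punctuation such as '#' or '$'.
  void emit_constant_word() {
    asm volatile(".word %c0" : : "i"(1234));
  }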
llvm-svn: 106893
branch turns out to be ARM-to-Thumb or vice versa,
the linker cannot resolve this. Radar 8120438.
If this optimization is going to be useful we probably
need a compiler flag "assume callees are same architecture"
or something like that.
llvm-svn: 106662
basic tests.
This has been well tested on Darwin but not elsewhere.
It should work provided the linker correctly resolves
B.W <label in other function>
which it has not seen before, at least from llvm-based
compilers. I'm leaving the arm-tail-calls switch in
until I see if there's any problems because of that;
it might need to be disabled for some environments.
llvm-svn: 106299
call must not be callee-saved; following x86, add a new
regclass to represent this. Also fixes a couple of bugs.
Still disabled by default; Thumb doesn't work yet.
llvm-svn: 106053
immediate" operands. These functions have so far only been used for VMOV
but they also apply to other NEON instructions with modified immediate
operands. No functional changes.
llvm-svn: 105969
i64 and f64 types, but now it also handles Neon vector types, so the f64 result
of VMOVDRR may need to be converted to a Neon type. Radar 8084742.
llvm-svn: 105845
the machine instruction representation of the immediate value to be encoded
into an integer with similar fields as the actual VMOV instruction. This makes
things easier for the disassembler, since it can just stuff the bits into the
immediate operand, but harder for the asm printer since it has to decode the
value to be printed. Testcase for the encoding will follow later when MC has
more support for ARM.
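Roughly, the packed operand mirrors the instruction's op:cmode:imm8
fields; a hedged sketch of the packing and the decode the printer now
has to do (field layout per the NEON modified-immediate format, not the
exact LLVM helpers):

  #include <cstdint>

  // Pack the 5 op:cmode bits with the 8-bit immediate, as the VMOV
  // instruction encoding lays them out.
  uint32_t encodeNEONModImm(uint32_t OpCmode, uint32_t Imm8) {
    return (OpCmode << 8) | (Imm8 & 0xff);
  }

  // The asm printer must undo this to recover the value it prints.
  uint32_t decodeImm8(uint32_t Packed)    { return Packed & 0xff; }
  uint32_t decodeOpCmode(uint32_t Packed) { return (Packed >> 8) & 0x1f; }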
llvm-svn: 105836
- change isShuffleMaskLegal to show that all shuffles with 32-bit and 64-bit
elements are legal
- the Neon shuffle instructions do not support 64-bit elements, but we were
not checking for that before lowering shuffles to use them
- remove some 64-bit element vduplane patterns that are no longer needed
llvm-svn: 105586
VECTOR_SHUFFLEs to REG_SEQUENCE instructions. The standard ISD::BUILD_VECTOR
node corresponds closely to REG_SEQUENCE but I couldn't use it here because
its operands do not get legalized. That is pretty awful, but I guess it
makes sense for other targets. Instead, I have added an ARM-specific version
of BUILD_VECTOR that will have its operands properly legalized.
This fixes the rest of Radar 7872877.
llvm-svn: 105439
A temporary flag -arm-tail-calls defaults to off,
so there is no functional change by default.
Intrepid users may try this; simple cases work
but there are bugs.
llvm-svn: 105413
copying VFP subregs. This exposed a bunch of dead code in the *spill-q.ll
tests, so I tweaked those tests to keep that code from being optimized away.
Radar 7872877.
llvm-svn: 104415
Move EmitTargetCodeForMemcpy, EmitTargetCodeForMemset, and
EmitTargetCodeForMemmove out of TargetLowering and into
SelectionDAGInfo to exercise this.
llvm-svn: 103481
const_casts, and it reinforces the design of the Target classes as
immutable.
SelectionDAGISel::IsLegalToFold is now a static member function, because
PIC16 uses it in an unconventional way. There is more room for API
cleanup here.
And PIC16's AsmPrinter no longer uses TargetLowering.
llvm-svn: 101635
may be called when either the source or destination type is i64, and my
change also hadn't fixed the most obvious problem -- assuming that i64 will
only be bitconverted to f64, ignoring the various vector types.
Radar 7873160.
llvm-svn: 101615