We were completely ignoring the unordered/ordered attributes of condition
codes and also incorrectly lowering seto and setuo.
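For reference, the IEEE-754 identities behind the two codes (a minimal
C++ sketch of the semantics only, not the R600 lowering itself):

  #include <cassert>
  #include <cmath>

  // seto  (ordered):   true iff neither operand is NaN.
  // setuo (unordered): true iff at least one operand is NaN.
  // x == x is false exactly when x is NaN, which gives:
  bool seto(float a, float b)  { return (a == a) && (b == b); }
  bool setuo(float a, float b) { return !((a == a) && (b == b)); }

  int main() {
    float n = std::nanf("");
    assert(seto(1.0f, 2.0f) && !setuo(1.0f, 2.0f));
    assert(!seto(n, 2.0f) && setuo(n, 2.0f));
    return 0;
  }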
Reviewed-by: Vincent Lejeune<vljn at ovi.com>
llvm-svn: 191603
SelectionDAG will now attempt to invert an illegal condition code in order
to find a legal one, and if that doesn't work, it will attempt to swap the
operands using the inverted condition.
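The comparison identities this relies on, as a minimal sketch (integer
case shown; this is not the legalization code itself):

  #include <cassert>

  int main() {
    int a = 3, b = 7;
    // A target with only GT/GE/EQ/NE can still get LT/LE:
    assert((a < b)  == (b > a));    // swap operands: SETLT -> SETGT
    assert((a <= b) == (b >= a));   // swap operands: SETLE -> SETGE
    assert((a < b)  == !(a >= b));  // invert condition, negate the result
    return 0;
  }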
There are no new test cases for this, but a number of the existing R600
tests hit this path.
llvm-svn: 191602
This is useful for targets like R600, which only support the GT, GE, NE,
and EQ condition codes, as it removes the need to handle unsupported
condition codes in target-specific code.
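For a hypothetical target, the declaration side might look roughly like
the following in its TargetLowering constructor (a sketch built on the
existing setCondCodeAction hook; the value types and condition codes are
illustrative, not the actual R600 code):

  // Tell the common legalization code which condition codes the hardware
  // lacks so that it swaps operands or inverts the condition instead of
  // the target handling it.
  setCondCodeAction(ISD::SETLT,  MVT::f32, Expand);
  setCondCodeAction(ISD::SETLE,  MVT::f32, Expand);
  setCondCodeAction(ISD::SETULT, MVT::f32, Expand);
  setCondCodeAction(ISD::SETULE, MVT::f32, Expand);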
There are no tests with this commit, but R600 has been updated to take
advantage of this new feature, so its existing selectcc tests are now
testing the swapped operands path.
llvm-svn: 191601
Interpreting the results of this function is not very intuitive, so I
cleaned it up to make it more clear whether or not a SETCC op was
legalized and how it was legalized (either by swapping LHS and RHS or
replacing with AND/OR).
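As a reminder of what the AND/OR replacement produces, a minimal sketch
of the identities involved (plain C++, not the LegalizeSetCCCondCode code
itself):

  #include <cassert>
  #include <cmath>

  int main() {
    double a = std::nan(""), b = 1.0;
    bool uno = (a != a) || (b != b);  // SETUO: at least one operand is NaN
    bool ueq = (a == b) || uno;       // SETUEQ = SETOEQ OR SETUO
    bool one = (a < b) || (a > b);    // SETONE = SETOLT OR SETOGT
    assert(uno && ueq && !one);
    return 0;
  }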
This patch does change functionality in the LHS and RHS swapping case,
but unfortunately there are no in-tree tests for this. However, this
patch is a prerequisite for R600 to take advantage of the LHS and RHS
swapping, so tests will be added in subsequent commits.
llvm-svn: 191600
of loops.
Previously, two consecutive calls to function "func" would result in the
following sequence of instructions:
1. load $16, %got(func)($gp) // load address of lazy-binding stub.
2. move $25, $16
3. jalr $25 // jump to lazy-binding stub.
4. nop
5. move $25, $16
6. jalr $25 // jump to lazy-binding stub again.
With this patch, the second call directly jumps to func's address, bypassing
the lazy-binding resolution routine:
1. load $25, %got(func)($gp) // load address of lazy-binding stub.
2. jalr $25 // jump to lazy-binding stub.
3. nop
4. load $25, %got(func)($gp) // load resolved address of func.
5. jalr $25 // directly jump to func.
llvm-svn: 191591
No functionality change. Future patches will add an analysis which will be
used by other passes (PEI, StackSlot). The end goal is to support ssp-strong
stack layout rules.
WIP.
Differential Revision: http://llvm-reviews.chandlerc.com/D1521
llvm-svn: 191570
Currently foldSelectICmpAndOr asserts if the "or" involves a vector
containing several of the same power of two. We can easily avoid this by
only performing the fold on integer types, like foldSelectICmpAnd does.
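For context, a scalar instance of the kind of pattern this fold targets
(a plain C++ sketch; the real transform operates on the IR-level
select/icmp/and/or form):

  #include <cassert>
  #include <initializer_list>

  // (x & 8) == 0 ? y : (y | 1)   can be rewritten as   y | ((x & 8) >> 3)
  unsigned selectForm(unsigned x, unsigned y) {
    return (x & 8) == 0 ? y : (y | 1);
  }

  int main() {
    for (unsigned x : {0u, 7u, 8u, 9u})
      for (unsigned y : {0u, 1u, 6u})
        assert(selectForm(x, y) == (y | ((x & 8u) >> 3)));
    return 0;
  }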
Fixes <rdar://problem/15012516>
llvm-svn: 191552
Remove the command line argument "struct-path-tbaa" since we should not
depend on a command line argument to decide which format the IR file is
using. Instead, we check the first operand of the tbaa tag node: if it is
an MDNode, we treat it as the struct-path aware TBAA format; otherwise, we
treat it as the scalar TBAA format.
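The check is roughly of this shape (a sketch against the Metadata API,
not the verbatim TypeBasedAliasAnalysis code):

  #include "llvm/IR/Metadata.h"

  // Struct-path aware tags start with another MDNode (the access type
  // node); scalar-format tags start with an MDString holding the name.
  static bool isStructPathTBAATag(const llvm::MDNode *Tag) {
    return Tag->getNumOperands() > 0 &&
           llvm::isa<llvm::MDNode>(Tag->getOperand(0));
  }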
Once clang starts to use the struct-path aware TBAA format regardless of
whether struct-path-tbaa is on, and we can auto-upgrade existing bc files,
the support for the scalar TBAA format can be dropped.
Existing testing cases are updated to use the struct-path aware TBAA format.
llvm-svn: 191538
We were previously using getFirstInsertionPt to insert PHI
instructions when vectorizing, but getFirstInsertionPt also skips past
landingpads, causing this to generate invalid IR.
We can avoid this issue by using getFirstNonPHI instead.
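The distinction, roughly (getFirstInsertionPt/getFirstNonPHI are the
existing BasicBlock APIs; Builder and LoopExitBlock below are placeholder
names, not the vectorizer's actual variables):

  // getFirstInsertionPt(): first place a non-PHI instruction may go; in a
  //                        landing-pad block this is after the landingpad.
  // getFirstNonPHI():      the first non-PHI instruction; in a landing-pad
  //                        block this is the landingpad itself, so PHIs
  //                        created before it stay grouped at the top and
  //                        the block remains valid IR.
  Builder.SetInsertPoint(LoopExitBlock->getFirstNonPHI());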
llvm-svn: 191526
The backend tries to use block operations like MVC, NC, OC and XC for
simple scalar operations. For correctness reasons, it rejects any case
in which the regions might partially overlap. However, for performance
reasons, it should also reject cases where the regions might be equal,
since the instruction might then not use the fast path.
This fixes a performance regression seen in bzip2. We may want to limit
the optimisation even more in future, or even remove it entirely, but I'll
try with this for now.
llvm-svn: 191525
The backend previously folded offsets into PC-relative addresses
wherever possible. That's the right thing to do when the address
can be used directly in a PC-relative memory reference (using things
like LRL). But if we have a register-based memory reference and need
to load the PC-relative address separately, it's better to use an anchor
point that could be shared with other accesses to the same area of the
variable.
Fixes a FIXME.
llvm-svn: 191524
As specified in A8.8.72/A8.8.73/A8.8.74 in the ARM ARM, all variants of the ARM LDRD instruction have the following two constraints:
LDRD<c> <Rt>, <Rt2>, ...
(a) Rt must be even-numbered and not r14
(b) Rt2 must be R(t+1)
If those two constraints are not met the result of executing the instruction will be unpredictable.
Constraint (b) was already enforced; this commit adds support for constraint (a).
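A hypothetical predicate capturing both constraints (illustration only,
not the actual parser/encoder code):

  // Rt must be even and must not be r14 (lr); Rt2 must be Rt + 1.
  static bool isValidARMLoadDualRegPair(unsigned Rt, unsigned Rt2) {
    return (Rt % 2 == 0) && (Rt != 14) && (Rt2 == Rt + 1);
  }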
Fixes rdar://14479793.
llvm-svn: 191520
For v4f32 and v2f64, EXTRACT_VECTOR_ELT is matched by a pseudo-insn which may
be expanded to subregister copies and/or instructions as appropriate.
llvm-svn: 191514
This change fixes the problem reported in pr17380 and re-adds the
dagcombine transformation, ensuring that the value types are always legal
if the transformation is triggered after Legalization has taken place.
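The added guard is roughly of this shape (the variable names here are
assumptions for illustration; the actual DAGCombiner code differs in
detail):

  // Only perform the combine when the produced value type is known to be
  // legal, if we are running after type legalization.
  if (LegalTypes && !TLI.isTypeLegal(VT))
    return SDValue();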
Added the test case from pr17380.
llvm-svn: 191509
This file contains notes about the instruction selection for MSA. For example,
it notes that ilvl.d cannot be selected because ilvev.d covers the same
cases and is selected instead of ilvl.d.
llvm-svn: 191507
LDRD<c> <Rt>, <Rt2>, <label>
LDRD<c> <Rt>, <Rt2>, [<Rn>{, #+/-<imm>}]
LDRD<c> <Rt>, <Rt2>, [<Rn>], #+/-<imm>
LDRD<c> <Rt>, <Rt2>, [<Rn>, #+/-<imm>]!
As specified in A8.8.72/A8.8.73 in the ARM ARM, the T1 encoding has a constraint which enforces that Rt != Rt2.
If this constraint is not met the result of executing the instruction will be unpredictable.
Fixes rdar://14479780.
llvm-svn: 191504
lowerMSABinaryIntr, lowerMSABinaryImmIntr, lowerMSABranchIntr,
and lowerMSAUnaryIntr were trivially small functions. Inlined them into
their callers.
lowerMSASplat now takes its caller's SDLoc instead of making a new one.
No functional change.
llvm-svn: 191503