It gets its own implementation totally divorced from the (presumably
performance-sensitive) routines which parse into a uint64_t.
Add APInt::operator|=(uint64_t), which is situationally much better than
using a full APInt.
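A minimal sketch of the kind of use the new overload enables (hypothetical caller,
not code from this commit):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  // Set the low byte of an arbitrarily wide mask.
  void setLowByte(APInt &Mask) {
    // Before: a temporary APInt of the full bit width had to be built:
    //   Mask |= APInt(Mask.getBitWidth(), 0xFF);
    // Now the small constant can be OR'd in directly:
    Mask |= 0xFF;
  }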
llvm-svn: 97381
payloads. APFloat's internal folding routines always make QNaNs now,
instead of sometimes making QNaNs and sometimes SNaNs depending on the
type.
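A rough illustration of the guarantee, written against the current APFloat API
(the helper below is an assumption for demonstration, not part of this commit):

  #include "llvm/ADT/APFloat.h"
  using namespace llvm;

  // inf + (-inf) has no numeric result, so APFloat folds it to a NaN; the
  // point of this change is that the folded NaN is always quiet.
  bool foldedNaNIsQuiet() {
    APFloat Sum = APFloat::getInf(APFloat::IEEEdouble(), /*Negative=*/false);
    APFloat NegInf = APFloat::getInf(APFloat::IEEEdouble(), /*Negative=*/true);
    Sum.add(NegInf, APFloat::rmNearestTiesToEven);
    return Sum.isNaN() && !Sum.isSignaling();
  }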
llvm-svn: 97364
node is always guaranteed to have a particular type
instead of hacking in ISD::STORE explicitly. This allows
us to use implied types for a broad range of nodes, even
target-specific ones.
llvm-svn: 97355
Extracting the low element of a vector is now done with EXTRACT_SUBREG,
and the zero-extension performed by load movss is now modeled with
SUBREG_TO_REG, and so on.
Register-to-register movss and movsd are no longer considered copies;
they are two-address instructions which insert a scalar into a vector.
llvm-svn: 97354
defs or uses. The regular def and use checking below covers them, and
can be more precise. It's safe to hoist an instruction with a dead
implicit def if the register isn't live into the loop header.
llvm-svn: 97352
but codegen'd differently. This really wanted to use some
sort of subreg to get the low 4 bytes of the G8RC register
or something. However, it's invalid and nothing is testing
it, so I'm just zapping the bogosity.
llvm-svn: 97345
with getType() == MVT::i32 etc. Teach it that two different
integer constants are contradictory. This cuts 1K off the X86
table, down to 98K
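A hypothetical sketch of the idea (illustrative only, not the actual
DAGISelMatcher code): two checks on the same node contradict if they can
never both succeed.

  // A node has exactly one type, and a literal constant has exactly one
  // value, so differing checks of the same kind are mutually exclusive.
  struct TypeCheck    { int VT; };        // e.g. MVT::i32
  struct IntegerCheck { long long Val; }; // a literal operand value

  bool contradicts(const TypeCheck &A, const TypeCheck &B) {
    return A.VT != B.VT;
  }
  bool contradicts(const IntegerCheck &A, const IntegerCheck &B) {
    return A.Val != B.Val;
  }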
llvm-svn: 97314
predicates. For example if we have:
  Scope:
    CheckType i32
      ABC
    CheckType f32
      DEF
    CheckType i32
      GHI
Then we know that we can transform this into:
  Scope:
    CheckType i32
      Scope
        ABC
        GHI
    CheckType f32
      DEF
This reorders the check for the 'GHI' predicate above
the check for the 'DEF' predicate. However, it is safe to do this
in this situation because we know that a node cannot have both an
i32 and an f32 type.
We're now doing more factoring than the old isel did.
llvm-svn: 97312
as deeply into the pattern as we can get away with. In practice, this
means "all the way to the emitter code, but not across
ComplexPatterns". This substantially increases the amount of factoring
we get.
llvm-svn: 97305
confusing the old MAT variable with the new GlobalType one. This caused
us to promote the @disp global pointer into:
@disp.body = internal global double*** undef
instead of:
@disp.body = internal global [3 x double**] undef
llvm-svn: 97285
for alignment into the LSDA. If the TType base offset is emitted, then put the
padding there. Otherwise, put it in the call site table length. There will be no
conflict between the two sites when placing the padding in one place.
llvm-svn: 97277
o Parallel addition and subtraction, signed/unsigned
o Miscellaneous operations: QADD, QDADD, QSUB, QDSUB
o Unsigned sum of absolute differences [and accumulate]: USAD8, USADA8
o Signed/Unsigned saturate: SSAT, SSAT16, USAT, USAT16
o Signed multiply accumulate long (halfwords): SMLAL<x><y>
o Signed multiply accumulate/subtract [long] (dual): SMLAD[X], SMLALD[X], SMLSD[X], SMLSLD[X]
o Signed dual multiply add/subtract [long]: SMUAD[X], SMUSD[X]
llvm-svn: 97276
This is possible because F8RC is a subclass of F4RC. We keep FMRSD around so
fextend has a pattern.
Also allow folding of memory operands on FMRSD.
llvm-svn: 97275