TableGen infers unmodeled side effects on instructions without a
pattern. Fix some instruction definitions where that was overlooked.
Also raise an error if a rematerializable instruction has unmodeled side
effects. That doesn't make any sense.
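For illustration, a definition along the lines of the hypothetical sketch below now triggers the error: with no pattern, unmodeled side effects are inferred, which contradicts the remat flag. The instruction class I, GR32, and the operand types are placeholders; the explicit opt-out flag in this era was neverHasSideEffects (later folded into hasSideEffects).
  // Hypothetical: no pattern, so side effects are inferred -- now an error
  // in combination with isReMaterializable.
  let isReMaterializable = 1 in
  def LOADIMM : I<(outs GR32:$dst), (ins i32imm:$imm), "loadimm $dst, $imm", []>;
  // Fix: state explicitly that the instruction has no side effects.
  let isReMaterializable = 1, neverHasSideEffects = 1 in
  def LOADIMM_FIXED : I<(outs GR32:$dst), (ins i32imm:$imm), "loadimm $dst, $imm", []>;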
llvm-svn: 141929
Make all of the RecTy constructors private, and use get() factory
methods instead. Return singleton instances when it makes sense.
ListTy instance pointers are stored in the element RecTy instance.
BitsRecTy instance pointers, one per length, are stored in a static vector.
Also unique DefInit instances. A Record has a unique DefInit which
has a unique RecordRecTy instance.
This saves some 200k-300k RecTy allocations when parsing ARM.td. It
reduces TableGen's heap usage by almost 50%.
llvm-svn: 135399
Manage Inits in a FoldingSet. This provides several benefits:
- Memory for Inits is properly managed
- Duplicate Inits are folded into Flyweights, saving memory
- It enforces const-correctness, protecting against certain classes
of bugs
The above benefits allow Inits to be used in more contexts, which in
turn provides more dynamism to TableGen. This enhanced capability
will be used by the AVX code generator to fold common patterns
together.
llvm-svn: 134907
kind of predicate: one that is specific to imm nodes. The predicate function
specified here just checks an int64_t directly instead of messing around with
SDNodes. The virtue of this is that fastisel and other things
can reason about these predicates.
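For illustration, in today's TableGen an immediate-specific predicate of this kind can be written with the ImmLeaf construct, where the predicate body sees the value as a plain int64_t named Imm rather than an SDNode (a sketch; the message above does not name the construct, so treat the specifics as illustrative):
  // Sketch: match an immediate that fits in a signed 8-bit field. The
  // predicate checks Imm (an int64_t) directly, so fast-isel can evaluate
  // it without building an SDNode.
  def immSExt8 : ImmLeaf<i32, [{ return isInt<8>(Imm); }]>;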
llvm-svn: 129675
structure and fix some FIXMEs. We now have a TreePredicateFn class
that handles all of the decoding of these things. This is an internal
cleanup that has no impact on the code generated by tblgen.
llvm-svn: 129670
makes type checking for extract_subvector and insert_subvector more
robust and will allow stricter typechecking of more patterns in the
future.
This change handles int and fp as disjoint sets so that it will
enforce integer types to be smaller than the largest integer type and
fp types to be smaller than the largest fp type. There is no attempt
to check type sizes across the int/fp sets.
llvm-svn: 124672
This will be used to check patterns referencing a forthcoming
INSERT_SUBVECTOR SDNode. INSERT_SUBVECTOR in turn is very useful for
matching to VINSERTF128 instructions and complements the already
existing EXTRACT_SUBVECTOR SDNode.
llvm-svn: 124145
This is the beginning of purely symbolic subregister indices, but we need a bit
of jiggling before the explicit numeric indices can be completely removed.
llvm-svn: 104492
and those derived from them. These are obnoxious because
they were written as PatLeaf<(bitconvert)>. Not having an
argument was foiling the addition of better type checking that
operand counts match what each operator requires (in this case,
bitconvert always requires an operand!).
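For illustration only (not necessarily the exact form this change used), a fragment that wraps bitconvert with an explicit operand can be spelled as a PatFrag, giving the operand-count check something to verify:
  // Illustrative: bitconvert written with its required operand.
  def bc_frag : PatFrag<(ops node:$src), (bitconvert node:$src)>;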
llvm-svn: 99759
transforming it into (add (i32 GPR), 4). This allows us to write
type-generic multi-patterns and have tblgen automatically drop the bitconvert
in the case when the types align. This allows us to fold an extra load
in the changed testcase.
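A hypothetical pattern of the kind this enables (GPR and ADDri are placeholder names): when $src is already an i32, tblgen now drops the bitconvert and matches (add (i32 GPR:$src), 4) directly.
  // Hypothetical: written generically with a bitconvert, usable even when
  // the source type already matches the result type.
  def : Pat<(add (i32 (bitconvert GPR:$src)), 4),
            (ADDri GPR:$src, 4)>;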
llvm-svn: 99756
from two places in CodeGenDAGPatterns.cpp, and
use it in DAGISelMatcherGen.cpp instead of using
an incorrect predicate that happened to get lucky
on our current targets.
llvm-svn: 99726
results forward. We can now handle an instruction that
produces one implicit def and one result instead of one or
the other when not at the root of the pattern.
llvm-svn: 99725
to maintain a list of types (one for each result of
the node) instead of a single type. There are liberal
hacks added to emulate the old behavior in various
situations, but they can start dissolving now.
llvm-svn: 98999
like this:
def : Pat<(add ...),
(FOOINST)>;
When FOOINST only has a single implicit def (e.g. to R1), this will be handled
as if it had been written as (set R1, (FOOINST ...)).
llvm-svn: 98897
changing the primary datastructure from being a
"std::vector<unsigned char>" to being a new TypeSet class
that actually has (gasp) invariants!
This changes more things than I remember, but one major
innovation here is that it enforces that named input
values agree in type with their output values.
This also eliminates code that transparently assumes (in
some cases) that SDNodeXForm input/output types are the
same, because this is wrong in many cases.
This also eliminates a bug which caused a lot of ambiguous
patterns to go undetected, where a register class would
sometimes pick the first possible type, causing an
ambiguous pattern to get arbitrary results.
With all the recent target changes, this causes no
functionality change!
llvm-svn: 98534
ordered correctly. Previously it would get in trouble when
two patterns were too similar, giving them nondeterministic ordering.
We force this by using the record ID order as a fallback.
The testsuite diff is due to Alpha patterns being ordered
slightly differently; the change is a semantic noop afaict:
< lda $0,-100($16)
---
> subq $16,100,$0
llvm-svn: 97509
node is always guaranteed to have a particular type
instead of hacking in ISD::STORE explicitly. This allows
us to use implied types for a broad range of nodes, even
target specific ones.
llvm-svn: 97355
input/output patterns have the same type. It turns out that
this triggers all the time because we don't infer types
between these boundaries. Until we do, don't turn this on.
llvm-svn: 96905
inferencing. As far as I can tell, these are equivalent to the existing
MVT::fAny, iAny and vAny types, and having both of them makes it harder
to reason about and modify the type inferencing code.
The specific problem in PR4795 occurs when updating a vAny type to be fAny
or iAny, or vice versa. Both iAny and fAny include vector types -- they
intersect with the set of types represented by vAny. When merging them,
choose fAny/iAny to represent the intersection. This is not perfect, since
fAny/iAny also include scalar types, but it is good enough for TableGen's
type inferencing.
llvm-svn: 80423
- This manifested as non-determinism in the .inc output when two distinct
patterns ended up being equivalent (which is rather rare). That meant the
pattern matching was non-deterministic, which could eventually mean the
code generator selected different instructions depending on the host arch.
- It's probably worth making the DAGISel ensure a total ordering (or force the
user to), but the simple fix here is to totally order the Record* maps based
on a unique ID.
- PR4672, PR4711.
Yay:
--
ddunbar@giles:~$ cat ~/llvm.obj.64/lib/Target/*/*.inc | shasum
d1099ff34b21459a5a3e7021c225c080e6017ece -
ddunbar@giles:~$ cat ~/llvm.obj.ppc/lib/Target/*/*.inc | shasum
d1099ff34b21459a5a3e7021c225c080e6017ece -
--
llvm-svn: 79846
There have been a few times where I've wanted this but ended up leaving the
operand type unconstrained. It is easy to add this now and should help
catch errors in the future.
llvm-svn: 78849
other operators. For the rare cases where a list type cannot be
deduced, provide a []<type> syntax, where <type> is the list element
type.
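For example (a sketch), an empty list whose element type cannot be deduced from context can now be spelled with the element type written explicitly:
  // Sketch: the element type of [] cannot be deduced here, so it is
  // given explicitly with the new syntax.
  def EmptyNames {
    list<string> Names = []<string>;
  }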
llvm-svn: 73078
ADDC/ADDE use MVT::i1 (later, whatever it gets legalized to)
instead of MVT::Flag. Remove CARRY_FALSE in favor of 0; adjust
all target-independent code to use this format.
Most targets will still produce a Flag-setting target-dependent
version when selection is done. X86 is converted to use i32
instead, which means TableGen needs to produce different code
in xxxGenDAGISel.inc. This keys off the new supportsHasI1 bit
in xxxInstrInfo, currently set only for X86; in principle this
is temporary and should go away when all other targets have
been converted. All relevant X86 instruction patterns are
modified to represent setting and using EFLAGS explicitly. The
same can be done on other targets.
The immediate behavior change is that an ADC/ADD pair is no
longer tightly coupled in the X86 scheduler; they can be
separated by instructions that don't clobber the flags (MOV).
I will soon add some peephole optimizations based on using
other instructions that set the flags to feed into ADC.
llvm-svn: 72707
PR2957
ISD::VECTOR_SHUFFLE now stores an array of integers representing the shuffle
mask internal to the node, rather than taking a BUILD_VECTOR of ConstantSDNodes
as the shuffle mask. A value of -1 represents UNDEF.
In addition to eliminating the creation of illegal BUILD_VECTORS just to
represent shuffle masks, we are better about canonicalizing the shuffle mask,
resulting in substantially better code for some classes of shuffles.
llvm-svn: 70225
ISD::VECTOR_SHUFFLE now stores an array of integers representing the shuffle
mask internal to the node, rather than taking a BUILD_VECTOR of ConstantSDNodes
as the shuffle mask. A value of -1 represents UNDEF.
In addition to eliminating the creation of illegal BUILD_VECTORS just to
represent shuffle masks, we are better about canonicalizing the shuffle mask,
resulting in substantially better code for some classes of shuffles.
A clean up of x86 shuffle code, and some canonicalizing in DAGCombiner is next.
llvm-svn: 69952
This will be used to replace things like X86's MOV32to32_.
Enhance ScheduleDAGSDNodesEmit to be more flexible and robust
in the presence of subregister superclasses and subclasses. It
can now cope with the definition of a virtual register being in
a subclass of a use.
Re-introduce the code for recording register superreg classes and
subreg classes. This is needed because when subreg extracts and
inserts get coalesced away, the virtual registers are left in
the correct subclass.
llvm-svn: 68961
in SelectionDAG patterns. This is required for the upcoming shuffle_vector rewrite,
and as it turns out, cleans up a hack in the Alpha instruction info.
llvm-svn: 67286
errors when thrown. This gets us nice errors like this from tblgen:
CMOVL32rr: (set GR32:i32:$dst, (X86cmov GR32:$src1, GR32:$src2))
/Users/sabre/llvm/Debug/bin/tblgen: error:
Included from X86.td:116:
Parsing X86InstrInfo.td:922: In CMOVL32rr: X86cmov node requires exactly 4 operands!
def CMOVL32rr : I<0x4C, MRMSrcReg, // if <s, GR32 = GR32
^
instead of just:
CMOVL32rr: (set GR32:i32:$dst, (X86cmov GR32:$src1, GR32:$src2))
/Users/sabre/llvm/Debug/bin/tblgen: In CMOVL32rr: X86cmov node requires exactly 4 operands!
This is all I plan to do with this, but it should be easy enough to improve if anyone
cares (e.g. keeping more location info in "dag" expr records in tblgen).
llvm-svn: 66898
target directories themselves. This also means that VMCore no longer
needs to know about every target's list of intrinsics. Future work
will include converting the PowerPC target to this interface as an
example implementation.
llvm-svn: 63765
crashes or wrong code with codegen of large integers:
eliminate the legacy getIntegerVTBitMask and
getIntegerVTSignBit methods, which returned their
value as a uint64_t, so couldn't handle huge types.
llvm-svn: 63494
foldMemoryOperand how to "fold" them, by converting them into constant-pool
loads. When they aren't folded, they use xorps/cmpeqd, but for example when
register pressure is high, they may now be folded as memory operands, which
reduces register pressure.
Also, mark V_SET0 isAsCheapAsAMove so that two-address-elimination will
remat it instead of copying zeros around (V_SETALLONES was already marked).
llvm-svn: 60461
is set but mayLoad is not set. Fix all the problems this turned up.
Change code to use mayLoad instead of isSimpleLoad unless it
really wants isSimpleLoad.
llvm-svn: 60459
"parameter" types. An intrinsic can now return a multiple return values like
this:
def add_with_overflow : Intrinsic<[llvm_i32_ty, llvm_i1_ty],
[LLVMMatchType<0>, LLVMMatchType<0>]>;
llvm-svn: 59237
This will allow predicates to be composed, which will allow the
predicate definitions to become less redundant, and eventually
will allow DAGISelEmitter.cpp to emit less redundant code.
llvm-svn: 57562
and use it in FastISelEmitter.cpp, and make FastISel
subtarget aware. Among other things, this lets it work
properly on x86 targets that don't have SSE, where it
successfully selects x87 instructions.
llvm-svn: 55156
to different address spaces. This alters the naming scheme for those
intrinsics, e.g., atomic.load.add.i32 => atomic.load.add.i32.p0i32
llvm-svn: 54195
and fix the bug that it uncovers: inlining a pattern fragment could bring
in other pattern fragments if the inlinee hadn't already been inlined.
llvm-svn: 52888
Added abstract class MemSDNode for any node that has an associated MemOperand
Changed atomic.lcs => atomic.cmp.swap, atomic.las => atomic.load.add, and
atomic.lss => atomic.load.sub
llvm-svn: 52706
and better control the abstraction. Rename the type
to MVT. To update out-of-tree patches, the main
thing to do is to rename MVT::ValueType to MVT, and
rewrite expressions like MVT::getSizeInBits(VT) in
the form VT.getSizeInBits(). Use VT.getSimpleVT()
to extract a MVT::SimpleValueType for use in switch
statements (you will get an assert failure if VT is
an extended value type - these shouldn't exist after
type legalization).
This results in a small speedup of codegen and no
new testsuite failures (x86-64 linux).
llvm-svn: 52044
index for the input pattern in terms of the output pattern. Instead
keep track of how many fixed operands the input pattern actually
has, and have the input matching code pass the output-emitting
function that index value. This simplifies the code, disentangles
variable_ops from the support for predication operations, and
makes variable_ops more robust.
llvm-svn: 51808
definitions. This adds a new construct, "discard", for indicating
that a named node in the input matching pattern is to be discarded,
instead of corresponding to a node in the output pattern. This
allows tblgen to know where the arguments for the variable_ops are
supposed to begin.
This fixes "rdar://5791600", whatever that is ;-).
llvm-svn: 51699
CodeGenDAGPatterns, where it can be used in other tablegen backends.
This allows the inference to be done for DAGISelEmitter so that it
gets accurate mayLoad/mayStore/isSimpleLoad flags.
This brings MemOperand functionality back to where it was before
48329. However, it doesn't solve the problem of anonymous patterns
which expand to code that does loads or stores.
llvm-svn: 49123
were being pruned in patterns where a variable was used more than once, e.g.:
(or (and R32C:$rA, R32C:$rC), (and R32C:$rB, (not R32C:$rC)))
In this example, $rC is used more than once and is actually significant to
instruction selection pattern matching when commuted variants are produced.
This patch scans the pattern's clauses and collects the variables, creating
a set of variables that are used more than once. TreePatternNode::isIsomorphicTo()
also understands that multiply-used variables are significant.
llvm-svn: 47950
tblgen will complain if a sign-extended constant does not fit into a
data type smaller than i32, e.g., i16. This causes a problem when certain
hex constants are used, such as 0xff for byte masks or immediate xor
values.
tblgen now tries the sign-extended value first and, if the sign-extended
value would overflow, checks whether the unsigned value will fit.
Consequently, a software developer can now safely incant:
(XORHIr16 R16C:$rA, 0xffff)
which is somewhat clearer and more informative than incanting:
(XORHIr16 R16C:$rA, (i16 -1))
even if the two are bitwise equivalent.
Tblgen also outputs the 64-bit unsigned constant in the generated ISel code
when getTargetConstant() is invoked.
llvm-svn: 47188
instead of "ISD::STORE". This allows us to mark target-specific dag
nodes as storing (such as ppc byteswap stores). This allows us to remove
more explicit isStore flags from the .td files.
Finally, add a warning for when a .td file contains an explicit
isStore and tblgen is able to infer it.
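For illustration, in today's trees a target-specific node advertises this through SDNode properties, and tblgen infers the store flag from them; the PPC byte-swapping store is the canonical example (treat the exact spelling below as a sketch, not as what this particular commit added):
  // Sketch modeled on PPCInstrInfo.td: a target node marked as a store, so
  // instructions selected from patterns using it get their store flag
  // inferred instead of needing an explicit flag in the .td file.
  def PPCstbrx : SDNode<"PPCISD::STBRX", SDT_PPCstbrx,
                        [SDNPHasChain, SDNPMayStore]>;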
llvm-svn: 45654