In file included from X86InstrInfo.cpp:16:
X86GenInstrInfo.inc:2789: error: integer constant is too large for 'long' type
X86GenInstrInfo.inc:2790: error: integer constant is too large for 'long' type
X86GenInstrInfo.inc:2792: error: integer constant is too large for 'long' type
X86GenInstrInfo.inc:2793: error: integer constant is too large for 'long' type
X86GenInstrInfo.inc:2808: error: integer constant is too large for 'long' type
X86GenInstrInfo.inc:2809: error: integer constant is too large for 'long' type
X86GenInstrInfo.inc:2816: error: integer constant is too large for 'long' type
X86GenInstrInfo.inc:2817: error: integer constant is too large for 'long' type
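For context, and only as an assumption on my part (the commit records just the failing build): this is the diagnostic older GCCs produce when a 64-bit integer literal is emitted without a suffix on a host whose 'long' is 32 bits. A minimal C++ illustration with a made-up constant:
  // Hypothetical line from a generated .inc file. Older 32-bit hosts rejected
  // the unsuffixed form with the error above:
  //   unsigned long long Bits = 0x123456789ABCDEF0;
  // An explicit ULL suffix makes the constant's type unambiguous everywhere.
  unsigned long long Bits = 0x123456789ABCDEF0ULL;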
llvm-svn: 105524
A temporary flag -arm-tail-calls defaults to off,
so there is no functional change by default.
Intrepid users may try this; simple cases work
but there are bugs.
llvm-svn: 105413
OSX users: make sure that CrashReporter is disabled when running unit tests.
Death tests are enabled now so you'll get a ton of message boxes.
llvm-svn: 105352
The StmtNodes generator has been generalized to allow for the
creation of DeclNodes tables as well, and another emitter was
added for DeclContexts.
llvm-svn: 105164
of the intrinsics. The goal is to auto-generate support for both GCC-style (vector)
and ARM-style (struct of vector) intrinsics.
This is work in progress, but will be completed soon.
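As a rough illustration of the two styles (using plain compiler vector extensions, not the generated arm_neon.h; the struct is written out by hand but matches the well-known NEON type):
  // GCC-style: operands and results are bare vector types.
  typedef unsigned char uint8x8_t __attribute__((vector_size(8)));

  // ARM-style: multi-register results are structs of vectors, as returned by
  // intrinsics such as vld2_u8.
  struct uint8x8x2_t {
    uint8x8_t val[2];
  };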
llvm-svn: 104910
A Register with subregisters must also provide SubRegIndices for addressing the
subregisters. TableGen automatically inherits indices for sub-subregisters to
minimize typing.
CompositeIndices may be specified for the weirder cases such as the XMM sub_sd
index that returns the same register, and ARM NEON Q registers where both D
subregs have ssub_0 and ssub_1 sub-subregs.
It is now required that all subregisters are named by an index, and a future
patch will also require inherited subregisters to be named. This is necessary to
allow composite subregister indices to be reduced to a single index.
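As a small C++ illustration of what fully named indices buy (this uses the generic TargetRegisterInfo interface as it exists today, not code touched by this commit, and assumes both indices are actually defined on the register):
  #include "llvm/CodeGen/TargetRegisterInfo.h"
  #include <cassert>
  using namespace llvm;

  // A sub-sub-register can be reached by two getSubReg hops or by a single
  // composed index; with every sub-register named, the answers must agree.
  static MCRegister getSubSubReg(const TargetRegisterInfo &TRI, MCRegister Reg,
                                 unsigned OuterIdx, unsigned InnerIdx) {
    MCRegister TwoHops = TRI.getSubReg(TRI.getSubReg(Reg, OuterIdx), InnerIdx);
    MCRegister OneHop =
        TRI.getSubReg(Reg, TRI.composeSubRegIndices(OuterIdx, InnerIdx));
    assert(TwoHops == OneHop && "composed index should name the same register");
    return OneHop;
  }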
llvm-svn: 104704
A Register with subregisters must also provide SubRegIndices for addressing the
subregisters. TableGen automatically inherits indices for sub-subregisters to
minimize typing.
CompositeIndices may be specified for the weirder cases such as the XMM sub_sd
index that returns the same register, and ARM NEON Q registers where both D
subregs have ssub_0 and ssub_1 sub-subregs.
It is now required that all subregisters are named by an index, and a future
patch will also require inherited subregisters to be named. This is necessary to
allow composite subregister indices to be reduced to a single index.
llvm-svn: 104654
This passes lit tests, but I'll give it a go through the buildbots to smoke out
any remaining places that depend on the old SubRegIndex numbering.
Then I'll remove NumberHack entirely.
llvm-svn: 104615
structure that represents a mapping without any dependencies on SubRegIndex
numbering.
This brings us closer to being able to remove the explicit SubRegIndex
numbering, and it is now possible to specify any mapping without inventing
*_INVALID register classes.
llvm-svn: 104563
This is the beginning of purely symbolic subregister indices, but we need a bit
of jiggling before the explicit numeric indices can be completely removed.
llvm-svn: 104492
and %rcr_, leaving just %cr_ which is what people expect.
Updated the disassembler to support this unified register set.
Added a testcase to verify that the registers continue to be
decoded correctly.
llvm-svn: 103196
and diagnostic groups. This allows the compiler to group
diagnostics together (e.g. "Logic Warning",
"Format String Warning", etc) like the static analyzer does.
This is not exposed through anything in the compiler yet.
llvm-svn: 103050
sub-register indices and outputs a single super register which is formed from
a consecutive sequence of registers.
This is used as a register allocation / coalescing aid and it is useful to
represent instructions that output register pairs / quads. For example,
v1024, v1025 = vload <address>
where v1024 and v1025 form a register pair.
This really should be modelled as
v1024<3>, v1025<4> = vload <address>
but it would violate SSA property before register allocation is done.
Currently we use insert_subreg to form the super register:
v1026 = implicit_def
v1027 = insert_subreg v1026, v1024, 3
v1028 = insert_subreg v1027, v1025, 4
...
= use v1024
= use v1028
But this adds pseudo live interval overlap between v1024 and v1025.
We can now model it as
v1024, v1025 = vload <address>
v1026 = REG_SEQUENCE v1024, 3, v1025, 4
...
= use v1024
= use v1026
After coalescing, it will be
v1026<3>, v1026<4> = vload <address>
...
= use v1026<3>
= use v1026
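For illustration, a sketch of how a target pass might build the REG_SEQUENCE above in C++ (today's BuildMI/TargetOpcode interfaces; the register class parameter and the index values 3 and 4 are stand-ins from the example, not real sub-register indices):
  #include "llvm/CodeGen/MachineInstrBuilder.h"
  #include "llvm/CodeGen/MachineRegisterInfo.h"
  #include "llvm/CodeGen/TargetInstrInfo.h"
  #include "llvm/CodeGen/TargetOpcodes.h"
  using namespace llvm;

  // Pair two virtual registers into one super-register value, i.e.
  //   v1026 = REG_SEQUENCE v1024, 3, v1025, 4
  static Register buildPair(MachineBasicBlock &MBB,
                            MachineBasicBlock::iterator InsertPt,
                            const DebugLoc &DL, const TargetInstrInfo &TII,
                            MachineRegisterInfo &MRI,
                            const TargetRegisterClass *PairRC,
                            Register Lo, unsigned LoSubIdx,
                            Register Hi, unsigned HiSubIdx) {
    Register Pair = MRI.createVirtualRegister(PairRC);
    BuildMI(MBB, InsertPt, DL, TII.get(TargetOpcode::REG_SEQUENCE), Pair)
        .addReg(Lo).addImm(LoSubIdx)   // v1024, index 3 in the example
        .addReg(Hi).addImm(HiSubIdx);  // v1025, index 4 in the example
    return Pair;
  }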
llvm-svn: 102815
code. It used to #include the enhanced disassembly
information for the targets it supported straight
out of lib/Target/{X86,ARM,...} but now it uses a
new interface provided by MCDisassembler, and (so
far) implemented by X86 and ARM.
Also removed hacky #define-controlled initialization
of targets in edis. If clients only want edis to
initialize a limited set of targets, they can set
--enable-targets on the configure command line.
llvm-svn: 101179
We are bound to fail! For proper disassembly, the well-known encoding bits
of the instruction must be fully specified.
This also removes pseudo instructions from consideration for disassembly,
which is a better design and less fragile than matching on names.
llvm-svn: 100899
such that the non-VFP versions have no implicit defs of VFP registers.
If any callee-saved VFP registers are marked as having been defined, the
prologue/epilogue code will try to save and restore them.
Radar 7770432.
llvm-svn: 100892
I also added a rule to the ARM target's Makefile to
build the ARM-specific instruction information table
for the enhanced disassembler.
I will add the test harness for all this stuff in
a separate commit.
llvm-svn: 100735
argument that had to be between 0 and 7 to have any value,
firing an assert later in the AsmPrinter. Now, the
disassembler rejects instructions with out-of-range values
for that immediate.
llvm-svn: 100694
When a target instruction wants to set target-specific flags, it should simply
set bits in the TSFlags bit vector defined in the Instruction TableGen class.
This works well because TableGen resolves member references late:
  class I : Instruction {
    AddrMode AM = AddrModeNone;
    let TSFlags{3-0} = AM.Value;
  }

  let AM = AddrMode4 in
  def ADD : I;
TSFlags gets the expected bits from AddrMode4 in this example.
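On the C++ side the target can then mask the field back out of TSFlags; a minimal sketch mirroring the TSFlags{3-0} layout above (the ExampleII names and the 0xF mask are illustrative, not from any real target header):
  #include <cstdint>

  namespace ExampleII {
    enum { AddrModeShift = 0, AddrModeMask = 0xF << AddrModeShift };
  }

  // Recover the AddrMode value that TableGen packed into bits 3-0.
  static unsigned getAddrMode(uint64_t TSFlags) {
    return (TSFlags & ExampleII::AddrModeMask) >> ExampleII::AddrModeShift;
  }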
llvm-svn: 100384
backend (ARMDecoderEmitter) which emits the decoder functions for ARM and Thumb,
and the disassembler core which invokes the decoder function and builds up the
MCInst based on the decoded Opcode.
Reviewed by Chris Lattner and Bob Wilson.
llvm-svn: 100233
doesn't need to be stable because the patterns are fully ordered.
Add a first level sort predicate that orders patterns in this
order: 1) scalar integer operations 2) scalar floating point
3) vector int 4) vector float. This is a trivial sort on their
top level pattern type so it is nice and transitive. The
benefit of doing this is that simple integer operations are
much more common than insane vector things, and isel was trying
to match the big complex vector patterns before the simple
ones because the complexity of the vector operations was much
higher. Since they can't both match, it is best (for compile
time) to try the simple integer ones first.
This cuts down the # of failed match attempts on real code by
quite a bit, for example, this reduces backtracks on crafty
(as a random example) from 228285 -> 188369.
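The first-level predicate amounts to comparing a small bucket derived from the top-level pattern type; a hedged sketch of the idea (names are illustrative, not the tblgen code):
  // Buckets in the order described above. Comparing plain bucket numbers is
  // trivially transitive, so the overall sort stays well-defined.
  enum PatternBucket { ScalarInt, ScalarFP, VectorInt, VectorFP };

  static bool triesBefore(PatternBucket LHS, PatternBucket RHS) {
    return LHS < RHS;  // simple integer patterns get matched first
  }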
llvm-svn: 99797
patterns within the generated matcher. This works great except
that the sort fails because the relation defined isn't
transitive. I have a much simpler solution coming next, but want
to archive the code.
llvm-svn: 99795
and those derived from them. These are obnoxious because
they were written as: PatLeaf<(bitconvert)>. Not having an
argument was getting in the way of adding better type checking
that operand counts match up with what is required (in this case,
bitconvert always requires an operand!)
llvm-svn: 99759
transforming it into (add (i32 GPR), 4). This allows us to write type
generic multi patterns and have tblgen automatically drop the bitconvert
in the case when the types align. This allows us to fold an extra load
in the changed testcase.
llvm-svn: 99756
issues to get here. We now trim the result type list of the
CompleteMatch or MorphNodeTo operation to be the same size as the
thing we're matching. This means that if you match (add GPR, GPR)
with an instruction that produces a normal result and a flag,
we now trim the result in tblgen instead of having to do it
dynamically. This exposed a bunch of inconsistencies in result
counting that happened to be getting lucky since the days of the
old isel.
llvm-svn: 99728
from two places in CodeGenDAGPatterns.cpp, and
use it in DAGISelMatcherGen.cpp instead of using
an incorrect predicate that happened to get lucky
on our current targets.
llvm-svn: 99726
results forward. We can now handle an instruction that
produces one implicit def and one result instead of one or
the other when not at the root of the pattern.
llvm-svn: 99725
bytes instead of one byte. This is important because
we're running up against too many opcodes to fit in a byte,
and it is aggravated by FIRST_TARGET_MEMORY_OPCODE
making the numbering sparse. This just bites the
bullet and bloats out the table. In practice, this
increases the size of the x86 isel table from 74.5K
to 76K. I think we'll cope :)
This fixes rdar://7791648
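The reading side is just two table bytes instead of one whenever a target opcode is pulled out of the matcher table; a sketch (illustrative, not the exact SelectionDAGISel code):
  #include <cstdint>

  // Read a 16-bit target opcode stored as two consecutive bytes, low byte
  // first, and advance the table index past it.
  static unsigned readTargetOpcode(const uint8_t *MatcherTable, unsigned &Idx) {
    unsigned Opc = MatcherTable[Idx] | (unsigned(MatcherTable[Idx + 1]) << 8);
    Idx += 2;
    return Opc;
  }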
llvm-svn: 99494
If a TableGen class has an initializer expression containing an X.Y subexpression,
AND X depends on template parameters,
AND those template parameters have defaults,
AND some parameters with defaults are beyond position 1,
THEN parts of the initializer expression are evaluated prematurely with the default values when the first explicit template parameter is substituted, before the remaining explicit template parameters have been substituted.
llvm-svn: 99492
of runs without leak checking. We add -vg to the triple for non-checked runs,
or -vg_leak for checked runs. Also use this to XFAIL the TableGen tests, since
tablegen leaks like a sieve. This includes some valgrindArgs refactoring.
llvm-svn: 99103
to maintain a list of types (one for each result of
the node) instead of a single type. There are liberal
hacks added to emulate the old behavior in various
situations, but they can start dissolving now.
llvm-svn: 98999
Python 2.4 always hits this bug: http://bugs.python.org/issue1731717
when running check-lit on multi-core systems.
Setting numThreads to 1 makes it slower, but at least the results reported are
correct.
llvm-svn: 98969
Record* -> instrinfo instead of std::string -> instrinfo.
This speeds up tblgen on cellspu from 7.28 -> 5.98s with a debug
build (20%).
llvm-svn: 98916
like this:
  def : Pat<(add ...),
            (FOOINST)>;
when FOOINST only has a single implicit def (e.g. to R1). This will be handled
as if written as (set R1, (FOOINST ...)).
llvm-svn: 98897
under valgrind:
==19577== Invalid free() / delete / delete[]
==19577== at 0x4C9C866: free (vg_replace_malloc.c:325)
==19577== by 0x5121104: ??? (in /lib/libc-2.10.2.so)
==19577== by 0x4C97412: _vgnU_freeres (vg_preloaded.c:62)
==19577== by 0x5041486: __run_exit_handlers (exit.c:93)
==19577== by 0x50414FE: exit (exit.c:100)
==19577== by 0x5028B5C: (below main) (libc-start.c:254)
==19577== Address 0xffffffff is not stack'd, malloc'd or (recently) free'd
==19577==
Apparently this happens under certain versions of glibc, so valgrind provides
the --run-libc-freeres=no option to avoid calling freeres(). This may increase
the number of "still reachable" blocks valgrind reports, but we don't care
about those, while this error breaks the buildbots.
There are upstream bugs about this at
http://sourceware.org/bugzilla/show_bug.cgi?id=10610 and
http://bugs.kde.org/show_bug.cgi?id=167483, but they don't look likely to be
fixed.
llvm-svn: 98813
U test/CodeGen/ARM/tls2.ll
U test/CodeGen/ARM/arm-negative-stride.ll
U test/CodeGen/ARM/2009-10-30.ll
U test/CodeGen/ARM/globals.ll
U test/CodeGen/ARM/str_pre-2.ll
U test/CodeGen/ARM/ldrd.ll
U test/CodeGen/ARM/2009-10-27-double-align.ll
U test/CodeGen/Thumb2/thumb2-strb.ll
U test/CodeGen/Thumb2/ldr-str-imm12.ll
U test/CodeGen/Thumb2/thumb2-strh.ll
U test/CodeGen/Thumb2/thumb2-ldr.ll
U test/CodeGen/Thumb2/thumb2-str_pre.ll
U test/CodeGen/Thumb2/thumb2-str.ll
U test/CodeGen/Thumb2/thumb2-ldrh.ll
U utils/TableGen/TableGen.cpp
U utils/TableGen/DisassemblerEmitter.cpp
D utils/TableGen/RISCDisassemblerEmitter.h
D utils/TableGen/RISCDisassemblerEmitter.cpp
U Makefile.rules
U lib/Target/ARM/ARMInstrNEON.td
U lib/Target/ARM/Makefile
U lib/Target/ARM/AsmPrinter/ARMInstPrinter.cpp
U lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
U lib/Target/ARM/AsmPrinter/ARMInstPrinter.h
D lib/Target/ARM/Disassembler
U lib/Target/ARM/ARMInstrFormats.td
U lib/Target/ARM/ARMAddressingModes.h
U lib/Target/ARM/Thumb2ITBlockPass.cpp
llvm-svn: 98640
(RISCDisassemblerEmitter) which emits the decoder functions for ARM and Thumb,
and the disassembler core which invokes the decoder function and builds up the
MCInst based on the decoded Opcode.
Added sub-formats to the NeonI/NeonXI instructions to further refine the NEONFrm
instructions to help disassembly.
We also changed the output of the addressing modes to omit the '+' from the
assembler syntax #+/-<imm> or +/-<Rm>. See, for example, A8.6.57/58/60.
And modified test cases to not expect '+' in +reg or #+num. For example,
; CHECK: ldr.w r9, [r7, #28]
llvm-svn: 98637
changing the primary data structure from being a
"std::vector<unsigned char>" to being a new TypeSet class
that actually has (gasp) invariants!
This changes more things than I remember, but one major
innovation here is that it enforces that named input
values agree in type with their output values.
This also eliminates code that transparently assumes (in
some cases) that SDNodeXForm input/output types are the
same, because this is wrong in many cases.
This also eliminates a bug which caused a lot of ambiguous
patterns to go undetected, where a register class would
sometimes pick the first possible type, causing an
ambiguous pattern to get arbitrary results.
With all the recent target changes, this causes no
functionality change!
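For flavor only, a toy sketch of the direction (this is not the TypeSet in CodeGenDAGPatterns, just an illustration of replacing a bare vector of type codes with a class that polices its own invariants):
  #include <algorithm>
  #include <cassert>
  #include <vector>

  class ToyTypeSet {
    std::vector<unsigned char> Types; // candidate type codes; empty means
                                      // "not constrained yet"
  public:
    bool isConcrete() const { return Types.size() == 1; }
    bool hasType(unsigned char VT) const {
      return std::find(Types.begin(), Types.end(), VT) != Types.end();
    }
    // Invariant: each type is recorded at most once.
    void addType(unsigned char VT) {
      if (!hasType(VT))
        Types.push_back(VT);
      assert(std::count(Types.begin(), Types.end(), VT) == 1 &&
             "type recorded exactly once");
    }
  };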
llvm-svn: 98534
Now it will factor things like this:
  CheckType i32
  ...
  CheckOpcode ISD::AND
  CheckType i64
  ...
into:
  SwitchType:
    i32: ...
    i64:
      CheckOpcode ISD::AND
      ...
This shrinks the table by a few bytes, nothing spectacular.
llvm-svn: 97908
IF(condition(value)):
If the value satisfies the condition, the line is processed by lit; otherwise
it is skipped. A test with no unignored directives is resolved as Unsupported.
The test suite is responsible for defining conditions; conditions are unary
functions over strings. I've defined two conditions in the LLVM test suite,
TARGET (with values like those in TARGETS_TO_BUILD) and BINDING (with values
like those in llvm_bindings). So for example you can write:
IF(BINDING(ocaml)): RUN: %blah %s -o -
and the RUN line will only execute if LLVM was configured with the ocaml
bindings.
llvm-svn: 97726
sequence, just emit instruction predicates right before them. This
exposes yet more factoring opportunities, shrinking the X86 table
to 79144 bytes.
llvm-svn: 97704
as the very last thing before node emission. This should
dramatically reduce the number of times we do 'MatchAddress'
on X86, speeding up compile time. This also improves comments
in the tables and shrinks the table a bit, now down to
80506 bytes for x86.
llvm-svn: 97703
SwitchOpcodeMatcher) and have DAGISelMatcherOpt form it. This
speeds up selection, particularly for X86 which has lots of
variants of instructions with only type differences.
llvm-svn: 97645
stuff now that we don't care about emulating the old broken
behavior of the old isel. This eliminates the
'CheckChainCompatible' check (along with IsChainCompatible) which
did an incorrect and inefficient scan *up* the chain nodes which
happened as the pattern was being formed and does the validation
at the end in HandleMergeInputChains when it forms a structural
pattern. This scans "down" the graph, which means that it is
quickly bounded by nodes already selected. This also handles
token factors that get "trapped" in the dag.
Removing the CheckChainCompatible nodes also shrinks the
generated tables by about 6K for X86 (down to 83K).
There are two pieces remaining before I can nuke PreprocessRMW:
1. I xfailed a test because we're now producing worse code in a
case that has nothing to do with the change: it turns out that
our use of MorphNodeTo will leave dead nodes in the graph
which (depending on how the graph is walked) end up causing
bogus uses of chains and blocking matches. This is really
bad for other reasons, so I'll fix this in a follow-up patch.
2. CheckFoldableChainNode needs to be improved to handle the TF.
llvm-svn: 97539
EmitMergeInputChainsMatcher node up into EmitResultCode. This
doesn't have much of an effect on the generated code, the X86
table is exactly the same size.
llvm-svn: 97514
ordered correctly. Previously it would get in trouble when
two patterns were too similar and give them a nondeterministic ordering.
We force this by using the record ID order as a fallback.
The testsuite diff is due to Alpha patterns being ordered
slightly differently; the change is a semantic noop afaict:
< lda $0,-100($16)
---
> subq $16,100,$0
llvm-svn: 97509
This allows formation of OpcodeSwitch for top level patterns, in
particular on X86. This saves about 1K of data space in the x86
table and makes the dispatch much more efficient.
llvm-svn: 97440
ComplexPattern at the root be generated multiple times, once
for each opcode they are part of. This encourages factoring
because the opcode checks get treated just like everything
else in the matcher.
llvm-svn: 97439
to a scope where every child starts with a CheckOpcode, but
executes more efficiently. Enhance DAGISelMatcherOpt to
form it.
This also fixes a bug in CheckOpcode: apparently the SDNodeInfo
objects are not pointer-comparable, so we have to compare the
enum name.
llvm-svn: 97438
so that we get grouping at the top level.
Add an optimization to reorder type check & record nodes
after opcode checks. We prefer to expose tree shape
matching which improves grouping and will enhance the next
optimization.
llvm-svn: 97432
dispatcher method. This eliminates the dependence of the new isel's
generated code on the old isel's predicates; however, some random
hand-written isel code still uses them.
llvm-svn: 97431