This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.
This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.
This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.
The side-table is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).
Differential Revision: https://reviews.llvm.org/D66936
llvm-svn: 371198
This was only using the correct register constraints if this was the
final result instruction. If the extract was a sub instruction of the
result, it would attempt to use GIR_ConstrainSelectedInstOperands on a
COPY, which won't work. Move the handling to
createAndImportSubInstructionRenderer so it works correctly.
I don't fully understand why runOnPattern and
createAndImportSubInstructionRenderer both need to handle these
special cases, and constrain them with slightly different methods. If
I remove the runOnPattern handling, it does break the constraint when
the final result instruction is EXTRACT_SUBREG.
llvm-svn: 371150
This partially adds support for patterns with REG_SEQUENCE. The source
patterns are now accepted, but the pattern is still rejected due to
missing support for the instruction renderer.
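For context, the patterns in question have roughly this shape (the source node
and the register class / subregister names below are illustrative, not taken
from the patch itself):
  // Hypothetical source node; the result side assembles a 64-bit pair.
  def : Pat<(i64 (example_pair i32:$lo, i32:$hi)),
            (REG_SEQUENCE VReg_64, $lo, sub0, $hi, sub1)>;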
llvm-svn: 370920
The Hexagon itineraries are cunningly crafted such that functional units between
itineraries do not clash. Because all itineraries are bundled into the same DFA,
a functional unit index clash would cause an incorrect DFA to be generated.
A workaround for this is to ensure all itineraries declare the universe of all
possible functional units, but this isn't ideal for three reasons:
1) We only have a limited number of FUs we can encode in the packetizer, and
using the universe causes us to hit the limit without care.
2) Silent codegen faults are bad, and careful triage of the FU list shouldn't
be required.
3) Smooshing all itineraries into the same automaton allows combinations of
instruction classes that cannot exist, which bloats the table.
A simple solution is to allow "namespacing" packetizers.
Differential Revision: https://reviews.llvm.org/D66940
llvm-svn: 370508
This is a special case because one node maps to two different G_
instructions, and the operand order is changed.
This mostly enables G_FCMP for AMDGPU. G_ICMP is still manually
selected for now since it has the SALU and VALU complication to deal
with.
llvm-svn: 370280
Reuse the logic for INSERT_SUBREG to also import SUBREG_TO_REG patterns.
- Split `inferSuperRegisterClass` into two functions, one which tries to use
an existing TreePatternNode (`inferSuperRegisterClassForNode`), and one that
doesn't. SUBREG_TO_REG doesn't have a node to leverage, which is the cause
for the split.
- Rename GlobalISelEmitterInsertSubreg.td to GlobalISelEmitterSubreg.td and
update it.
- Update impacted tests in the AArch64 and X86 backends.
This is kind of a hit/miss for code size improvements/regressions. E.g. in
add-ext.ll, we now get some identity copies. This isn't really anything the
importer can handle, since it's caused by a later pass introducing the copy for
the sake of correctness.
Differential Revision: https://reviews.llvm.org/D66769
llvm-svn: 370254
I thought `llvm::sort` was stable for some reason but it's not.
Use `llvm::stable_sort` in `CodeGenTarget::getSuperRegForSubReg`.
Original patch: https://reviews.llvm.org/D66498
llvm-svn: 370084
When EXPENSIVE_CHECKS are enabled, GlobalISelEmitterSubreg.td doesn't get
stable output.
Reverting while I debug it.
See: https://reviews.llvm.org/D66498
llvm-svn: 370080
Summary:
This patch adds support for scalable vectors in intrinsics, enabling
intrinsics such as the following to be defined:
declare <vscale x 4 x i32> @llvm.something.nxv4i32(<vscale x 4 x i32>)
Support for this is implemented by defining a new type descriptor for
scalable vectors and adding mangling support for scalable vector types
in the name mangling scheme used by 'any' types in intrinsic signatures.
Tests have been added for IRBuilder to test scalable vectors work as
expected when using intrinsics through this interface. This required
implementing an intrinsic that is explicitly defined with scalable
vectors, e.g. LLVMType<nxv4i32>, an SVE floating-point convert
intrinsic was used for this. The behaviour of the overloaded type
LLVMScalarOrSameVectorWidth with scalable vectors is tested using the
existing masked load intrinsic. Also added an .ll test to test the
Verifier catches a bad intrinsic argument when passing a fixed-width
predicate (mask) to the masked.load intrinsic where a scalable is
expected.
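As a rough sketch (the intrinsic name here is hypothetical, not the actual SVE
convert intrinsic used in the tests), a definition pinned to scalable vector
types might look like:
  // Result and operand are fixed to scalable types via LLVMType<nxv4i32> etc.
  def int_example_scalable_cvt
      : Intrinsic<[LLVMType<nxv4i32>], [LLVMType<nxv4f32>], [IntrNoMem]>;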
Patch by Paul Walker
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D65930
llvm-svn: 370053
This teaches the importer to handle INSERT_SUBREG instructions.
We were missing patterns involving INSERT_SUBREG in AArch64. It appears in
AArch64InstrInfo.td 107 times, and 14 times in AArch64InstrFormats.td.
To meaningfully import it, the GlobalISelEmitter needs to know how to infer a
super register class for a given register class.
This patch introduces the following:
- `getSuperRegForSubReg`, a function which finds the largest register class
which supports a value type and subregister index
- `inferSuperRegisterClass`, a function which finds the appropriate super
register class for an INSERT_SUBREG
- `inferRegClassFromPattern`, a function which allows for some trivial
lookthrough into instructions
- `getRegClassFromLeaf`, a helper function which returns the register class for
a leaf `TreePatternNode`
- Support for subregister index operands in `importExplicitUseRenderer`
It also
- Updates tests in each backend which are impacted by the change
- Adds GlobalISelEmitterSubreg.td to test that we import and skip the expected
patterns
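For reference, the shape of pattern this unlocks looks roughly like the
following (loosely modeled on AArch64's anyext-to-i64 pattern, not copied from
it):
  def : Pat<(i64 (anyext GPR32:$src)),
            (INSERT_SUBREG (i64 (IMPLICIT_DEF)), GPR32:$src, sub_32)>;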
As a result of this patch, INSERT_SUBREG patterns in X86 may use the
LOW32_ADDR_ACCESS_RBP register class instead of GR32. This is correct, since the
register class contains the same registers as GR32 (except with the addition of
RBP). So, this also teaches X86 to handle that register class. This is in line
with X86ISelLowering, which treats this as a GR class.
Differential Revision: https://reviews.llvm.org/D66498
llvm-svn: 369973
This requires std::initializer_list to be a literal type, which it is
starting with C++14. The downside is that std::bitset is still not
constexpr-friendly so this change contains a re-implementation of most
of it.
Shrinks clang by ~60k.
llvm-svn: 369847
Overloaded intrinsics can use iPTRAny in used/input operands.
The GlobalISelEmitter doesn't know that these are pointers, so it treats them
as scalars. As a result, these intrinsics can't be imported.
This teaches the GlobalISelEmitter to recognize these as pointers rather than
scalars.
Differential Revision: https://reviews.llvm.org/D65756
llvm-svn: 369455
AMDGPU has some buffer intrinsics which theoretically could use
this. Some of the generated tables include the 3 and 4 element vector
versions of these rounded to 64 bits, which is ambiguous. Add these types to
help the table disambiguate them.
The assertion change is for the path odd-sized vectors now take for R600.
v3i16 is widened to v4i16, which then needs to be promoted to v4i32.
llvm-svn: 369038
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances across the monorepo.
llvm-svn: 369013
Summary:
The problem:
When an operand had bits explicitly set to "1" (as in the InitValue.td test case attached), the decoder was ignoring those bits, and the DecoderMethod was receiving an input where the bits were still zero.
The solution:
We added an "InitValue" variable that stores the initial value of the operand based on what bits were explicitly initialized to 1 in TableGen code. The generated decoder code then uses that initial value to initialize the "tmp" variable, then calls fieldFromInstruction to read the values for the remaining bits that were left unknown in TableGen.
This is mainly useful when there are variations of an instruction that differ based on what bits are set in the operands, since this change makes it possible to access those bits in a DecoderMethod. The DecoderMethod can use those bits to know how to handle the input.
Patch by Nicolas Guillemot
Reviewers: craig.topper, dsanders, fhahn
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D63741
llvm-svn: 368458
The upper 4 bits of the immediate byte are used to encode a
register. We need to limit the explicit immediate to fit in the
remaining 4 bits.
Fixes PR42899.
llvm-svn: 368123
This was causing a bug where non-truncating stores would be selected instead of truncating ones.
Differential Revision: https://reviews.llvm.org/D64845
llvm-svn: 367737
AMDGPU uses some custom code predicates for testing alignments.
I'm still having trouble comprehending the behavior of predicate bits
in the PatFrag hierarchy. Any attempt to abstract these properties
unexpectedly fails to apply them.
llvm-svn: 367373
Empty condition strings are considered always true. This removes a lot
of clutter from the generated matcher tables.
This shrinks the source size of AMDGPUGenDAGISel.inc from 7.3M to
6.1M.
llvm-svn: 367326
This change reverts most of the previous register name generation.
The real problem is that RegisterTuple does not generate asm names.
Added an optional operand to RegisterTuple. This way we can simplify
register name access and dramatically reduce the size of static
tables for the backend.
Differential Revision: https://reviews.llvm.org/D64967
llvm-svn: 366598
Summary:
Deduce the "willreturn" attribute for functions.
For now, intrinsics are not willreturn. More annotation will be done in another patch.
Reviewers: jdoerfert
Subscribers: jvesely, nhaehnle, nicholas, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63046
llvm-svn: 366335
If an intrinsic is defined without outputs, but having side effects,
it still can be removed completely from the program. This patch makes
TableGen not set Attribute::ReadNone for intrinsics which
are declared with IntrHasSideEffects.
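A minimal sketch of the kind of declaration affected (the intrinsic name is
hypothetical):
  // No results and no memory access, but the side effects mean the call
  // must not be marked ReadNone and deleted as dead.
  def int_example_sideeffect
      : Intrinsic<[], [llvm_i32_ty], [IntrHasSideEffects]>;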
Differential Revision: https://reviews.llvm.org/D64414
llvm-svn: 366312
Rather than an array of std::initializer_list, generate a table of
offsets and a flat array of the operands for getOperandType. This is a
bit more efficient on platforms that don't manage to get the array of
initializer_lists initialized at link time (I'm looking at you,
macOS). It's also quite a bit faster to compile.
llvm-svn: 366278
The InstrInfoEmitter outputs an enum called "OperandType" which gives
numerical IDs to each operand type. This patch makes use of this enum
to define a function called "getOperandType", which allows looking up
the type of an operand given its opcode and operand index.
Patch by Nicolas Guillemot. Thanks!
Differential Revision: https://reviews.llvm.org/D63320
llvm-svn: 366274
Summary:
We agreed to rename `except_ref` to `exnref` for consistency with other
reference types in
https://github.com/WebAssembly/exception-handling/issues/79. This also
renames WebAssemblyInstrExceptRef.td to WebAssemblyInstrRef.td in order
to use the file for other reference types in future.
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64703
llvm-svn: 366145
This was failing to import the AMDGPU truncstore patterns. The
truncating stores from 32-bit to 8/16 were then somehow being
incorrectly selected to a 4-byte store.
A separate check is emitted for the LLT size in comparison to the
specific memory VT, which looks strange to me but makes sense based on
the hierarchy of PatFrags used for the default truncstore PatFrags.
llvm-svn: 366129
Currently AMDGPU uses a CodePatPred to check address spaces from the
MachineMemOperand. Introduce a new first class property so that the
existing patterns can be easily modified to use the new generated
predicate, which will also be handled for GlobalISel.
I would prefer these to match against the pointer type of the
instruction, but that would be difficult to get working with
SelectionDAG compatibility. This is much easier for now and will avoid
a painful tablegen rewrite for all the loads and stores.
I'm also not sure if there's a better way to encode multiple address
spaces in the table, rather than putting the number to expect.
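Roughly, the idea is to let the fragment declare the address spaces it applies
to. A minimal sketch (the field name is my reading of the current PatFrags
interface rather than something stated above, and the address-space number is
illustrative):
  def load_global_example : PatFrag<(ops node:$ptr), (load node:$ptr)> {
    // Only match loads whose MachineMemOperand is in address space 1.
    let AddressSpaces = [ 1 ];
  }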
llvm-svn: 366128
Some out-of-tree backends require larger vector types. Since maintaining the changes out of tree is difficult due to the many manual changes needed when adding a new type, we are adding it even if no backend currently uses it.
Differential Revision: https://reviews.llvm.org/D64141
Patch by Thomas Raoux!
llvm-svn: 365274
When a Tablegen instruction description uses `OperandWithDefaultOps`,
isel patterns for that instruction don't have to fill in the default
value for the operand in question. But the flip side is that they
actually //can't// override the defaults even if they want to.
This will be very inconvenient for the Arm backend, when we start
wanting to write isel patterns that generate the many MVE predicated
vector instructions, in the form with predication actually enabled. So
this small Tablegen fix makes it possible to write an isel pattern
either with or without values for a defaulted operand, and have the
default values filled in only if they are not overridden.
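As a sketch (all names here are illustrative, not the MVE definitions
themselves):
  // An operand whose default expansion is the immediate 0.
  def ExampleOpt : OperandWithDefaultOps<i32, (ops (i32 0))>;
  def EXAMPLE : Instruction {
    let OutOperandList = (outs GPR32:$dst);
    let InOperandList = (ins GPR32:$src, ExampleOpt:$opt);
    ...
  }
  // Relies on the default value of $opt.
  def : Pat<(example_node GPR32:$s), (EXAMPLE GPR32:$s)>;
  // Overrides the default, which this change now allows.
  def : Pat<(example_node_imm GPR32:$s, imm:$v), (EXAMPLE GPR32:$s, imm:$v)>;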
If all the defaulted operands come at the end of the instruction's
operand list, there's a natural way to match them up to the arguments
supplied in the pattern: consume pattern arguments until you run out,
then fill in any missing instruction operands with their default
values. But if defaulted and non-defaulted operands are interleaved,
it's less clear what to do. This does happen in existing targets (the
first example I came across was KILLGT, in the AMDGPU/R600 backend),
and of course they expect the previous behaviour (that the default for
those operands is used and a pattern argument is not consumed), so for
backwards compatibility I've stuck with that.
Reviewers: nhaehnle, hfinkel, dmgreen
Subscribers: mehdi_amini, javed.absar, tpr, kristof.beyls, steven_wu, dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63814
llvm-svn: 365114
r363233 rewrote a bunch of the intrinsic emitter code; however, the new
function to update the arg codes did not properly consider a pointer to
an any. This patch adds that logic.
Differential Revision: https://reviews.llvm.org/D63507
llvm-svn: 364364
It seems macOS lets you have ArrayRef<const X> even though this is apparently
forbidden by the language standard (Thanks MSVC++ for the clear error message).
Removed the problematic consts to fix this.
(It also seems I'm not receiving buildbot emails anymore and I'm trying to find
out why. In the mean time I'll be polling lab.llvm.org to hopefully see if/when
failures occur)
llvm-svn: 363753
Summary:
Add an AdditionalEncoding class which can be used to define additional encodings
for a given instruction. This causes the disassembler to add an additional
encoding to its matching tables that map to the specified instruction.
Usage:
def ADD1 : Instruction {
  bits<8> Reg;
  bits<32> Inst;
  let Size = 4;
  let Inst{0-7} = Reg;
  let Inst{8-14} = 0;
  let Inst{15} = 1; // Continuation bit
  let Inst{16-31} = 0;
  ...
}
def : AdditionalEncoding<ADD1> {
  bits<8> Reg;
  bits<16> Inst; // You can also have bits<32> and it will still be a 16-bit encoding
  let Size = 2;
  let Inst{0-3} = 0;
  let Inst{4-7} = Reg;
  let Inst{8-15} = 0;
  ...
}
with those definitions, llvm-mc will successfully disassemble both of these:
0x01 0x00
0x10 0x80 0x00 0x00
to:
ADD1 r1
Depends on D52366
Reviewers: bogner, charukcs
Reviewed By: bogner
Subscribers: nlguillemot, nhaehnle, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D52369
llvm-svn: 363744
Merging the two bits shrinks the context table from 16384 bytes to 8192 bytes.
Remove the ATTRIBUTE_BITS macro and just create an enum directly. Then fix the ATTR_max define to be 8192 to reflect the table size so we stop hardcoding it separately.
llvm-svn: 363330
This patch uses the mechanism from D62995 to strengthen the
definitions of the reduction intrinsics by letting the scalar
result/accumulator type be overloaded from the vector element type.
For example:
; The LLVM LangRef specifies that the scalar result must equal the
; vector element type, but this is not checked/enforced by LLVM.
declare i32 @llvm.experimental.vector.reduce.or.i32.v4i32(<4 x i32> %a)
This patch changes that into:
declare i32 @llvm.experimental.vector.reduce.or.v4i32(<4 x i32> %a)
Which has the type-constraint more explicit and causes LLVM to check
the result type with the vector element type.
Reviewers: RKSimon, arsenm, rnk, greened, aemerson
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D62996
llvm-svn: 363240
Extend the mechanism to overload intrinsic arguments by using either
backward or forward references to the overloadable arguments.
For example, in:
def int_something : Intrinsic<[LLVMPointerToElt<0>],
                              [llvm_anyvector_ty], []>;
LLVMPointerToElt<0> is a forward reference to the overloadable operand
of type 'llvm_anyvector_ty' and would allow intrinsics such as:
declare i32* @llvm.something.v4i32(<4 x i32>);
declare i64* @llvm.something.v2i64(<2 x i64>);
where the result pointer type is deduced from the element type of the
first argument.
If the returned pointer is not a pointer to the element type, LLVM will
give an error:
Intrinsic has incorrect return type!
i64* (<4 x i32>)* @llvm.something.v4i32
Reviewers: RKSimon, arsenm, rnk, greened
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D62995
llvm-svn: 363233
Summary:
Use a set in getReqFeatures() in RISCVCompressInstEmitter instead of a map
because the index we save is not needed.
This also fixes bug 41666.
Reviewers: llvm-commits, apazos, asb, nickdesaulniers
Reviewed By: asb
Subscribers: Jim, nickdesaulniers, rbar, johnrusso, simoncook, niosHD, kito-cheng, shiva0217, jrtc27, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61412
llvm-svn: 362968
The ISD::STRICT_ nodes used to implement the constrained floating-point
intrinsics are currently never passed to the target back-end, which makes
it impossible to handle them correctly (e.g. mark instructions as depending
on a floating-point status and control register, or mark instructions as
possibly trapping).
This patch allows the target to use setOperationAction to switch the action
on ISD::STRICT_ nodes to Legal. If this is done, the SelectionDAG common code
will stop converting the STRICT nodes to regular floating-point nodes, but
instead pass the STRICT nodes to the target using normal SelectionDAG
matching rules.
To avoid having the back-end duplicate all the floating-point instruction
patterns to handle both strict and non-strict variants, we make the MI
codegen explicitly aware of the floating-point exceptions by introducing
two new concepts:
- A new MCID flag "mayRaiseFPException" that the target should set on any
instruction that possibly can raise FP exception according to the
architecture definition.
- A new MI flag FPExcept that CodeGen/SelectionDAG will set on any MI
instruction resulting from expansion of any constrained FP intrinsic.
Any MI instruction that is *both* marked as mayRaiseFPException *and*
FPExcept then needs to be considered as raising exceptions by MI-level
codegen (e.g. scheduling).
Setting those two new flags is straightforward. The mayRaiseFPException
flag is simply set via TableGen by marking all relevant instruction
patterns in the .td files.
The FPExcept flag is set in SDNodeFlags when creating the STRICT_ nodes
in the SelectionDAG, and gets inherited in the MachineSDNode nodes created
from it during instruction selection. The flag is then transferred to an
MIFlag when creating the MI from the MachineSDNode. This is handled just
like fast-math flags like no-nans are handled today.
This patch includes both common code changes required to implement the
new features, and the SystemZ implementation.
Reviewed By: andrew.w.kaylor
Differential Revision: https://reviews.llvm.org/D55506
llvm-svn: 362663
A std::array is implemented as a template with an array inside a struct.
Older versions of clang, like 3.6, require an extra set of curly braces
around std::array initializations to avoid warnings.
The C++ language was changed regarding this by CWG 1270, so more modern
toolchains do not complain even when leaving out one level of braces.
llvm-svn: 362360
Fix the misleading indentation introduced in rL362064. This will get rid of
the compiler warning, and it was actually a bug. This change will be used and
tested in D62669.
llvm-svn: 362211
If an assembly instruction has to mention an input operand name twice,
for example the MVE VMOV instruction that accesses two lanes of the
same vector by writing 'vmov r1, r2, q0[3], q0[1]', then the obvious
way to write its AsmString is to include the same operand (here $Qd)
twice. But this causes the AsmMatcher generator to omit that
instruction completely from the match table, on the basis that the
generator isn't clever enough to deal with the duplication.
But you need to have _some_ way of dealing with an instruction like
this - and in this case, where the mnemonic is shared with many other
instructions that the AsmMatcher does handle, it would be very painful
to take it out of the AsmMatcher system completely.
A nicer way is to add a custom AsmMatchConverter routine, and let that
deal with the problem if the autogenerated converter can't. But that
doesn't work, because TableGen leaves the instruction out of its table
_even_ if you provide a custom converter.
Solution: this change, which makes TableGen relax the restriction on
duplicated operands in the case where there's a custom converter.
Patch by: Simon Tatham
Differential Revision: https://reviews.llvm.org/D60695
llvm-svn: 362066
This is a new special identifier which you can use as a default in
OperandWithDefaultOps. The idea is that you use it for an input
operand of an instruction that's tied to an output operand, and its
semantics are that (in the default case) the input operand's value is
not used at all.
The detailed effect is that when instruction selection emits the
instruction in the form of a pre-regalloc MachineInstr, it creates an
IMPLICIT_DEF node to use as that input.
If you're creating an MCInst with explicit register names, then the
right handling would be to set the input operand to the same register
as the output one (honouring the tie) and to add the 'undef' flag
indicating that that register is deemed to acquire a new don't-care
definition just before we read it. But I haven't done that in this
commit, because there was no need to - no Tablegen backend seems to
autogenerate default fields in an MCInst.
Patch by: Simon Tatham
Differential Revision: https://reviews.llvm.org/D60696
llvm-svn: 362064
Summary:
This reverts commit r360106.
The revision causes llvm-tblgen to hang while generating info for
RISCV.td. The root cause might be in the RISCV.td definition but I don't
know enough about this to investigate further.
Command that starts hanging after r360106:
`llvm-build/bin/llvm-tblgen -I llvm/include -I llvm/tools/clang/include -I llvm/lib/Target/RISCV -gen-instr-info llvm/lib/Target/RISCV/RISCV.td`
Reviewers: sammccall, yan_luo, craig.topper, gribozavr
Reviewed By: gribozavr
Subscribers: PkmX, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61632
llvm-svn: 360136
If you have more than one schedule model in your TableGen target
definitions, then the diagnostic "No schedule information for
instruction 'foo'" is rather unhelpful, because it doesn't tell you
_which_ schedule model is missing the necessary information (or, as it
might be, missing the UnsupportedFeatures definition that would stop
it thinking it needed it).
Extended the message to include the name of the schedule model that
it's complaining about.
Reviewers: nhaehnle, hfinkel, javedabsar, efriedma, javed.absar
Reviewed By: javed.absar
Subscribers: javed.absar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60559
llvm-svn: 358389
The composite existed to simplify some other tablegen code and not really in an
important way. Remove the combined field and just calculate the vector size
using two ifs.
llvm-svn: 357972
The instruction's documentation lists this as W0 for the VEX encoding. But there's a
footnote mentioning that VEX.W is ignored in 64-bit mode. And the main VEX
encoding description says the VEX.W bit is ignored for instructions that are
equivalent to a legacy SSE instruction that uses REX.W to select a GPR which
would apply here.
By making this match EVEX we can remove a special case of allowing EVEX2VEX to
turn an EVEX.WIG instruction into VEX.W0.
llvm-svn: 357971
Summary:
This avoids needing an isel pattern for each condition code. And it removes translation switches for converting between Jcc instructions and condition codes.
Now the printer, encoder and disassembler take care of converting the immediate. We use InstAliases to handle the assembly matching. But we print using the asm string in the instruction definition. The instruction itself is marked IsCodeGenOnly=1 to hide it from the assembly parser.
Reviewers: spatel, lebedev.ri, courbet, gchatelet, RKSimon
Reviewed By: RKSimon
Subscribers: MatzeB, qcolombet, eraman, hiraditya, arphaman, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60228
llvm-svn: 357802
Summary:
This avoids needing an isel pattern for each condition code. And it removes translation switches for converting between SETcc instructions and condition codes.
Now the printer, encoder and disassembler take care of converting the immediate. We use InstAliases to handle the assembly matching. But we print using the asm string in the instruction definition. The instruction itself is marked IsCodeGenOnly=1 to hide it from the assembly parser.
Reviewers: andreadb, courbet, RKSimon, spatel, lebedev.ri
Reviewed By: andreadb
Subscribers: hiraditya, lebedev.ri, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60138
llvm-svn: 357801
Summary:
Reorder the condition code enum to match their encodings. Move it to MC layer so it can be used by the scheduler models.
This avoids needing an isel pattern for each condition code. And it removes
translation switches for converting between CMOV instructions and condition
codes.
Now the printer, encoder and disassembler take care of converting the immediate.
We use InstAliases to handle the assembly matching. But we print using the
asm string in the instruction definition. The instruction itself is marked
IsCodeGenOnly=1 to hide it from the assembly parser.
This does complicate the scheduler models a little since we can't assign the
A and BE instructions to a separate class now.
I plan to make similar changes for SETcc and Jcc.
Reviewers: RKSimon, spatel, lebedev.ri, andreadb, courbet
Reviewed By: RKSimon
Subscribers: gchatelet, hiraditya, kristina, lebedev.ri, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60041
llvm-svn: 357800
We were using the number of Matchables rather than the number of rows in the converter table.
This only matters for a few of the targets where the number of matchables is more than 255 but the number of converters is less than 255. Many of the targets have more than 256 converters, so they already required a uint16_t.
llvm-svn: 357527
Summary:
It does not currently make sense to use WebAssembly features in some functions
but not others, so this CL adds an IR pass that takes the union of all used
feature sets and applies it to each function in the module. This allows us to
prevent atomics from being lowered away if some function has opted in to using
them. When atomics is not enabled anywhere, we detect whether there exists any
atomic operations or thread local storage that would be stripped and disallow
linking with objects that contain atomics if and only if atomics or tls are
stripped. When atomics is enabled, mark it as used but do not require it of
other objects in the link. These changes allow libraries that do not use atomics
to be built once and linked into both single-threaded and multithreaded
binaries.
Reviewers: aheejin, sbc100, dschuff
Subscribers: jgravelle-google, hiraditya, sunfish, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59625
llvm-svn: 357226
These were added in r355423.
We only use the autogenerated table to assist with the maintenance of the
manual table. These entries are already in the manual table.
llvm-svn: 356357
AMDGPU would like to use these MVTs.
Differential Revision: https://reviews.llvm.org/D58901
Change-Id: I6125fea810d7cc62a4b4de3d9904255a1233ae4e
llvm-svn: 356351
Previously we had a regular form of the instruction used when the immediate was 0-7, and an _alt form that allowed the full 8-bit immediate. Codegen would always use the 0-7 form since the immediate was always checked to be in range. Assembly parsing would use the 0-7 form when a mnemonic like vpcomtrueb was used. If the immediate was specified directly, the _alt form was used. The disassembler would prefer to use the 0-7 form instruction when the immediate was in range and the _alt form otherwise. This way disassembly would print the most readable form when possible.
The assembly parsing for things like vpcomtrueb relied on splitting the mnemonic into 3 pieces. A "vpcom" prefix, an immediate representing the "true", and a suffix of "b". The tablegenerated printing code would similarly print a "vpcom" prefix, decode the immediate into a string, and then print "b".
The _alt form on the other hand parsed and printed like any other instruction with no specialness.
With this patch we drop to one form and solve the disassembly printing issue by doing custom printing when the immediate is 0-7. The parsing code has been tweaked to turn "vpcomtrueb" into "vpcomb" and then the immediate for the "true" is inserted either before or after the other operands depending on at&t or intel syntax.
I'd rather not do the custom printing, but I tried using an InstAlias for each possible mnemonic for all 8 immediates for all 16 combinations of element size, signedness, and memory/register. The code emitted into printAliasInstr ended up checking the number of operands, the register class of each operand, and the immediate for all 256 aliases. This was repeated for both the at&t and intel printer. Despite a lot of common checks between all of the aliases, when compiled with clang at least this commonality was not well optimized. Nor do all the checks seem necessary. Since I want to do a similar thing for vcmpps/pd/ss/sd which have 32 immediate values and 3 encoding flavors, 3 register sizes, etc. This didn't seem to scale well for clang binary size. So custom printing seemed a better trade off.
I also considered just using the InstAlias for the matching and not the printing. But that seemed like it would add a lot of extra rows to the matcher table. Especially given that the 32 immediates for vpcmpps have 46 strings associated with them.
Differential Revision: https://reviews.llvm.org/D59398
llvm-svn: 356343
This indicates an intrinsic parameter is required to be a constant,
and should not be replaced with a non-constant value.
Add the attribute to all AMDGPU and generic intrinsics that comments
indicate it should apply to. I scanned other target intrinsics, but I
don't see any obvious comments indicating which arguments are intended
to be only immediates.
This breaks one questionable testcase for the autoupgrade. I'm unclear
on whether the autoupgrade is supposed to really handle declarations
which were never valid. The verifier fails because the attributes now
refer to a parameter past the end of the argument list.
llvm-svn: 355981
The AMDGPU target ran out of subtarget feature flags, hitting the limit of 64.
AssemblerPredicates use at most a uint64_t for their representation.
At the same time, CodeGen exhausted this limit a long time ago and switched
to a FeatureBitset with the current limit of 192 bits.
This patch completes the transition to the bitset for feature bits, extending
it to the asm matcher and the MC code emitter.
Differential Revision: https://reviews.llvm.org/D59002
llvm-svn: 355839
Includes a fix to emit a CheckOpcode for build_vector when immAllZerosV/immAllOnesV is used as a pattern root. This means it can't be used to look through bitcasts when used as a root, but that's probably ok. This extra CheckOpcode will ensure that the first match in the isel table will be a SwitchOpcode which is needed by the caching optimization in the ISel Matcher.
Original commit message:
Previously we had build_vector PatFrags that called ISD::isBuildVectorAllZeros/Ones. Internally the ISD::isBuildVectorAllZeros/Ones look through bitcasts, but we aren't able to take advantage of that in isel. Instead, we have to canonicalize the types of the all zeros/ones build_vectors and insert bitcasts. Then we have to pattern match those exact bitcasts.
By emitting specific matchers for these 2 nodes, we can make isel look through any bitcasts without needing to explicitly match them. We should also be able to remove the canonicalization to vXi32 from lowering, but I've left that for a follow up.
This removes something like 40,000 bytes from the X86 isel table.
Differential Revision: https://reviews.llvm.org/D58595
llvm-svn: 355784
This caused the first matcher in the isel table for many targets to be Opc_Scope instead of Opc_SwitchOpcode. This leads to a significant increase in isel match failures.
llvm-svn: 355433
These arrays are both keyed by CPU name and go into the same tablegenerated file. Merge them so we only need to store keys once.
This also removes a weird space-saving quirk where we used ProcDesc.size() to build an ArrayRef for ProcSched.
Differential Revision: https://reviews.llvm.org/D58939
llvm-svn: 355431
The description for CPUs was just the CPU name wrapped with "Select the " and " processor". We can just do that directly in the help printer instead of making a separate version in the binary for each CPU.
Also remove the Value field that isn't needed and was always 0.
Differential Revision: https://reviews.llvm.org/D58938
llvm-svn: 355429
Apparently older versions of clang like 3.6 require an extra set of curly braces around std::array initializations. I'm told the C++ language was changed regarding this by CWG 1270.
llvm-svn: 355327
Previously we had build_vector PatFrags that called ISD::isBuildVectorAllZeros/Ones. Internally the ISD::isBuildVectorAllZeros/Ones look through bitcasts, but we aren't able to take advantage of that in isel. Instead, we have to canonicalize the types of the all zeros/ones build_vectors and insert bitcasts. Then we have to pattern match those exact bitcasts.
By emitting specific matchers for these 2 nodes, we can make isel look through any bitcasts without needing to explicitly match them. We should also be able to remove the canonicalization to vXi32 from lowering, but I've left that for a follow up.
This removes something like 40,000 bytes from the X86 isel table.
Differential Revision: https://reviews.llvm.org/D58595
llvm-svn: 355224
Subtarget features are stored in a std::bitset that has been subclassed. There is a special constructor to allow the tablegen files to provide a list of bits to initialize the std::bitset to. This constructor isn't constexpr and std::bitset doesn't support many constexpr operations either. This results in a static global constructor being used to initialize the feature bitsets in these files at startup.
To fix this I've introduced a new FeatureBitArray class that holds three 64-bit values representing the initial bit values and taught tablegen to emit hex constants for them based on the feature enum values. This makes the tablegen files less readable than they were before. I can add the list of features back as a comment if we think that's important.
I've added a method to convert from this class into the std::bitset subclass we had before. I considered making the new FeatureBitArray class just implement the std::bitset interface we need instead, but thought I'd see how others felt about that first.
I've simplified the interfaces to SetImpliedBits and ClearImpliedBits a little to minimize the number of times we need to convert to the bitset.
This removes about 27K from my local release+asserts build of llc.
Differential Revision: https://reviews.llvm.org/D58520
llvm-svn: 355167
The previous sort comparator was not deterministic, i.e. in some
situations it would be possible for lhs < rhs && rhs < lhs. This was
discovered by an STL assertion in a Windows debug build of llvm-tblgen.
Differential Revision: https://reviews.llvm.org/D58687
llvm-svn: 354910
The --disassembler-options, or -M, are used to customize
the disassembler and affect its output.
The two implemented options allow selecting register names on ARM:
* With -Mreg-names-raw, the disassembler uses rNN for all registers.
* With -Mreg-names-std it prints sp, lr and pc for r13, r14 and r15,
which is the default behavior of llvm-objdump.
Differential Revision: https://reviews.llvm.org/D57680
llvm-svn: 354870
More or less all the instructions defined in the v8.2a full-fp16
extension are defined as UNPREDICTABLE if you put them in an IT block
(Thumb) or use with any condition other than AL (ARM). LLVM didn't
know that, and was happy to conditionalise them.
In order to force these instructions to count as not predicable, I had
to make a small Tablegen change. The code generation back end mostly
decides if an instruction was predicable by looking for something it
can identify as a predicate operand; there's an isPredicable bit flag
that overrides that check in the positive direction, but nothing that
overrides it in the negative direction.
(I considered the alternative approach of actually removing the
predicate operand from those instructions, but thought that it would
be more painful overall for instructions differing only in data type
to have different shapes of operand list. This way, the only code that
has to notice the difference is the if-converter.)
So I've added an isUnpredicable bit alongside isPredicable, and set
that bit on the right subset of FP16 instructions, and also on the
VSEL, VMAXNM/VMINNM and VRINT[ANPM] families which should be
unpredicable for all data types.
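In Tablegen terms the change boils down to something like this (the instruction
name is illustrative):
  def VEXAMPLE_F16 : Instruction {
    ...
    // Force the if-converter to treat this as not predicable, even though
    // it still carries a predicate operand.
    let isUnpredicable = 1;
  }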
I've included a couple of representative regression tests, both of
which previously caused an fp16 instruction to be conditionalised in
ARM state and (with -arm-no-restrict-it) to be put in an IT block in
Thumb.
Reviewers: SjoerdMeijer, t.p.northover, efriedma
Reviewed By: efriedma
Subscribers: jdoerfert, javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57823
llvm-svn: 354768
OPC_CheckCondCode is always used as operand 2 of a setcc, and it's always surrounded by a MoveChild2 and a MoveParent. By having a dedicated opcode for this case, we can reduce the number of bytes needed for this pattern from 4 bytes to 2.
This saves ~3000 bytes in the X86 table.
llvm-svn: 354763
Summary:
This adds support for defining patterns for global isel using pointer
types, for example:
def : Pat<(load GPR32:$src),
          (p1 (LOAD GPR32:$src))>;
DAGISelEmitter will ignore the pointer information and treat these
types as integers with the same bit-width as the pointer type.
Reviewers: dsanders, rtereshin, arsenm
Reviewed By: arsenm
Subscribers: Petar.Avramovic, wdng, rovka, kristof.beyls, jfb, volkan, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57065
llvm-svn: 354510
This class is used for two different tablegen-generated tables. For one of the tables the Value FeatureBitset only has one bit set. For the other usage the Implies field was unused.
This patch changes the Value field to just be an unsigned. For the usage that put a real vector in the bitset, we now use the previously unused Implies field and leave the Value field unused instead.
This is good for a 16K reduction in the size of llc on my local build with all targets enabled.
llvm-svn: 354243
Summary:
While working on the GISel Combiner, I noticed I was producing location-less
error messages fairly often and set about fixing this. In the process, I
noticed quite a few places elsewhere in TableGen that also neglected to include
a relevant location.
This patch adds locations to errors that relate to a specific record (or a
field within it) and also have easy access to the relevant location. This is
particularly useful when multiclasses are involved as many of these errors
refer to the full name of a record and it's difficult to guess which substring
is grep-able.
Unfortunately, tablegen currently only supports Record granularity so it's not
currently possible to point at a specific Init so these sometimes point at the
record that caused the error rather than the precise origin of the error.
Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, nhaehnle
Reviewed By: nhaehnle
Subscribers: jdoerfert, nhaehnle, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58077
llvm-svn: 353862
This patch adds a -time-regions option to tablegen that can enable timers
(currently only one) that assess the performance of tablegen itself. This
can be useful for identifying scaling problems with tablegen backends.
This particular timer has allowed me to ignore time that is not attributed
to the GISel combiner pass. It's useful by itself, but it is particularly
useful in combination with https://reviews.llvm.org/D52954 which causes
this period of time to be annotated within Xcode Instruments which in turn
allows profile samples and recorded allocations attributed to reading
instructions to be filtered out.
llvm-svn: 353763
If we run into a pattern that looks like this:
add
  (complex $x, $y)
  (complex $x, $z)
We should skip the pattern instead of asserting/doing something unpredictable.
This makes us return an Error in that case, and adds a testcase for skipped
patterns.
Differential Revision: https://reviews.llvm.org/D57980
llvm-svn: 353586
Summary:
There are a few instructions that all map to the same opcode, so
when disassembling, we have to pick one. That was just the first one
before (the except_ref variant in the case of "call"); now it is the
one marked as IsCanonical in tablegen, or failing that, the shortest
name (which is typically the "canonical" one).
Also introduced a canonical "end" instruction for this purpose.
Reviewers: dschuff, tlively
Subscribers: sbc100, jgravelle-google, aheejin, llvm-commits, sunfish
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57713
llvm-svn: 353131
LLVM_ENABLE_DAGISEL_COV can be used to instrument DAGISel tablegen
selection code to show which patterns along with Complex patterns were
used when selecting instructions. Unfortunately this is turned off by
default and was broken but never tested.
This required a simple fix (missing new line) to get it to build again.
llvm-svn: 353091
This patch replaces the existing LLVMVectorSameWidth matcher with LLVMScalarOrSameVectorWidth.
The matching args must be either scalars or vectors with the same number of elements, but in either case the scalar/element type can differ, specified by LLVMScalarOrSameVectorWidth.
I've updated the _overflow intrinsics to demonstrate this - allowing them to return an i1 or <N x i1> overflow result, matching the scalar/vector width of the other (add/sub/mul) result type.
The masked load/store/gather/scatter intrinsics have also been updated to use this, although since we specify the reference type to be llvm_anyvector_ty, we guarantee the mask will be <N x i1>, so there is no change in behaviour.
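For illustration, a hypothetical intrinsic mirroring the shape of the _overflow
intrinsics:
  // Result 1 is i1 for scalar inputs and <N x i1> for vector inputs,
  // tied to the scalar/vector width of overloaded type 0.
  def int_example_with_overflow
      : Intrinsic<[llvm_anyint_ty, LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>],
                  [LLVMMatchType<0>, LLVMMatchType<0>],
                  [IntrNoMem]>;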
Differential Revision: https://reviews.llvm.org/D57090
llvm-svn: 351957
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
Summary:
Right now we include ${TGT}GenCallingConv.inc once for each instruction
selection method implemented by ${TGT}:
- ${TGT}ISelLowering.cpp
- ${TGT}CallLowering.cpp
- ${TGT}FastISel.cpp
Instead, add a mechanism to tablegen for marking a particular convention
as "External", which causes tablegen to emit into the ::llvm namespace,
instead of as a static helper. This allows us to provide a header to
forward declare it, so we can simply call the function from all the
places it is referenced. Typically the calling convention analyzer is
called indirectly, so it doesn't benefit from inlining.
This saves a bit of final binary size, but mostly just saves object file
size:
before   after    diff    artifact
12852K   12492K   -360K   X86ISelLowering.cpp.obj
4640K    4280K    -360K   X86FastISel.cpp.obj
1704K    2092K    +388K   X86CallingConv.cpp.obj
52448K   52336K   -112K   llc.exe
I didn't collect before numbers for X86CallLowering.cpp.obj, which is
for GlobalISel, but we should save 360K there as well.
This patch applies the strategy to the X86 backend, but there is no
reason it couldn't be applied to the other backends that implement
multiple ISel strategies, like AArch64.
Reviewers: craig.topper, hfinkel, efriedma
Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D56883
llvm-svn: 351616
Summary:
The previously introduced new operand type for br_table didn't have
a disassembler implementation, causing an assert.
Reviewers: dschuff, aheejin
Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D56227
llvm-svn: 350366
When you define an instruction alias as a subclass of InstAlias, you
specify all the MC operands for the instruction it expands to, except
for operands that are tied to a previous one, which you leave out in
the expectation that the Tablegen output code will fill them in
automatically.
But the code in Tablegen's AsmWriter backend that skips over a tied
operand was doing it using 'if' instead of 'while', because it wasn't
expecting to find two tied operands in sequence.
So if an instruction updates a pair of registers in place, so that its
MC representation has two input operands tied to the output ones (for
example, Arm's UMLAL instruction), then any alias which wants to
expand to a special case of that instruction is likely to fail to
match, because the indices of subsequent operands will be off by one
in the generated printAliasInstr function.
This patch re-indents some existing code, so it's clearest when
viewed as a diff with whitespace changes ignored.
Reviewers: fhahn, rengolin, sdesmalen, atanasyan, asb, jholewinski, t.p.northover, kparzysz, craig.topper, stoklund
Reviewed By: rengolin
Subscribers: javed.absar, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D53816
llvm-svn: 349141
One of the GCC based bots is objecting to a vector of const EncodingAndInst's:
In file included from /usr/include/c++/8/vector:64,
from /export/users/atombot/llvm/clang-atom-d525-fedora-rel/llvm/utils/TableGen/CodeGenInstruction.h:22,
from /export/users/atombot/llvm/clang-atom-d525-fedora-rel/llvm/utils/TableGen/FixedLenDecoderEmitter.cpp:15:
/usr/include/c++/8/bits/stl_vector.h: In instantiation of 'class std::vector<const {anonymous}::EncodingAndInst, std::allocator<const {anonymous}::EncodingAndInst> >':
/export/users/atombot/llvm/clang-atom-d525-fedora-rel/llvm/utils/TableGen/FixedLenDecoderEmitter.cpp:375:32: required from here
/usr/include/c++/8/bits/stl_vector.h:351:21: error: static assertion failed: std::vector must have a non-const, non-volatile value_type
static_assert(is_same<typename remove_cv<_Tp>::type, _Tp>::value,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/8/bits/stl_vector.h:354:21: error: static assertion failed: std::vector must have the same value_type as its allocator
static_assert(is_same<typename _Alloc::value_type, _Tp>::value,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
llvm-svn: 349046
Summary:
Separate the concept of an encoding from an instruction. This will enable
the definition of additional encodings for the same instruction which can
be used to support variable length instruction sets in the disassembler
(and potentially assembler but I'm not working towards that right now)
without causing an explosion in the number of Instruction records that
CodeGen then has to pick between.
Reviewers: bogner, charukcs
Reviewed By: bogner
Subscribers: kparzysz, llvm-commits
Differential Revision: https://reviews.llvm.org/D52366
llvm-svn: 349041
Summary:
This fixes support in DAGISelMatcher backend for DAG nodes with multiple
result values. Previously the order of results in selected DAG nodes always
matched the order of results in ISel patterns. After the change the order of
results matches the order of operands in OutOperandList instead.
For example, given this definition from the attached test case:
def INSTR : Instruction {
  let OutOperandList = (outs GPR:$r1, GPR:$r0);
  let InOperandList = (ins GPR:$t0, GPR:$t1);
  let Pattern = [(set i32:$r0, i32:$r1, (udivrem i32:$t0, i32:$t1))];
}
the DAGISelMatcher backend currently produces a matcher that creates INSTR
nodes with the first result `$r0` and the second result `$r1`, contrary to the
order in the OutOperandList. The order of operands in OutOperandList does not
matter at all, which is unexpected (and unfortunate) because the order of
results of a DAG node does matter, perhaps a lot.
With this change, if the order in OutOperandList does not match the order in
Pattern, DAGISelMatcherGen emits CompleteMatch opcodes with the order of
results taken from OutOperandList. Backend writers can use it to express
result reorderings in TableGen.
If the order in OutOperandList matches the order in Pattern, the result of
DAGISelMatcherGen is unaffected.
Patch by Eugene Sharygin
Reviewers: andreadb, bjope, hfinkel, RKSimon, craig.topper
Reviewed By: craig.topper
Subscribers: nhaehnle, craig.topper, llvm-commits
Differential Revision: https://reviews.llvm.org/D55055
llvm-svn: 348326
Currently, variadic operands on an MCInst are assumed to be uses,
because they come after the defs. However, this is not always the case,
for example the Arm/Thumb LDM instructions write to a variable number of
registers.
This adds a property of instruction definitions which can be used to
mark variadic operands as defs. This only affects MCInst, because
MachineInstr already tracks use/def per operand in each instance
of the instruction, so can already represent this.
This property can then be checked in MCInstrDesc, allowing us to remove
some special cases in ARMAsmParser::isITBlockTerminator.
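A minimal sketch of how an instruction would opt in (the instruction name is
made up, and the bit name is my recollection of what this patch adds rather
than something stated above):
  def EXAMPLE_LDM : Instruction {
    ...
    // The variable_ops at the end of the operand list are register defs,
    // not uses.
    let variadicOpsAreDefs = 1;
  }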
Differential revision: https://reviews.llvm.org/D54853
llvm-svn: 348114
Simple predicates, such as those defined by `CheckRegOperandSimple` or
`CheckImmOperandSimple`, were not being negated when used with `CheckNot`.
This change fixes this issue by defining the previously declared methods to
handle simple predicates.
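A minimal sketch of the sort of predicate this fixes (the def name is made up):
  // Before this change, the negation of the simple check was not emitted.
  def ExampleNotRegOp2 : MCSchedPredicate<CheckNot<CheckRegOperandSimple<2>>>;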
Differential revision: https://reviews.llvm.org/D55089
llvm-svn: 348034
Summary:
This simplifies writing predicates for pattern fragments that are
automatically re-associated or commuted.
For example, a followup patch adds patterns for fragments of the form
(add (shl $x, $y), $z) to the AMDGPU backend. Such patterns are
automatically commuted to (add $z, (shl $x, $y)), which makes it basically
impossible to refer to $x, $y, and $z generically in the PredicateCode.
With this change, the PredicateCode can refer to $x, $y, and $z simply
as `Operands[i]`.
Test confirmed that there are no changes to any of the generated files
when building all (non-experimental) targets.
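A sketch of the intended usage (the fragment and the helper function are made
up, and the opt-in field name is my recollection of what the patch adds rather
than something stated above):
  def add_shl_frag : PatFrag<(ops node:$x, node:$y, node:$z),
                             (add (shl node:$x, node:$y), node:$z), [{
    // Operands[0..2] map to $x, $y, $z even if the add has been commuted.
    return isExampleProfitable(Operands[0], Operands[1], Operands[2]);
  }]> {
    let PredicateCodeUsesOperands = 1;
  }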
Change-Id: I61c00ace7eed42c1d4edc4c5351174b56b77a79c
Reviewers: arsenm, rampitec, RKSimon, craig.topper, hfinkel, uweigand
Subscribers: wdng, tpr, llvm-commits
Differential Revision: https://reviews.llvm.org/D51994
llvm-svn: 347992
When tablegen detects that there exist two subregister compositions that
result in the same value for some register, it will emit a warning. This
kind of an overlap in compositions should only happen when it is caused
by a user-defined composition. It can happen, however, that the user-
defined composition is not identically equal to another one, but it does
produce the same value for one or more registers. In such cases suppress
the warning.
This patch silences the warning when building the SystemZ backend
after D50725.
Differential Revision: https://reviews.llvm.org/D50977
llvm-svn: 347894
This patch adds the ability to specify via tablegen which processor resources
are load/store queue resources.
A new tablegen class named MemoryQueue can be optionally used to mark resources
that model load/store queues. Information about the load/store queue is
collected at 'CodeGenSchedule' stage, and analyzed by the 'SubtargetEmitter' to
initialize two new fields in struct MCExtraProcessorInfo named `LoadQueueID` and
`StoreQueueID`. Those two fields are identifiers for buffered resources used to
describe the load queue and the store queue.
Field `BufferSize` is interpreted as the number of entries in the queue, while
the number of units is a throughput indicator (i.e. number of available pickers
for loads/stores).
At construction time, LSUnit in llvm-mca checks for the presence of extra
processor information (i.e. MCExtraProcessorInfo) in the scheduling model. If
that information is available, and fields LoadQueueID and StoreQueueID are set
to a value different than zero (i.e. the invalid processor resource index), then
LSUnit initializes its LoadQueue/StoreQueue based on the BufferSize value
declared by the two processor resources.
With this patch, we more accurately track dynamic dispatch stalls caused by the
lack of LS tokens (i.e. load/store queue full). This is also shown by the
differences in two BdVer2 tests. Stalls that were previously classified as
generic SCHEDULER FULL stalls are now correctly classified either as "load
queue full" or "store queue full".
About the differences in the -scheduler-stats view: those differences are
expected, because entries in the load/store queue are not released at
instruction issue stage. Instead, those are released at instruction executed
stage. This is the main reason why, for the modified tests, the load/store
queues get full before PdEx is full.
Differential Revision: https://reviews.llvm.org/D54957
llvm-svn: 347857
There are quite strong constraints on how you can use the TIED_TO
constraint between MC operands, many of which are currently not
checked until compiler run time.
MachineVerifier enforces that operands can only be tied together in
pairs (no three-way ties), and MachineInstr::tieOperands enforces that
one of the tied operands must be an output operand (def) and the other
must be an input operand (use).
Now we check these at TableGen time, so that if you violate any of
them in a new instruction definition, you find out immediately,
instead of having to wait until you compile something that makes code
generation hit one of those assertions.
Also in this commit, all the error reports in ParseConstraint now
include the name and source location of the def where the problem
happened, so that if you do trigger any of these errors, it's easier
to find the part of your TableGen input where you made the mistake.
The trunk sources already build successfully with this additional
error check, so I think no in-tree target has any of these problems.
Reviewers: fhahn, lhames, nhaehnle, MatzeB
Reviewed By: MatzeB
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D53815
llvm-svn: 347743
`llvm-mca` relies on the predicates to be based on `MCSchedPredicate` in order
to resolve the scheduling for variant instructions. Otherwise, it aborts
the building of the instruction model early.
However, the scheduling model emitter in `TableGen` gives up too soon, unless
all processors use only such predicates.
In order to allow more processors to be used with `llvm-mca`, this patch
emits scheduling transitions if any processor uses these predicates. The
transition emitted for the processors using legacy predicates is the one
specified with `NoSchedPred`, which is based on `MCSchedPredicate`.
Preferably, `llvm-mca` should instead assume a reasonable default when a
variant transition is not based on `MCSchedPredicate` for a given processor.
This issue should be revisited in the future.
Differential revision: https://reviews.llvm.org/D54648
llvm-svn: 347504
A call to @llvm.trap can be expected to be cold (i.e. unlikely to be
reached in a normal program execution).
Outlining paths which unconditionally trap is an important memory
saving. As the hot/cold splitting pass (imho) should not treat all
noreturn calls as cold, explicitly mark @llvm.trap cold so that it can
be outlined.
Split out of https://reviews.llvm.org/D54244.
Differential Revision: https://reviews.llvm.org/D54329
llvm-svn: 346885
Summary:
This simplifies the code and moves everything to tablegen for consistency. This
also prepares the ground for adding issue counters.
Reviewers: gchatelet, john.brawn, jsji
Subscribers: nemanjai, mgorny, javed.absar, kbarton, tschuett, llvm-commits
Differential Revision: https://reviews.llvm.org/D54297
llvm-svn: 346489
Summary:
As a bonus, this arguably improves the code by making it simpler.
gcc 8 on Ubuntu 18.10 reports the following:
==39667==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7fffffff8ae0 at pc 0x555555dbfc68 bp 0x7fffffff8760 sp 0x7fffffff8750
WRITE of size 8 at 0x7fffffff8ae0 thread T0
#0 0x555555dbfc67 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider::_Alloc_hider(char*, std::allocator<char>&&) /usr/include/c++/8/bits/basic_string.h:149
#1 0x555555dbfc67 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&) /usr/include/c++/8/bits/basic_string.h:542
#2 0x555555dbfc67 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > std::operator+<char, std::char_traits<char>, std::allocator<char> >(char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&) /usr/include/c++/8/bits/basic_string.h:6009
#3 0x555555dbfc67 in searchableFieldType /home/nha/amd/build/san/llvm-src/utils/TableGen/SearchableTableEmitter.cpp:168
(...)
Address 0x7fffffff8ae0 is located in stack of thread T0 at offset 864 in frame
#0 0x555555dbef3f in searchableFieldType /home/nha/amd/build/san/llvm-src/utils/TableGen/SearchableTableEmitter.cpp:148
Reviewers: fhahn, simon_tatham, kparzysz
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D53931
llvm-svn: 345749
Before this patch, class PredicateExpander only knew how to expand simple
predicates that performed checks on instruction operands.
In particular, the new scheduling predicate syntax was not rich enough to
express checks like this one:
Foo(MI->getOperand(0).getImm()) == ExpectedVal;
Here, the immediate operand value at index zero is passed as input to function
Foo, and ExpectedVal is compared against the value returned by function Foo.
While this predicate pattern doesn't show up in any X86 model, it shows up in
other upstream targets. So, being able to support those predicates is
fundamental if we want to be able to modernize all the scheduling models
upstream.
With this patch, we allow users to specify whether a register/immediate operand value
needs to be passed as input to a function as part of the predicate check. Now,
register/immediate operand checks all derive from base class CheckOperandBase.
This patch also changes where TIIPredicate definitions are expanded by the
instruction info emitter. Before, definitions were expanded in class
XXXGenInstrInfo (where XXX is a target name).
With the introduction of this new syntax, we may want to have TIIPredicates
expanded directly in XXXInstrInfo. That is because functions used by the new
operand predicates may only exist in the derived class (i.e. XXXInstrInfo).
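For illustration, this is roughly the kind of C++ such a predicate could expand to; the
target name, the helper `Foo`, and `ExpectedVal` are placeholders carried over from the
example above:
```
// Illustrative only: the expansion lives in the derived XXXInstrInfo class so
// that the target-specific helper Foo is visible to the generated code.
bool XXXInstrInfo::isFooOperation(const MachineInstr *MI) const {
  return Foo(MI->getOperand(0).getImm()) == ExpectedVal;
}
```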
This patch is a non-functional change for the existing scheduling models.
In future, we will be able to use this richer syntax to better describe complex
scheduling predicates, and expose them to llvm-mca.
Differential Revision: https://reviews.llvm.org/D53880
llvm-svn: 345714
Summary:
The pfm counters are now in the ExegesisTarget rather than the
MCSchedModel (PR39165).
This also compresses the pfm counter tables (PR37068).
Reviewers: RKSimon, gchatelet
Subscribers: mgrang, llvm-commits
Differential Revision: https://reviews.llvm.org/D52932
llvm-svn: 345243
Summary:
Some targets have very long encodings and uint64_t isn't sufficient. uint128_t
isn't portable so such targets need to use an object instead.
There is one catch with this at the moment: no string of bits extracted
from the encoding may exceed 64 bits. Fields are still permitted to
exceed 64 bits so long as they aren't one contiguous string of bits. If
this proves to be a problem then we can modify the generation of
fieldFromInstruction() calls to account for it but for now I've added an
assertion for this.
InsnType must either be integral or an APInt-like object that must:
* Have a static const max_size_in_bits equal to the number of bits in the encoding.
* be default-constructible and copy-constructible
* be constructible from a uint64_t (this is the key area the interface deviates
from APInt since this constructor does not take the bit width)
* be constructible from an APInt (this can be private)
* be convertible to uint64_t
* Support the ~, &, ==, !=, and |= operators with other objects of the same type
* Support shift (<<, >>) with signed and unsigned integers on the RHS
* Support put (<<) to raw_ostream&
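A rough standalone sketch (not part of the patch) of an APInt-backed type satisfying the
checklist above, for a hypothetical 128-bit encoding:
```
#include "llvm/ADT/APInt.h"
#include "llvm/Support/raw_ostream.h"
#include <cstdint>

// Illustrative only: a 128-bit "InsnType" shaped after the requirements above.
struct Insn128 {
  static const unsigned max_size_in_bits = 128;

  Insn128() : Bits(max_size_in_bits, 0) {}
  Insn128(uint64_t V) : Bits(max_size_in_bits, V) {}   // from uint64_t, no width
  operator uint64_t() const { return Bits.trunc(64).getZExtValue(); }

  Insn128 operator~() const { return Insn128(~Bits); }
  Insn128 operator&(const Insn128 &O) const { return Insn128(Bits & O.Bits); }
  bool operator==(const Insn128 &O) const { return Bits == O.Bits; }
  bool operator!=(const Insn128 &O) const { return Bits != O.Bits; }
  Insn128 &operator|=(const Insn128 &O) { Bits |= O.Bits; return *this; }
  Insn128 operator<<(unsigned N) const { return Insn128(Bits.shl(N)); }
  Insn128 operator>>(unsigned N) const { return Insn128(Bits.lshr(N)); }

  friend llvm::raw_ostream &operator<<(llvm::raw_ostream &OS, const Insn128 &I) {
    I.Bits.print(OS, /*isSigned=*/false);
    return OS;
  }

private:
  Insn128(const llvm::APInt &V) : Bits(V) {}           // from APInt (private)
  llvm::APInt Bits;
};
```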
Reviewers: bogner, charukcs
Subscribers: nhaehnle, llvm-commits
Differential Revision: https://reviews.llvm.org/D52100
llvm-svn: 345056
Summary:
Replace its functionality with a TableGen InstrInfo relational
instruction mapping. Although arguably more complex than the TableGen
backend it replaces, the relational mapping is a smaller maintenance burden.
Reviewers: aardappel, aheejin, dschuff
Subscribers: mgorny, sbc100, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D53307
llvm-svn: 344962
This patch adds the ability to identify instructions that are "move elimination
candidates". It also allows scheduling models to describe processor register
files that allow move elimination.
A move elimination candidate is an instruction that can be eliminated at
register renaming stage.
Each subtarget can specify which instructions are move elimination candidates
with the help of tablegen class "IsOptimizableRegisterMove" (see
llvm/Target/TargetInstrPredicate.td).
For example, on X86, BtVer2 allows both GPR and MMX/SSE moves to be eliminated.
The definition of 'IsOptimizableRegisterMove' for BtVer2 looks like this:
```
def : IsOptimizableRegisterMove<[
  InstructionEquivalenceClass<[
    // GPR variants.
    MOV32rr, MOV64rr,
    // MMX variants.
    MMX_MOVQ64rr,
    // SSE variants.
    MOVAPSrr, MOVUPSrr,
    MOVAPDrr, MOVUPDrr,
    MOVDQArr, MOVDQUrr,
    // AVX variants.
    VMOVAPSrr, VMOVUPSrr,
    VMOVAPDrr, VMOVUPDrr,
    VMOVDQArr, VMOVDQUrr
  ], CheckNot<CheckSameRegOperand<0, 1>> >
]>;
```
Definitions of IsOptimizableRegisterMove from processor models of the same
Target are processed by the SubtargetEmitter to auto-generate a target-specific
override for each of the following predicate methods:
```
bool TargetSubtargetInfo::isOptimizableRegisterMove(const MachineInstr *MI) const;

bool MCInstrAnalysis::isOptimizableRegisterMove(const MCInst &MI,
                                                unsigned CPUID) const;
```
By default, those methods return false (i.e. conservatively assume that there
are no move elimination candidates).
Tablegen class RegisterFile has been extended with the following information:
- The set of register classes that allow move elimination.
- Maximum number of moves that can be eliminated every cycle.
- Whether move elimination is restricted to moves from registers that are
known to be zero.
This patch is structured in three parts:
A first part (which is mostly boilerplate) adds the new
'isOptimizableRegisterMove' target hooks, and extends existing register file
descriptors in MC by introducing new fields to describe properties related to
move elimination.
A second part uses the new tablegen constructs to describe move elimination in
the BtVer2 scheduling model.
A third part teaches llvm-mca how to query the new 'isOptimizableRegisterMove'
hook to mark instructions that are candidates for move elimination. It also
teaches class RegisterFile how to describe constraints on move elimination at
PRF granularity.
llvm-mca tests for btver2 show differences before/after this patch.
Differential Revision: https://reviews.llvm.org/D53134
llvm-svn: 344334
Summary:
The predicate function is added in InlinePatternFragments, so there is no need
to do it here. As a result, all uses of addPredicateFn are located in
InlinePatternFragments.
Test confirmed that there are no changes to generated files when
building all (non-experimental) targets.
Change-Id: I720e42e045ca596eb0aa339fb61adf6fe71034d5
Reviewers: arsenm, rampitec, RKSimon, craig.topper, hfinkel, uweigand
Subscribers: wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D51993
llvm-svn: 343977
There are a few leftovers in rL343163 which span two lines. This commit
changes these llvm::sort(C.begin(), C.end(), ...) calls to llvm::sort(C, ...)
llvm-svn: 343426
Summary:
By using the existing isCodeGenOnly bit in the tablegen defs, as
suggested by tlively in https://reviews.llvm.org/D51662
Tested: llvm-lit -v `find test -name WebAssembly`
Reviewers: tlively
Subscribers: dschuff, sbc100, jgravelle-google, aheejin, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D52373
llvm-svn: 342772
Summary:
This ensures we have the non-register version of the instruction.
The stack version of call_indirect now wants a type index argument,
so that has been added in the existing tests.
Tested:
llvm-lit -v `find test -name WebAssembly`
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, aheejin, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D51662
llvm-svn: 342753
The reason build #25777 might have failed is that the SmallVector move
constructor is _not_ noexcept, and the stl implementation used by that buildbot
calls _VSTD::move_if_noexcept() (according to the backtrace).
OpcodeInfo has a default move constructor, and the copy constructor is deleted.
However, as far as I can see, SmallVector doesn't declare a noexcept move
constructor. So, what I believe is happening here is that
_VSTD::move_if_noexcept() returns an lvalue reference rather than an rvalue
reference.
This eventually triggers a copy that fails to compile.
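For context, a standalone illustration of the general std::move_if_noexcept rule this
reasoning hinges on (simplified types, not the actual OpcodeInfo/SmallVector definitions):
```
#include <type_traits>
#include <utility>

struct NoexceptMove {
  NoexceptMove(NoexceptMove &&) noexcept;
  NoexceptMove(const NoexceptMove &);
};
struct ThrowingMove {
  ThrowingMove(ThrowingMove &&);       // move constructor is not noexcept
  ThrowingMove(const ThrowingMove &);  // but a copy constructor is available
};

// noexcept move: move_if_noexcept yields an rvalue reference, so the move is used.
static_assert(std::is_rvalue_reference<decltype(
                  std::move_if_noexcept(std::declval<NoexceptMove &>()))>::value,
              "move wins");
// throwing move + usable copy: it yields a const lvalue reference, so a copy is
// attempted instead.
static_assert(std::is_lvalue_reference<decltype(
                  std::move_if_noexcept(std::declval<ThrowingMove &>()))>::value,
              "copy wins");
```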
Hopefully, using a std::vector instead of SmallVector (as it was originally
suggested by Simon in the code review) should be enough to unbreak the buildbot.
llvm-svn: 342561
This patch adds the ability for processor models to describe dependency breaking
instructions.
Different processors may specify a different set of dependency-breaking
instructions.
That means, we cannot assume that all processors of the same target would use
the same rules to classify dependency breaking instructions.
The main goal of this patch is to provide the means to describe dependency
breaking instructions directly via tablegen, and have the following
TargetSubtargetInfo hooks redefined in overrides by tablegen'd
XXXGenSubtargetInfo classes (here, XXX is a Target name).
```
virtual bool isZeroIdiom(const MachineInstr *MI, APInt &Mask) const {
  return false;
}
virtual bool isDependencyBreaking(const MachineInstr *MI, APInt &Mask) const {
  return isZeroIdiom(MI, Mask);
}
```
An instruction MI is a dependency-breaking instruction if a call to method
isDependencyBreaking(MI) on the STI (TargetSubtargetInfo object) evaluates to
true. Similarly, an instruction MI is a special case of zero-idiom dependency
breaking instruction if a call to STI.isZeroIdiom(MI) returns true.
The extra APInt is used for those targets that may want to select which machine
operands have their dependency broken (see comments in code).
Note that by default, subtargets don't know about the existence of
dependency-breaking. In the absence of external information, those method calls
would always return false.
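A hedged sketch of how a client might consult these hooks (it assumes `STI` and `MI` are
in scope, and that the mask is sized to the number of operands; the exact mask convention
is target-specific):
```
// Sketch only: classify MI using the subtarget hooks described above.
llvm::APInt Mask(MI->getNumOperands(), 0);
if (STI.isZeroIdiom(MI, Mask)) {
  // MI always produces zero, so it breaks the dependency on its inputs.
} else if (STI.isDependencyBreaking(MI, Mask)) {
  // MI breaks the dependency on the operands identified by Mask, even though
  // its result is not necessarily zero.
}
```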
A new tablegen class named STIPredicate has been added by this patch to let
processor models classify instructions that have properties in common. The idea
is that an MCInstrPredicate definition can be used to "generate" an instruction
equivalence class, where all instructions of the same class share a common
property.
STIPredicate definitions are essentially a collection of instruction equivalence
classes.
Also, different processor models can specify a different variant of the same
STIPredicate with different rules (i.e. predicates) to classify instructions.
Tablegen backends (in this particular case, the SubtargetEmitter) will be able
to process STIPredicate definitions, and automatically generate functions in
XXXGenSubtargetInfo.
This patch introduces two special kinds of STIPredicate classes named
IsZeroIdiomFunction and IsDepBreakingFunction in tablegen. It also adds a
definition for those in the BtVer2 scheduling model only.
This patch supersedes the one committed at r338372 (phabricator review: D49310).
The main advantages are:
- We can describe subtarget predicates via tablegen using STIPredicates.
- We can describe zero-idioms / dep-breaking instructions directly via
tablegen in the scheduling models.
In future, the STIPredicates framework can be used for solving other problems.
Examples of future developments are:
- Teach how to identify optimizable register-register moves
- Teach how to identify slow LEA instructions (each subtarget defining its own
concept of "slow" LEA).
- Teach how to identify instructions that have undocumented false dependencies
on the output registers on some processors only.
It is also (in my opinion) an elegant way to expose knowledge to both external
tools like llvm-mca, and codegen passes.
For example, machine schedulers in LLVM could reuse that information when
internally constructing the data dependency graph for a code region.
This new design feature is also an "opt-in" feature. Processor models don't have
to use the new STIPredicates. It has all been designed to be as unintrusive as
possible.
Differential Revision: https://reviews.llvm.org/D52174
llvm-svn: 342555
Further extension to D51035, this patch avoids all repeated predicates[] matching by caching as it collects the patterns that have multiple variants.
Saves around 25secs in debug builds of x86 -gen-dag-isel.
Differential Revision: https://reviews.llvm.org/D51839
llvm-svn: 342467
Summary:
Now uses the StackBased bit from the tablegen defs to identify
stack instructions (and ignore register based or non-wasm instructions).
Also changed how we store operands, since we now have up to 16 of them
per instruction. To not cause static data bloat, these are compressed
into a tiny table.
+ a few other cleanups.
Tested:
- MCTest
- llvm-lit -v `find test -name WebAssembly`
Reviewers: dschuff, jgravelle-google, sunfish, tlively
Subscribers: sbc100, aheejin, llvm-commits
Differential Revision: https://reviews.llvm.org/D51320
llvm-svn: 341081
Summary:
Add comments to help readers avoid having to read tablegen backends to
understand the code. Also remove unnecessary breaks from the output.
Reviewers: dschuff, aheejin
Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D51371
llvm-svn: 340864
CodeGenDAGPatterns::GenerateVariants is a costly function in many tblgen commands (33.87% of the total runtime of x86 -gen-dag-isel), and due to the O(N^2) nature of the function, there are a high number of repeated comparisons of the pattern's vector<Predicate>.
This initial patch at least avoids repeating these comparisons for every Variant in a pattern. I began investigating caching all the matches before entering the loop but hit issues with how best to store the data and how to update the cache as patterns were added.
Saves around 15secs in debug builds of x86 -gen-dag-isel.
Differential Revision: https://reviews.llvm.org/D51035
llvm-svn: 340837
Summary:
The new stackification backend generates the giant switch statement
used to translate instructions to their stackified forms. I did this
because it was more interesting than adding all the different vector
versions of the various SIMD instructions to the switch statement
manually.
Reviewers: aardappel, aheejin, dschuff
Subscribers: mgorny, sbc100, jgravelle-google, sunfish, jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D51318
llvm-svn: 340781
Summary:
So far, the `isReturn` property is used to mean both a return instruction
from a function and the end of an EH scope, a scope that starts with an EH
scope entry BB and ends with a catchret or a cleanupret instruction.
Because WinEH uses funclets, all EH-scope-ending instructions are also
real return instructions from a function. But for wasm, they only serve
as the end marker of an EH scope and not as a return instruction that
exits a function. This mismatch caused incorrect prolog and epilog
generation in wasm EH scopes. This patch fixes this.
This patch is in the same vein with rL333045, which splits
`MachineBasicBlock::isEHFuncletEntry` into `isEHFuncletEntry` and
`isEHScopeEntry`.
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D50653
llvm-svn: 340325
We were just caching the MVT set of legal types, then every call creating a new TypeSetByHwMode with it and passing it back on the stack. There's no need to do this - we can create and cache the whole TypeSetByHwMode once and return a const reference to it each time.
Additionally, TypeInfer::expandOverloads wasn't making use of the fact that the cache just contains a default mode containing all the types.
Saves up to 30secs in debug builds of x86 -gen-dag-isel.
Differential Revision: https://reviews.llvm.org/D50903
llvm-svn: 340042
This operator is called a great deal, by checking for the cheap isSimple equality cases first (a common occurrence) we can improve performance as we avoid a lot of std::map find/iteration in hasDefault.
isSimple also means that a default value is present, so we can avoid some hasDefault calls.
This also avoids a rather dodgy piece of logic that was checking for isSimple() && !VTS.isSimple() but not the inverse - it now uses the general hasDefault mode comparison test instead.
Saves around 15secs in debug builds of x86 -gen-dag-isel.
Differential Revision: https://reviews.llvm.org/D50841
llvm-svn: 339890
I noticed while profiling tablegen (PR28222) that we were calling Child->getType(0), which creates a ValueTypeByHwMode on the fly from the requested internal TypeSetByHwMode type and returns it by value. We then treat it as a TypeSetByHwMode reference, which involves constructing a new TypeSetByHwMode on the stack with a large amount of std::map iterating/copying all along the way.
I am not an expert on tablegen, but AFAICT this is all unnecessary and we should be calling Child->getExtType(0) which returns the original TypeSetByHwMode by reference.
This gives me a 90sec reduction in msvc debug builds of x86 -gen-dag-isel.
Differential Revision: https://reviews.llvm.org/D50789
llvm-svn: 339812
This patch removes redundant template argument `TargetName` from TIIPredicate.
Tablegen can always infer the target name from the context. So we don't need to
force users of TIIPredicate to always specify it.
This allows us to better modularize the tablegen class hierarchy for the
so-called "function predicates". class FunctionPredicateBase has been added; it
is currently used as a building block for TIIPredicates. However, I plan to
reuse that class to model other function predicate classes too (i.e. not just
TIIPredicates). For example, this can be a first step towards implementing
proper support for dependency breaking instructions in tablegen.
This patch also adds a verification step on TIIPredicates in tablegen.
We cannot have multiple TIIPredicates with the same name. Otherwise, this will
cause build errors later on, when tablegen'd .inc files are included by cpp
files and then compiled.
Differential Revision: https://reviews.llvm.org/D50708
llvm-svn: 339706
The behavior in 64-bit mode is different between Intel and AMD CPUs. Intel ignores the 0x66 prefix. AMD does not. objdump doesn't ignore the 0x66 prefix. Since LLVM aims to match objdump behavior, we should do the same.
While I was trying to fix this I had to change brtarget16/32 to use ENCODING_IW/ID instead of ENCODING_Iv to get the 0x66+REX.W case to act sort of sanely. It's still wrong, but that's a problem for another day.
The change in encoding exposed the fact that 16-bit mode disassembly of relative jumps was creating JMP_4 with a 2 byte immediate. It should have been JMP_2. From just printing you can't tell the difference, but if you dumped the encoding it wouldn't have matched what we started with.
While fixing that, it exposed that jo/jno opcodes were missing from the switch that this patch deleted and there were no test cases for them.
Fixes PR38537.
llvm-svn: 339622
This is a follow-up of r339552.
As pointed out by Craig in D50566, we don't need a formatted_raw_ostream to
indent strings. We can use instead raw_ostream::indent().
Internally, class PredicateExpander already keeps track of the current
indentation level. Also, the grammar for predicates is well parenthesized, and
therefore we don't need to use a formatted_raw_ostream to continuously track the
column number. Instead we can safely replace all the uses of
formatted_raw_ostream::PadToColumn() with uses of raw_ostream::indent().
By replacing formatted_raw_ostream with a simpler raw_ostream, we also avoid the
implicit check on the newline character on every print to stream.
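In other words (a minimal before/after illustration, not the patch itself):
```
#include "llvm/Support/FormattedStream.h"
#include "llvm/Support/raw_ostream.h"

void emitIndented(llvm::formatted_raw_ostream &FOS, llvm::raw_ostream &OS,
                  unsigned IndentLevel) {
  // Before: pad to an absolute column; needs formatted_raw_ostream because the
  // stream has to keep track of the current column.
  FOS.PadToColumn(IndentLevel * 2);
  FOS << "return true;\n";

  // After: just emit N spaces; any raw_ostream will do.
  OS.indent(IndentLevel * 2) << "return true;\n";
}
```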
No functional change intended.
llvm-svn: 339577
This patch refactors the logic that expands predicates of a variant scheduling
class.
The idea is to improve the readability of the auto-generated code by removing
redundant parentheses around predicate expressions, and by removing redundant
if(true) statements.
This patch replaces the definition of NoSchedPred in TargetSchedule.td with an
instance of MCSchedPredicate. The new definition is semantically equivalent to
the previous one. The main difference is that now SubtargetEmitter knows that it
represents predicate "true".
Before this patch, we always generated an if (true) for the default transition
of a variant scheduling class.
Example (taken from AArch64GenSubtargetInfo.inc) :
```
if (SchedModel->getProcessorID() == 3) { // CycloneModel
  if ((TII->isScaledAddr(*MI)))
    return 927; // (WriteIS_WriteLD)_ReadBaseRS
  if ((true))
    return 928; // WriteLD_ReadDefault
}
```
Extra parentheses were also generated around the predicate expressions.
With this patch, we get the following auto-generated checks:
```
if (SchedModel->getProcessorID() == 3) { // CycloneModel
  if (TII->isScaledAddr(*MI))
    return 927; // (WriteIS_WriteLD)_ReadBaseRS
  return 928; // WriteLD_ReadDefault
}
```
The new auto-generated code behaves exactly the same as before. So, technically
this is a non-functional change.
Differential revision: https://reviews.llvm.org/D50566
llvm-svn: 339552
Part of the logic has been moved to helper functions to (hopefully) improve
readability.
Added a few code comments to better describe how the algorithm works.
No functional change intended.
llvm-svn: 339421
This patch introduces tablegen class MCStatement.
Currently, an MCStatement can be either a return statement, or a switch
statement.
```
MCStatement:
  MCReturnStatement
  MCOpcodeSwitchStatement
```
An MCReturnStatement expands to a return statement, and the boolean expression
associated with the return statement is described by an MCInstPredicate.
An MCOpcodeSwitchStatement is a switch statement where the condition is a check
on the machine opcode. It allows the definition of multiple checks, as well as a
default case. More details on the grammar implemented by these two new
constructs can be found in the diff for TargetInstrPredicates.td.
This patch makes it easier to read the body of auto-generated TargetInstrInfo
predicates.
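For illustration, the shape of the generated body this grammar is aiming at (the target
prefix, opcodes, and the checked property are made up):
```
// Illustrative only: what an MCOpcodeSwitchStatement with a default case
// roughly lowers to inside an auto-generated TargetInstrInfo predicate.
bool XXXGenInstrInfo::isSomeProperty(const MachineInstr &MI) const {
  switch (MI.getOpcode()) {
  default:
    return false;  // the default MCStatement of the switch
  case XXX::FOO_OPC:
  case XXX::BAR_OPC:
    return MI.getOperand(0).getReg() == MI.getOperand(1).getReg();
  }
}
```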
In future, I plan to reuse/extend the MCStatement grammar to describe more
complex target hooks. For now, this is just a first step (mostly a minor
cosmetic change to polish the new predicates framework).
Differential Revision: https://reviews.llvm.org/D50457
llvm-svn: 339352
Summary:
The interface to get size and spill size of a register
was moved from MCRegisterInfo to TargetRegisterInfo over
a year ago. Afaik the old interface has been around
to give out-of-tree targets a chance to adapt to the
new interface.
One problem with the old MCRegisterClass::PhysRegSize was that
it represented the size of a register as "size in bits" / 8.
So a register had to be a multiple of eight bits wide for the
size to be correct (and the byte size for the target needed to
be eight bits).
Reviewers: kparzysz, qcolombet
Reviewed By: kparzysz
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D47199
llvm-svn: 339350
Summary:
This particular map is hardly ever queried and has a phased usage pattern (insert,
iterate, query, insert, iterate) so it's a good candidate for a sorted vector and
std::lower_bound.
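The pattern itself is generic; a small standalone sketch (placeholder key/value types, not
the patch's actual data structures):
```
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Phase 1: bulk inserts; the vector stays unsorted while it is being filled.
std::vector<std::pair<unsigned, std::string>> Table;

void insert(unsigned Key, std::string Value) {
  Table.emplace_back(Key, std::move(Value));
}

// Phase 2: sort once before the query phase begins.
void finalize() {
  std::sort(Table.begin(), Table.end(),
            [](const auto &A, const auto &B) { return A.first < B.first; });
}

// Phase 3: queries use binary search instead of a node-based map.
const std::string *lookup(unsigned Key) {
  auto It = std::lower_bound(
      Table.begin(), Table.end(), Key,
      [](const auto &Entry, unsigned K) { return Entry.first < K; });
  return (It != Table.end() && It->first == Key) ? &It->second : nullptr;
}
```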
This significantly reduces the run time of runTargetDesc() in some circumstances.
One llvm-tblgen invocation in my build improves the time spent in runTargetDesc()
from 9.86s down to 0.80s (~92%) without changing the output. The same invocation
also has 2GB less allocation churn.
Reviewers: bogner, rtereshin, aditya_nandakumar, volkan
Reviewed By: rtereshin
Subscribers: mgrang, dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D50272
llvm-svn: 339208
The DAG combiner logic to simplify AND masks in shift counts is invalid.
While it is true that the SystemZ shift instructions ignore all but the
low 6 bits of the shift count, it is still invalid to simplify the AND
masks while the DAG still uses the standard shift operators (which are
*not* defined to match the SystemZ instruction behavior).
Instead, this patch performs equivalent operations during instruction
selection. For completely removing the AND, this now happens via
additional DAG match patterns implemented by a multi-alternative
PatFrags. For simplifying a 32-bit AND to a 16-bit AND, the existing DAG
patterns were already mostly OK, they just needed an output XForm to
actually truncate the immediate value.
Unfortunately, the latter change also exposed a bug in TableGen: it
seems XForms are currently only handled correctly for direct operands of
the outermost operation node. This patch also fixes that bug by simply
recurring through the whole pattern. This should be NFC for all other
targets.
Differential Revision: https://reviews.llvm.org/D50096
llvm-svn: 338521
Summary:
The VS compiler (on Windows) has a bug which results in fieldFromInstruction being optimized out in some circumstances. This only happens in *release no debug info* builds that have assertions *turned off* - in all other situations the function is not inlined, so the functionality is correct. All of the bots have assertions turned on, so this path is not regularly tested. The workaround is to not inline the function on Windows - if the bug is fixed in a later release of the VS compiler, the noinline specification can be removed.
The test that consistently reproduces this is the Lanai v11.txt test.
Reviewers: asmith, labath, zturner
Subscribers: dblaikie, stella.stamenova, aprantl, JDevlieghere, llvm-commits
Differential Revision: https://reviews.llvm.org/D49753
llvm-svn: 337942
Function `expandCheckImmOperand` should always check if the input machine
instruction is passed by reference before calling method `getOperand()` on it.
Found while working on a patch that relies on `expandCheckImmOperand` to expand
a scheduling predicate.
llvm-svn: 337294
Currently, any attempt to define a PatFrag involving any floating-point
only (or vector only) node causes a hard assertion failure in TableGen
if the current target does not have any floating-point (or vector)
types.
This is annoying if you want to provide convenience fragments in common
code (e.g. include/llvm/Target/TargetSelectionDAG.td) that is parsed on
all platforms, including those that miss such types.
But really, there's no reason not to accept this when parsing the fragment
-- of course it would be an error for such a target to actually *use*
such a fragment anywhere, but as long as it doesn't, I think TableGen
shouldn't error out.
The immediate cause of the assertion failure is the test inside the
ValidateOnExit destructor. This patch simply disables that check while
inferring types during parsing of pattern fragments (only).
Reviewed By: hfinkel, kparzysz
Differential Revision: https://reviews.llvm.org/D48887
llvm-svn: 337023
isSubsetOf() could be very slow if the hierarchy of the RegisterClasses
of the target is very complicated.
This is mainly caused by the fact that isSubset() is called
multiple times over the same SuperClass of a register class
if this ends up being the super class of a register class
from multiple paths.
Differential Revision: https://reviews.llvm.org/D49124
llvm-svn: 337020
This patch adds support for AArch64 to cfi-verify.
This required three changes to cfi-verify. First, it generalizes checking if an instruction is a trap by adding a new isTrap flag to TableGen (and defining it for x86 and AArch64). Second, the code that ensures that the operand register is not clobbered between the CFI check and the indirect call needs to allow a single dereference (in x86 this happens as part of the jump instruction). Third, we needed to ensure that return instructions are not counted as indirect branches. Technically, returns are indirect branches and can be covered by CFI, but LLVM's forward-edge CFI does not protect them, and x86 does not consider them, so we keep that behavior.
In addition, we had to improve AArch64's code to evaluate the branch target of a MCInst to handle calls where the destination is not the first operand (which it often is not).
Differential Revision: https://reviews.llvm.org/D48836
llvm-svn: 337007
A TableGen instruction record usually contains a DAG pattern that will
describe the SelectionDAG operation that can be implemented by this
instruction. However, there will be cases where several different DAG
patterns can all be implemented by the same instruction. The way to
represent this today is to write additional patterns in the Pattern
(or usually Pat) class that map those extra DAG patterns to the
instruction. This usually also works fine.
However, I've noticed cases where the current setup seems to require
quite a bit of extra (and duplicated) text in the target .td files.
For example, in the SystemZ back-end, there are quite a number of
instructions that can implement an "add-with-overflow" operation.
The same instructions also need to be used to implement just plain
addition (simply ignoring the extra overflow output). The current
solution requires creating extra Pat pattern for every instruction,
duplicating the information about which particular add operands
map best to which particular instruction.
This patch enhances TableGen to support a new PatFrags class, which
can be used to encapsulate multiple alternative patterns that may
all match to the same instruction. It operates the same way as the
existing PatFrag class, except that it accepts a list of DAG patterns
to match instead of just a single one. As an example, we can now define
a PatFrags to match either an "add-with-overflow" or a regular add
operation:
def z_sadd : PatFrags<(ops node:$src1, node:$src2),
                      [(z_saddo node:$src1, node:$src2),
                       (add node:$src1, node:$src2)]>;
and then use this in the add instruction pattern:
defm AR : BinaryRRAndK<"ar", 0x1A, 0xB9F8, z_sadd, GR32, GR32>;
These SystemZ target changes are implemented here as well.
Note that PatFrag is now defined as a subclass of PatFrags, which
means that some users of internals of PatFrag need to be updated.
(E.g. instead of using PatFrag.Fragment you now need to use
!head(PatFrag.Fragments).)
The implementation is based on the following main ideas:
- InlinePatternFragments may now replace each original pattern
with several result patterns, not just one.
- parseInstructionPattern delays calling InlinePatternFragments
and InferAllTypes. Instead, it extracts a single DAG match
pattern from the main instruction pattern.
- Processing of the DAG match pattern part of the main instruction
pattern now shares most code with processing match patterns from
the Pattern class.
- Direct use of main instruction patterns in InferFromPattern and
EmitResultInstructionAsOperand is removed; everything now operates
solely on DAG match patterns.
Reviewed by: hfinkel
Differential Revision: https://reviews.llvm.org/D48545
llvm-svn: 336999
The aim of this backend is to output everything TableGen knows about
the record set, similarly to the default -print-records backend. But
where -print-records produces output in TableGen's input syntax
(convenient for humans to read), this backend produces it as
structured JSON data, which is convenient for loading into standard
scripting languages such as Python, in order to extract information
from the data set in an automated way.
The output data contains a JSON representation of the variable
definitions in output 'def' records, and a few pieces of metadata such
as which of those definitions are tagged with the 'field' prefix and
which defs are derived from which classes. It doesn't dump out
absolutely every piece of knowledge it _could_ produce, such as type
information and complicated arithmetic operator nodes in abstract
superclasses; the main aim is to allow consumers of this JSON dump to
essentially act as new backends, and backends don't generally need to
depend on that kind of data.
The new backend is implemented as an EmitJSON() function similar to
all of llvm-tblgen's other EmitFoo functions, except that it lives in
lib/TableGen instead of utils/TableGen on the basis that I'm expecting
to add it to clang-tblgen too in a future patch.
To test it, I've written a Python script that loads the JSON output
and tests properties of it based on comments in the .td source - more
or less like FileCheck, except that the CHECK: lines have Python
expressions after them instead of textual pattern matches.
Reviewers: nhaehnle
Reviewed By: nhaehnle
Subscribers: arichardson, labath, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D46054
llvm-svn: 336771
The vast number of added instructions for SVE causes TableGen to fail with an assertion:
Assertion `Delta < 65536U && "disassembler decoding table too large!"'
This patch increases the number of supported decoder fix-ups.
Reviewers: dmgreen, stoklund, petpav01
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D48937
llvm-svn: 336334
Add registers still missing after r328016 (D43353):
- for bits 15-8 of SI, DI, BP, SP (*H), and R8-R15 (*BH),
- for bits 31-16 of R8-R15 (*WH).
Thanks to Craig Topper for pointing it out.
llvm-svn: 336134
Implements PR34259
Intrinsics.h is a very popular header. Most LLVM TUs care about things
like dbg_value, but they don't care how they are implemented. After I
split these out, IntrinsicImpl.inc is 1.7 MB, so this saves each LLVM TU
from scanning 1.7 MB of source that gets pre-processed away.
It also means we can modify intrinsic properties without triggering a
full rebuild, but that's probably less of a win.
I think the next best thing to do would be to split out the target
intrinsics into their own header. Very, very few TUs care about
target-specific intrinsics. It's very hard to split up the target
independent intrinsics like llvm.expect, assume, and dbg.value, though.
llvm-svn: 335407
Summary:
This is essentially a rewrite of the backend which introduces TableGen
base classes GenericEnum, GenericTable, and SearchIndex. They allow
generating custom enums and tables with lookup functions using
separately defined records as the underlying database.
Also added as part of this change:
- Lookup functions may use indices composed of multiple fields.
- Instruction fields are supported similar to Intrinsic fields.
- When the lookup key has contiguous numeric values, the lookup
function will directly index into the table instead of using a binary
search.
The existing SearchableTable functionality is internally mapped to the
new primitives.
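As an illustration of the last bullet, a hypothetical generated lookup for a table whose
key values are contiguous and start at zero (all names are made up):
```
#include <cstdint>

// Illustrative only: a generated table whose key equals its index.
struct MyEntry {
  uint16_t Opcode;  // lookup key
  uint16_t Flags;
};
const MyEntry MyEntryTable[] = {{0, 0x1}, {1, 0x2}, {2, 0x4}};

const MyEntry *lookupMyEntryByOpcode(uint16_t Opcode) {
  // Contiguous keys: direct indexing; non-contiguous keys would instead use a
  // sorted table plus a binary search.
  if (Opcode >= sizeof(MyEntryTable) / sizeof(MyEntryTable[0]))
    return nullptr;
  return &MyEntryTable[Opcode];
}
```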
Change-Id: I444f3490fa1dbfb262d7286a1660a2c4308e9932
Reviewers: arsenm, tra, t.p.northover
Subscribers: wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D48013
llvm-svn: 335225