This restores the previous behavior of not including the mnemonic in the classes table for every target that starts instruction lines with the mnemonic. Not only did the table grow by one entry, but the class enum increased in size, which caused every class in the array to increase in size. It also grew the size of the function that parses tokens into classes by a substantial amount.
This adds a new HasMnemonicFirst flag to all AsmParsers. It's set to 1 by default and Hexagon target overrides it to 0.
For the X86 target alone this recovers 324KB of size on the llvm-mc executable.
I believe the current state is still a bad design choice for the Hexagon target, as it causes most of the parsing to do a linear search through the entire match table, comparing operands against every instruction until it finds one that works. For the other targets we at least do a binary search on the mnemonic to narrow the range over which to do the linear scan.
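For illustration, a minimal sketch of the two lookup strategies over the generated match table; the table and comparator names here are hypothetical:
  struct MatchEntry { StringRef Mnemonic; /* opcode, operand classes, ... */ };
  struct LessMnemonic {
    bool operator()(const MatchEntry &E, StringRef M) const { return E.Mnemonic < M; }
    bool operator()(StringRef M, const MatchEntry &E) const { return M < E.Mnemonic; }
  };
  // HasMnemonicFirst = 1: binary search narrows the scan to one mnemonic's range.
  auto Range = std::equal_range(std::begin(MatchTable), std::end(MatchTable),
                                Mnemonic, LessMnemonic());
  // HasMnemonicFirst = 0 (Hexagon): the range covers the whole table, a full linear scan.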
llvm-svn: 256669
This reapplies r256277 with two changes:
- In emitFnAttrCompatCheck, change FuncName's type to std::string to fix
a use-after-free bug.
- Remove an unnecessary install-local target in lib/IR/Makefile.
Original commit message for r252949:
Provide a way to specify inliner's attribute compatibility and merging
rules using table-gen. NFC.
This commit adds new classes CompatRule and MergeRule to Attributes.td,
which are used to generate code to check attribute compatibility and
merge attributes of the caller and callee.
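As a rough sketch (names are illustrative, not the exact generated output), the emitted compatibility check might look like:
  static inline bool hasCompatibleFnAttrs(const Function &Caller,
                                          const Function &Callee) {
    bool Ret = true;
    // One conjunct per CompatRule record in Attributes.td.
    Ret &= isEqual<SanitizeAddressAttr>(Caller, Callee);
    Ret &= isEqual<SanitizeThreadAttr>(Caller, Callee);
    return Ret;
  }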
rdar://problem/19836465
llvm-svn: 256304
For targets to add their own operand types as needed, as advertised in
Operand's comment, they need to be able to specify an alternate namespace
for OperandType names too. This matches the RegisterOperand class.
llvm-svn: 256299
This reapplies r252990 and r252949. I've added member function getKind
to the Attr classes which returns the enum or string of the attribute.
Original commit message for r252949:
Provide a way to specify inliner's attribute compatibility and merging
rules using table-gen. NFC.
This commit adds new classes CompatRule and MergeRule to Attributes.td,
which are used to generate code to check attribute compatibility and
merge attributes of the caller and callee.
rdar://problem/19836465
llvm-svn: 256277
Part 1 was submitted in http://reviews.llvm.org/D15134.
Changes in this part:
* X86RegisterInfo.td, X86RecognizableInstr.cpp: Add FR128 register class.
* X86CallingConv.td: Pass f128 values in XMM registers or on stack.
* X86InstrCompiler.td, X86InstrInfo.td, X86InstrSSE.td:
Add instruction selection patterns for f128.
* X86ISelLowering.cpp (see the sketch after this list):
When the target has MMX registers, configure MVT::f128 in FR128RegClass,
with TypeSoftenFloat action, and custom actions for some opcodes.
Add missed cases of MVT::f128 in places that handle f32, f64, or vector types.
Add TODO comment to support f128 type in inline assembly code.
* SelectionDAGBuilder.cpp:
Fix infinite loop when f128 type can have
VT == TLI.getTypeToTransformTo(Ctx, VT).
* Add unit tests for x86-64 fp128 type.
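A minimal sketch of the X86ISelLowering.cpp setup referenced above; the guard and the set of custom opcodes are illustrative, not the exact change:
  if (Subtarget->hasMMX()) {
    addRegisterClass(MVT::f128, &X86::FR128RegClass);
    // The type legalizer softens f128 arithmetic (TypeSoftenFloat);
    // a few opcodes get custom lowering instead.
    setOperationAction(ISD::LOAD,  MVT::f128, Custom);
    setOperationAction(ISD::STORE, MVT::f128, Custom);
  }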
Differential Revision: http://reviews.llvm.org/D11438
llvm-svn: 255558
AsmWriterEmitter will generate a getRegisterName function with an alternate
register name index as its second argument if the target makes use of them. The
enum of these values is generated in RegisterInfoEmitter. The getRegisterName
generator would assume the namespace could always be found by reading index 1
of the list of AltNameIndices, but this will fail if this list is sorted such
that the NoRegAltName is at index 1. Because this list is sorted by record name
(in CodeGenTarget::ReadRegAltNameIndices), you only run into problems if your
MyTargetRegisterInfo.td defines a single RegAltNameIndex that sorts lexically
before NoRegAltName.
For example, if a target has something like
def AnAltNameIndex : RegAltNameIndex
and defines RegAltNameIndices for some registers then, prior to this change,
AsmWriterEmitter would generate references to
::AnAltNameIndex and ::NoRegAltName
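In sketch form, for a hypothetical target the generated entry point and its callers look roughly like:
  // MyTargetGenAsmWriter.inc (generated)
  static const char *getRegisterName(unsigned RegNo, unsigned AltIdx);
  // callers select the name table via the enum from RegisterInfoEmitter:
  const char *Alt = getRegisterName(Reg, MyTarget::AnAltNameIndex);
  const char *Def = getRegisterName(Reg, MyTarget::NoRegAltName);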
Patch by Alex Bradbury!
llvm-svn: 255344
The Statistical Profiling Extension is an optional extension to
ARMv8.2-A. Since it is an optional extension, I have added the
FeatureSPE subtarget feature to control it. The assembler-visible parts
of this extension are the new "psb csync" instruction, which is
equivalent to "hint #17", and a number of system registers.
Differential Revision: http://reviews.llvm.org/D15021
llvm-svn: 254401
Duplicate a few common definitions between DFAPacketizer.cpp and
DFAPacketizerEmitter.cpp to avoid including files from CodeGen
in TableGen.
llvm-svn: 253820
Extended DFA tablegen to:
- added "-debug-only dfa-emitter" support to llvm-tblgen
- defined CVI_PIPE* resources for the V60 vector coprocessor
- allow specification of multiple required resources
  - supports ANDs of ORs
  - e.g. [SLOT2, SLOT3], [CVI_MPY0, CVI_MPY1] means:
    (SLOT2 OR SLOT3) AND (CVI_MPY0 OR CVI_MPY1)
- added support for combo resources
  - allows specifying ORs of ANDs
  - e.g. [CVI_XLSHF, CVI_MPY01] means:
    (CVI_XLANE AND CVI_SHIFT) OR (CVI_MPY0 AND CVI_MPY1)
- increased DFA input size from 32-bit to 64-bit
  - allows for a maximum of 4 AND'ed terms of 16 resources
- supported expressions now include:
    expression => term [AND term] [AND term] [AND term]
    term => resource [OR resource]*
    resource => one_resource | combo_resource
    combo_resource => (one_resource [AND one_resource]*)
Author: Dan Palermo <dpalermo@codeaurora.org>
kparzysz: Verified AMDGPU codegen to be unchanged on all llc
tests, except those dealing with instruction encodings.
Reapply the previous patch, this time without circular dependencies.
llvm-svn: 253793
Extended DFA tablegen to:
- added "-debug-only dfa-emitter" support to llvm-tblgen
- defined CVI_PIPE* resources for the V60 vector coprocessor
- allow specification of multiple required resources
  - supports ANDs of ORs
  - e.g. [SLOT2, SLOT3], [CVI_MPY0, CVI_MPY1] means:
    (SLOT2 OR SLOT3) AND (CVI_MPY0 OR CVI_MPY1)
- added support for combo resources
  - allows specifying ORs of ANDs
  - e.g. [CVI_XLSHF, CVI_MPY01] means:
    (CVI_XLANE AND CVI_SHIFT) OR (CVI_MPY0 AND CVI_MPY1)
- increased DFA input size from 32-bit to 64-bit
  - allows for a maximum of 4 AND'ed terms of 16 resources
- supported expressions now include:
    expression => term [AND term] [AND term] [AND term]
    term => resource [OR resource]*
    resource => one_resource | combo_resource
    combo_resource => (one_resource [AND one_resource]*)
Author: Dan Palermo <dpalermo@codeaurora.org>
kparzysz: Verified AMDGPU codegen to be unchanged on all llc
tests, except those dealing with instruction encodings.
llvm-svn: 253790
We used to have an odd difference between MapVector and SetVector. The map
used a DenseMap, but the set used a SmallSet, which in turn uses a
std::set.
I have changed SetVector to use a DenseSet. If you were depending on the
old behaviour you can pass an explicit set type or use SmallSetVector.
The common cases for needing to do it are:
* Optimizing for small sets.
* Sets for types not supported by DenseSet.
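A short sketch of these options (MyKey is a hypothetical element type):
  llvm::SetVector<Value *> SV;              // now backed by DenseSet by default
  llvm::SmallSetVector<Value *, 8> SmallSV; // optimized for small sets
  // Explicit set type for element types DenseSet cannot handle:
  llvm::SetVector<MyKey, std::vector<MyKey>, std::set<MyKey>> CustomSV;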
llvm-svn: 253439
Allowing imprecise lane masks in the case of more than 32 subregister lanes
led to some tricky corner cases, and I need another bugfix for another
one. Instead, I'd rather declare lane masks as precise and let tablegen
abort if we do not have enough bits.
This does not affect any in-tree target, even AMDGPU only needs 16 lanes
at the moment. If the 32 lanes turn out to be a problem in the future,
then we can easily change the LaneBitmask typedef to uint64_t.
Differential Revision: http://reviews.llvm.org/D14557
llvm-svn: 253279
This moves the MCSubtargetInfo member in the subclasses into
MCTargetAsmParser and defines a member function getSTI.
This is done in preparation for making changes to shrink the size of
MCRelaxableFragment. (see http://reviews.llvm.org/D14346).
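A minimal sketch of the resulting shape, assuming the member and accessor described above:
  class MCTargetAsmParser : public MCAsmParserExtension {
    const MCSubtargetInfo *STI; // moved here from each target's subclass
  public:
    const MCSubtargetInfo &getSTI() const { return *STI; }
  };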
llvm-svn: 253124
This reapplies r252949. I've changed the type of FuncName to be
std::string instead of StringRef in emitFnAttrCompatCheck.
Original commit message for r252949:
Provide a way to specify inliner's attribute compatibility and merging
rules using table-gen. NFC.
This commit adds new classes CompatRule and MergeRule to Attributes.td,
which are used to generate code to check attribute compatibility and
merge attributes of the caller and callee.
rdar://problem/19836465
llvm-svn: 252990
Provide a way to specify inliner's attribute compatibility and merging
rules using table-gen. NFC.
This commit adds new classes CompatRule and MergeRule to Attributes.td,
which are used to generate code to check attribute compatibility and
merge attributes of the caller and callee.
rdar://problem/19836465
llvm-svn: 252949
This is a step towards consolidating some of the information regarding
attributes in a single place.
This patch moves the enum attributes in Attributes.h to the table-gen
file. Additionally, it adds definitions of target independent string
attributes that will be used in follow-up commits by the inliner to
check attribute compatibility.
rdar://problem/19836465
llvm-svn: 252796
There is no need to generate a separate table for intrinsics' mod/ref behaviour.
It can now be determined purely from function attributes.
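A hedged sketch of deriving the behaviour purely from attributes; the exact mapping here is illustrative:
  FunctionModRefBehavior getIntrinsicModRef(const Function &F) {
    if (F.doesNotAccessMemory())
      return FMRB_DoesNotAccessMemory;
    if (F.onlyReadsMemory())
      return FMRB_OnlyReadsMemory;
    return FMRB_UnknownModRefBehavior;
  }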
Differential Revision: http://reviews.llvm.org/D13917
llvm-svn: 251040
Catchret transfers control from a catch funclet to an earlier funclet.
However, it is not completely clear which funclet the catchret target is
part of. Make this clear by stapling the catchret target's funclet
membership onto the CATCHRET SDAG node.
llvm-svn: 249052
Previously the code added an extra nullptr entry to a static array and then created an ArrayRef with a size one less than the static array. If there were no other entries, the array would just contain the nullptr and the ArrayRef would be created with size 0.
Instead, put the right number of entries in the array and explicitly emit 'None' if the size would be 0. This allows the static array constructor of makeArrayRef to be used.
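A small sketch of the two emitted shapes (the table name is hypothetical):
  static const MCPhysReg ImplicitList[] = { X86::EFLAGS };
  return makeArrayRef(ImplicitList); // size deduced by the static array constructor
  // ...or, when there are no entries at all:
  return None;                       // ArrayRef is constructible from NoneType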
llvm-svn: 248244
Summary:
This is the first patch in the series to migrate Triples (which are ambiguous)
to TargetTuples (which aren't).
For the moment, TargetTuple simply passes all requests to the Triple object it
holds. Once it has replaced Triple, it will start to implement the interface in
a more suitable way.
This patch modifies the public C++ API. In particular,
InitMCSubtargetInfo(), createMCRelocationInfo(), and createMCSymbolizer()
now take TargetTuples instead of Triples. The other public C++ APIs have
been left as-is for the moment to reduce patch size.
This commit also contains a trivial patch to clang to account for the C++ API
change. Thanks go to Pavel Labath for fixing LLDB for me.
Reviewers: rengolin
Subscribers: jyknight, dschuff, arsenm, rampitec, danalbert, srhines, javed.absar, dsanders, echristo, emaste, jholewinski, tberghammer, ted, jfb, llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D10969
llvm-svn: 247692
Summary:
This is the first patch in the series to migrate Triples (which are ambiguous)
to TargetTuples (which aren't).
For the moment, TargetTuple simply passes all requests to the Triple object it
holds. Once it has replaced Triple, it will start to implement the interface in
a more suitable way.
This patch modifies the public C++ API. In particular,
InitMCSubtargetInfo(), createMCRelocationInfo(), and createMCSymbolizer()
now take TargetTuples instead of Triples. The other public C++ APIs have
been left as-is for the moment to reduce patch size.
This commit also contains a trivial patch to clang to account for the C++ API
change.
Reviewers: rengolin
Subscribers: jyknight, dschuff, arsenm, rampitec, danalbert, srhines, javed.absar, dsanders, echristo, emaste, jholewinski, tberghammer, ted, jfb, llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D10969
llvm-svn: 247683
Summary: This fixes a variety of typos in docs, code and headers.
Subscribers: jholewinski, sanjoy, arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D12626
llvm-svn: 247495
This reapplies the earlier changes, except the ones that defined virtual destructors as =default, because that
ran into problems with GCC 4.7 and overriding methods that weren't noexcept.
llvm-svn: 247298
Summary:
Add the necessary plumbing so that llvm_token_ty can be used as an
argument/return type in intrinsic definitions and correspondingly require
TokenTy in function types. TokenTy is an opaque type that has no target
lowering, but can be used in machine-independent intrinsics. It is
required for the upcoming llvm.eh.padparam intrinsic.
Reviewers: majnemer, rnk
Subscribers: stoklund, llvm-commits
Differential Revision: http://reviews.llvm.org/D12532
llvm-svn: 246651
I locally hit the 255 limit, but a lot of these are redundant: each
predicate coming from a different record was allocated a new number,
even when we already emitted the same code for another predicate.
Instead, re-use numbers and emit the predicate code only once.
This reduces the total text size of *DAGISel.cpp.o by ~1%.
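A sketch of the de-duplication (names hypothetical):
  StringMap<unsigned> PredicateNumbers; // predicate source text -> assigned number
  unsigned getPredicateNumber(StringRef Code) {
    unsigned NextNum = PredicateNumbers.size() + 1;
    unsigned &Num = PredicateNumbers[Code]; // value-initialized to 0 when new
    if (Num == 0)
      Num = NextNum; // first occurrence: assign a fresh number and emit the code
    return Num;
  }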
llvm-svn: 246208
After r244870 flush() will only compare two null pointers and return,
doing nothing but wasting run time. The call is not required any more
as the stream and its SmallString are always in sync.
Thanks to David Blaikie for reviewing.
llvm-svn: 244928
This renames the AA mod/ref enums as preparation for de-coupling the AA
implementations.
In order to do this, they had to become fake-scoped using the
traditional LLVM pattern of a leading initialism. These can't be actual
scoped enumerations because they're bitfields and thus inherently we use
them as integers.
I've also renamed the behavior enums that are specific to reasoning
about the mod/ref behavior of functions when called. This makes it more
clear that they have a very narrow domain of applicability.
I think there is a significantly cleaner API for all of this, but
I don't want to try to do really substantive changes for now, I just
want to refactor the things away from analysis groups so I'm preserving
the exact original design and just cleaning up the names, style, and
lifting out of the class.
Differential Revision: http://reviews.llvm.org/D10564
llvm-svn: 242963
This patch does the following:
* Fix FIXME on `needsStackRealignment`: it is now shared between multiple targets, implemented in `TargetRegisterInfo`, and isn't `virtual` anymore. This will break out-of-tree targets, silently if they used `virtual` and with a build error if they used `override`.
* Factor out `canRealignStack` as a `virtual` function on `TargetRegisterInfo`, by default only looks for the `no-realign-stack` function attribute.
Multiple targets duplicated the same `needsStackRealignment` code:
- Aarch64.
- ARM.
- Mips almost: had extra `DEBUG` diagnostic, which the default implementation now has.
- PowerPC.
- WebAssembly.
- x86 almost: has an extra `-force-align-stack` option, which the default implementation now has.
The default implementation of `needsStackRealignment` used to just return `false`. My current patch changes the behavior by simply using the above shared behavior. This affects:
- AMDGPU
- BPF
- CppBackend
- MSP430
- NVPTX
- Sparc
- SystemZ
- XCore
- Out-of-tree targets
This is a breaking change! `make check` passes.
The only implementation of the `virtual` function (besides the slight difference in x86) was Hexagon (which did `MF.getFrameInfo()->getMaxAlignment() > 8`), and potentially some out-of-tree targets. Hexagon now uses the default implementation.
`needsStackRealignment` was being overridden in `<Target>GenRegisterInfo.inc` to return `false`, as the default also did. That was odd and is now gone.
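A minimal sketch of the shared default described above:
  bool TargetRegisterInfo::canRealignStack(const MachineFunction &MF) const {
    const Function *F = MF.getFunction();
    return !F->hasFnAttribute("no-realign-stack");
  }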
Reviewers: sunfish
Subscribers: aemerson, llvm-commits, jfb
Differential Revision: http://reviews.llvm.org/D11160
llvm-svn: 242727
Summary:
This change is part of a series of commits dedicated to have a single
DataLayout during compilation by using always the one owned by the
module.
This patch is quite boring overall, except for some uglyness in
ASMPrinter which has a getDataLayout function but has some clients
that use it without a Module (llmv-dsymutil, llvm-dwarfdump), so
some methods are taking a DataLayout as parameter.
Reviewers: echristo
Subscribers: yaron.keren, rafael, llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D11090
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 242386
When FixedLenDecoder matches an input bitpattern of form [01]+ with an
instruction bitpattern of form [01?]+ (where 0/1 are static bits and ? are
mixed/variable bits) it passes the input bitpattern to a specific instruction
decoder method which then makes a final decision whether the bitpattern is a
valid instruction or not. This means the decoder must handle all possible
values of the variable bits which sometimes leads to opcode rewrites in the
decoder method when the instructions are not fully orthogonal.
The patch provides a way for the decoder method to say that when it returns
Fail it does not necessarily mean the bitpattern is invalid, but rather that
the bitpattern is definitely not an instruction that is recognized by the
decoder method. The decoder can then try to match the input bitpattern with
other possible instruction bitpatterns.
For example, this makes it possible to resolve a situation on AArch64 where the `MSR
(immediate)` instruction has form:
1101 0101 0000 0??? 0100 ???? ???1 1111
but not all values of the ? bits are allowed. The rejected values should be
handled by the `extended MSR (register)` instruction:
1101 0101 000? ???? ???? ???? ???? ????
The decoder will first try to decode an input bitpattern that matches both
bitpatterns as `MSR (immediate)` but currently this puts the decoder method of
`MSR (immediate)` into a situation when it must be able to decode all possible
values of the ? bits, i.e. it would need to rewrite the instruction to `MSR
(register)` when it is not `MSR (immediate)`.
The patch allows specifying that the decoder method cannot determine if the
instruction is valid for all variable values. The decoder method can simply
return Fail when it knows it is definitely not `MSR (immediate)`. The decoder
will then backtrack the decoding and find that it can match the input
bitpattern with the more generic `MSR (register)` bitpattern too.
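A hedged sketch of what a decoder method can now do (the decoder and helper names are hypothetical):
  static DecodeStatus DecodeMSRImmediate(MCInst &Inst, uint32_t Insn,
                                         uint64_t Addr, const void *Decoder) {
    if (!isValidPstateEncoding(Insn)) // hypothetical validity check of the ? bits
      return MCDisassembler::Fail;    // definitely not MSR (immediate); backtrack
    // ... build the MSR (immediate) operands ...
    return MCDisassembler::Success;
  }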
Differential Revision: http://reviews.llvm.org/D7174
llvm-svn: 242274
This patch contains only the encoding; intrinsics and DAG lowering will be in the next patch.
I temporarily removed the old intrinsics test (just to split this patch).
Half types are not covered here.
Differential Revision: http://reviews.llvm.org/D11134
llvm-svn: 242023
Force all creators of `MCSubtargetInfo` to immediately initialize it,
merging the default constructor and the initializer into an initializing
constructor. Besides cleaning up the code a little, this makes it clear
that the initializer is never called again later.
Out-of-tree backends need a trivial change: instead of calling:
auto *X = new MCSubtargetInfo();
InitXYZMCSubtargetInfo(X, ...);
return X;
they should call:
return createXYZMCSubtargetInfoImpl(...);
There's no real functionality change here.
llvm-svn: 241957
Summary:
The target frame lowering's concrete type is always known in RegisterInfo, yet it's only sometimes devirtualized through a static_cast. This change adds an auto-generated static function <Target>GenRegisterInfo::getFrameLowering(const MachineFunction &MF) which does this devirtualization, and uses this function in all targets which can.
This change was suggested by sunfish in D11070 for WebAssembly, I figure that I may as well improve the other targets while I'm here.
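For a hypothetical target, the generated function is roughly:
  // <Target>GenRegisterInfo.inc (generated)
  static inline const MyTargetFrameLowering *
  getFrameLowering(const MachineFunction &MF) {
    return static_cast<const MyTargetFrameLowering *>(
        MF.getSubtarget().getFrameLowering());
  }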
Subscribers: sunfish, ted, llvm-commits, jfb
Differential Revision: http://reviews.llvm.org/D11093
llvm-svn: 241921
Summary:
Initially, these intrinsics seemed like part of a family of "frame"
related intrinsics, but now I think that's more confusing than helpful.
Initially, the LangRef specified that this would create a new kind of
allocation that would be allocated at a fixed offset from the frame
pointer (EBP/RBP). We ended up dropping that design, and leaving the
stack frame layout alone.
These intrinsics are really about sharing local stack allocations, not
frame pointers. I intend to go further and add an `llvm.localaddress()`
intrinsic that returns whatever register (EBP, ESI, ESP, RBX) is being
used to address locals, which should not be confused with the frame
pointer.
Naming suggestions at this point are welcome, I'm happy to re-run sed.
Reviewers: majnemer, nicholas
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11011
llvm-svn: 241633
Where subtarget features were previously represented by uint64_t, this
patch replaces these usages with the FeatureBitset (std::bitset) type.
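A sketch of the representation change (the feature name is hypothetical):
  // before: uint64_t Bits; if (Bits & MyTarget::FeatureFoo) ...
  FeatureBitset Bits; // std::bitset<MAX_SUBTARGET_FEATURES>
  if (Bits[MyTarget::FeatureFoo]) { /* feature values are now bit indices */ }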
Differential Revision: http://reviews.llvm.org/D10542
llvm-svn: 241058
This commit implements serialization of the register mask machine
operands. This commit serializes only the call preserved register
masks that are defined by a target, it doesn't serialize arbitrary
register masks.
This commit also extends the TargetRegisterInfo class and TableGen so that
the users of TRI can get the list of all the call preserved register masks and
their names.
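A hedged sketch of how a client might iterate the new information, assuming accessors named getRegMasks and getRegMaskNames:
  ArrayRef<const uint32_t *> Masks = TRI->getRegMasks();
  ArrayRef<const char *> Names = TRI->getRegMaskNames();
  for (unsigned I = 0, E = Masks.size(); I != E; ++I)
    dbgs() << Names[I] << "\n"; // one entry per call preserved register mask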
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10673
llvm-svn: 240966
Before this we were producing a TargetExternalSymbol from a MCSymbol.
That meant extracting the symbol name and fetching the symbol again
down the pipeline.
This patch adds a DAG.getMCSymbol that lets the MCSymbol pass unchanged on the
DAG.
Doing so removes the need for MO_NOPREFIX and fixes the root cause of pr23900,
allowing r240130 to be committed again.
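A minimal sketch, assuming getMCSymbol takes the symbol and a value type:
  MCSymbol *Sym = Ctx.getOrCreateSymbol("foo"); // obtained from MCContext
  SDValue Node = DAG.getMCSymbol(Sym, PtrVT);   // the MCSymbol flows through the DAG intact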
llvm-svn: 240300
Summary:
This instruction encodes a loading operation that may fault, and a label
to branch to if the load page-faults. The locations of potentially
faulting loads and their "handler" destinations are recorded in a
FaultMap section, meant to be consumed by LLVM's clients.
Nothing generates FAULTING_LOAD_OP instructions yet, but they will be
used in a future change.
The documentation (FaultMaps.rst) needs improvement and I will update
this diff with a more expanded version shortly.
Depends on D10196
Reviewers: rnk, reames, AndyAyers, ab, atrick, pgavlin
Reviewed By: atrick, pgavlin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10197
llvm-svn: 239740
Summary:
This continues the patch series to eliminate StringRef forms of GNU triples
from the internals of LLVM that began in r239036.
Reviewers: rafael
Reviewed By: rafael
Subscribers: rafael, ted, jfb, llvm-commits, rengolin, jholewinski
Differential Revision: http://reviews.llvm.org/D10311
llvm-svn: 239467
If the type isn't trivially moveable emplace can skip a potentially
expensive move. It also saves a couple of characters.
Call sites were found with the ASTMatcher + some semi-automated cleanup.
memberCallExpr(
argumentCountIs(1), callee(methodDecl(hasName("push_back"))),
on(hasType(recordDecl(has(namedDecl(hasName("emplace_back")))))),
hasArgument(0, bindTemporaryExpr(
hasType(recordDecl(hasNonTrivialDestructor())),
has(constructExpr()))),
unless(isInTemplateInstantiation()))
No functional change intended.
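The rewrite itself, in sketch form:
  // before: Vec.push_back(MyType(A, B)); // constructs a temporary, then moves/copies
  Vec.emplace_back(A, B);                 // constructs in place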
llvm-svn: 238602
Fixes PR23455, where, when TableGen generates the matcher from the
AsmString, it splits "cmp${cc}ss" into tokens, and the "ss" suffix
is recognized as the SS register.
I can't think of a situation where that's a feature, not a bug, hence:
when a token is "isolated", i.e., it is followed and preceded by
separators, it shouldn't be parsed as a register.
Differential Revision: http://reviews.llvm.org/D9844
llvm-svn: 238536
shrinking the Size and NumDefs fields to offset the size growth, and
reordering the fields to preserve a good packing.
This is necessary in the short term for adding a convergent flag, and
simultaneously future-proofs us against more flags being added in the
future.
llvm-svn: 238445
If there is an InstAlias defined for an instruction that had a custom
converter (AsmMatchConverter), then when the alias is matched,
the custom converter will be used rather than the converter generated
by the InstAlias.
This patch adds the UseInstAsmMatchConverter field to the InstAlias
class, which allows you to override this behavior and force the
converter generated by the InstAlias to be used.
This is required for some future improvements to the R600 assembler.
Differential Revision: http://reviews.llvm.org/D9083
llvm-svn: 238210
Previously, subtarget features were a bitfield with the underlying type being uint64_t.
Since several targets (X86 and ARM, in particular) have hit or were very close to hitting this bound, this switches the features to use a bitset.
No functional change.
The first several times this was committed (e.g. r229831, r233055), it caused several buildbot failures.
Apparently the reason for most failures was both clang and gcc's inability to deal with large numbers (> 10K) of bitset constructor calls in tablegen-generated initializers of instruction info tables.
This should now be fixed.
llvm-svn: 238192
This adds the following quadword add/subtract instructions introduced
in POWER8:
vadduqm
vaddeuqm
vaddcuq
vaddecuq
vsubuqm
vsubeuqm
vsubcuq
vsubecuq
In addition to adding the instructions themselves, it also adds support for the
v1i128 type for intrinsics (Intrinsics.td, Function.cpp, and
IntrinsicEmitter.cpp).
http://reviews.llvm.org/D9081
llvm-svn: 238144
Previously, subtarget features were a bitfield with the underlying type being uint64_t.
Since several targets (X86 and ARM, in particular) have hit or were very close to hitting this bound, this switches the features to use a bitset.
No functional change.
The first two times this was committed (r229831, r233055), it caused several buildbot failures.
At least some of the ARM and MIPS ones were due to gcc/binutils issues, and should now be fixed.
llvm-svn: 237234
cleanups.
Also, change code in tablegen which printed a message and then called
"exit(1)" to use PrintFatalError, instead.
This fixes instances where an empty output file was left behind after
a failed tablegen invocation, which would confuse subsequent ninja
runs into not attempting to rebuild.
Differential Revision: http://reviews.llvm.org/D9608
llvm-svn: 237058
The v1i128 type is needed for the quadword add/subtract instructions introduced
in POWER8. Furthermore, the PowerPC ABI specifies that parameters of type v1i128
are to be passed in a single vector register, while parameters of type i128 are
passed in pairs of GPRs. Thus, it is necessary to be able to differentiate
between v1i128 and i128 in LLVM.
http://reviews.llvm.org/D8564
llvm-svn: 235198
The patch is generated using clang-tidy misc-use-override check.
This command was used:
tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py \
-checks='-*,misc-use-override' -header-filter='llvm|clang' \
-j=32 -fix -format
http://reviews.llvm.org/D8925
llvm-svn: 234679
Summary:
The loop which emits AssemblerPredicate conditions also links them together by emitting a '&&'.
If the 1st predicate is not an AssemblerPredicate, while the 2nd one is, nothing gets emitted for the 1st one, but we still emit the '&&' because of the 2nd predicate.
This generated code looks like "( && Cond2)" and is invalid.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: dsanders, llvm-commits
Differential Revision: http://reviews.llvm.org/D8294
llvm-svn: 234312
Specify an allocation order with a register class. This is used by register
allocators with a greedy heuristic. This is useful, as it is sometimes
beneficial to color more constrained classes first.
Differential Revision: http://reviews.llvm.org/D8626
llvm-svn: 233743
This makes the instprinter use the per-function subtarget.
Currently, code-gen passes the default or generic subtarget to the constructors
of MCInstPrinter subclasses (see LLVMTargetMachine::addPassesToEmitFile), which
enables some targets (AArch64, ARM, and X86) to change their instprinter's
behavior based on the subtarget feature bits. Since the backend can now use
different subtargets for each function, instprinter has to be changed to use the
per-function subtarget rather than the default subtarget.
This patch takes the first step towards enabling instprinter to change its
behavior based on the per-function subtarget. It adds a bit "PassSubtarget" to
AsmWriter which tells table-gen to pass a reference to MCSubtargetInfo to the
various print methods table-gen auto-generates.
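In sketch form, the effect on the generated signature for a hypothetical target:
  // PassSubtarget = 0 (old behavior):
  void printInstruction(const MCInst *MI, raw_ostream &O);
  // PassSubtarget = 1:
  void printInstruction(const MCInst *MI, const MCSubtargetInfo &STI,
                        raw_ostream &O);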
I will follow up with changes to instprinters of AArch64, ARM, and X86.
llvm-svn: 233411
This reverts commit r233055.
It still causes buildbot failures (gcc running out of memory on several platforms, and a self-host failure on arm), although less than the previous time.
llvm-svn: 233068
Previously, subtarget features were a bitfield with the underlying type being uint64_t.
Since several targets (X86 and ARM, in particular) have hit or were very close to hitting this bound, this switches the features to use a bitset.
No functional change.
The first time this was committed (r229831), it caused several buildbot failures.
At least some of the ARM ones were due to gcc/binutils issues, and should now be fixed.
Differential Revision: http://reviews.llvm.org/D8542
llvm-svn: 233055
This is needed for AVX512 masked scatter/gather support.
The R600 change is necessary to remove a hack that was working around the lack of multiple results.
llvm-svn: 232798
Some subregisters exist only to indicate different access sizes, while not
providing any way to actually divide the register up into multiple
disjoint parts. Avoid tracking subregister liveness in these cases as it
is not beneficial.
Differential Revision: http://reviews.llvm.org/D8429
llvm-svn: 232695
When calculating the lanemask of a register class we have to include the
masks of subregisters supported by any of the class members, not just
the ones supported by all class members.
This fixes problems when coalescing towards a subclass with additional
subregisters available.
The attached testcase works fine as is, but does crash if you enable
subregister liveness on x86 without this change applied.
llvm-svn: 232652
Now that SmallString is a first-class citizen, most SmallString::str()
calls are not required. This patch removes a whole bunch of them, yet
there are lots more.
There are two use cases where str() is really needed:
1) To use one of StringRef member functions which is not available in
SmallString.
2) To convert to std::string, as StringRef implicitly converts while
SmallString does not. We may wish to change this, but it may introduce
ambiguity.
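A short illustration of both cases:
  SmallString<64> Buf;
  Buf += "example";
  StringRef R = Buf;               // implicit conversion; no .str() needed
  size_t N = Buf.str().count('e'); // case 1: a StringRef member SmallString lacks
  std::string S = Buf.str().str(); // case 2: std::string still needs explicit copies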
llvm-svn: 232622
clang-cl would warn that this value is not representable in 'int':
enum { FeatureX = 1ULL << 31 };
All MS enums are 'ints' unless otherwise specified, so we have to use an
explicit type. The AMDGPU target just hit 32 features, triggering this
warning.
Now that we have C++11 strong enum types, we can also eliminate the
'const uint64_t' codepath from tablegen and just use 'enum : uint64_t'.
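The fixed form, for reference:
  enum : uint64_t { FeatureX = 1ULL << 31 }; // underlying type is now explicit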
llvm-svn: 231697
This is what all the targets check for and is consistent with the
initialized value of MissingFeatures, which is sometimes assigned
to ErrorInfo.
llvm-svn: 231397
This should help with the AVX512 masked gather changes Elena is working on. This patch is derived from some of the changes Elena made to tablegen, but modified by me to support an arbitrary number of results.
llvm-svn: 231357
Accidentally committed a few more of these cleanup changes than
intended. Still breaking these out & tidying them up.
This reverts commit r231135.
llvm-svn: 231136
There doesn't seem to be any need to assert that iterator assignment is
between iterators over the same node - if you want to reuse an iterator
variable to iterate another node, that's perfectly acceptable. Just
don't mix comparisons between iterators into disjoint sequences, as
usual.
llvm-svn: 231135
All of the cases were just appending from random access iterators to a
vector. Using insert/append can grow the vector to the perfect size
directly and moves the growing out of the loop. No intended functionality
change.
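The pattern, in sketch form:
  // before: for (auto I = First; I != Last; ++I) Vec.push_back(*I);
  Vec.append(First, Last); // one growth to the exact final size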
llvm-svn: 230845
The keys of the map are unique by pointer address, so there's no need
to use the llvm::less comparator. This allows us to use DenseMap
instead, which reduces tblgen time by 20% on my stress test.
llvm-svn: 230769
Gather and scatter instructions additionally write to one of the source
operands, the mask register. In this case Gather has two destination values:
the loaded value and the mask. Until now we did not support a codegen pattern
for gather; the instruction was generated only from the intrinsic, and the
machine node was hardcoded. When we introduce the masked_gather node, we need
to select the instruction automatically, in the standard way.
I added a flag "hasTwoExplicitDefs" that allows handling two destination operands.
(Some code in the X86InstrFragmentsSIMD.td is commented out, just to split one big
patch in many small patches)
llvm-svn: 230471
Everyone except R600 was manually passing the length of a static array
at each callsite, calculated in a variety of interesting ways. Far
easier to let ArrayRef handle that.
There should be no functional change, but out of tree targets may have
to tweak their calls as with these examples.
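A sketch of the kind of tweak an out-of-tree target might need (names hypothetical):
  // before: void addIntrinsics(const IntrinsicInfo *Table, size_t N);
  void addIntrinsics(ArrayRef<IntrinsicInfo> Table);
  static const IntrinsicInfo MyIntrinsics[] = { /* ... */ };
  addIntrinsics(MyIntrinsics); // length deduced from the static array type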
llvm-svn: 230118
Previously, subtarget features were a bitfield with the underlying type being uint64_t.
Since several targets (X86 and ARM, in particular) have hit or were very close to hitting this bound, this switches the features to use a bitset.
No functional change.
Differential Revision: http://reviews.llvm.org/D7065
llvm-svn: 229831
Gather and Scatter are newly introduced intrinsics, coming after the recently implemented masked load and store.
This is the first patch for Gather and Scatter intrinsics. It includes only the syntax, parsing and verification.
Gather and Scatter intrinsics allow performing multiple memory accesses (read/write) in one vector instruction.
The intrinsics are not target specific and will have the following syntax:
Gather:
declare <16 x i32> @llvm.masked.gather.v16i32(<16 x i32*> <vector of ptrs>, i32 <alignment>, <16 x i1> <mask>, <16 x i32> <passthru>)
declare <8 x float> @llvm.masked.gather.v8f32(<8 x float*><vector of ptrs>, i32 <alignment>, <8 x i1> <mask>, <8 x float><passthru>)
Scatter:
declare void @llvm.masked.scatter.v8i32(<8 x i32><vector value to be stored> , <8 x i32*><vector of ptrs> , i32 <alignment>, <8 x i1> <mask>)
declare void @llvm.masked.scatter.v16i32(<16 x i32> <vector value to be stored> , <16 x i32*> <vector of ptrs>, i32 <alignment>, <16 x i1><mask> )
Vector of ptrs - a set of source/destination addresses to load/store the value.
Mask - switches vector lanes on/off to prevent memory access for switched-off lanes.
The vector of ptrs, value and mask should have the same vector width.
These are code examples where gather/scatter should be used and will allow function vectorization:
;void foo1(int * restrict A, int * restrict B, int * restrict C) {
; for (int i=0; i<SIZE; i++) {
; A[i] = B[C[i]];
; }
;}
;void foo3(int * restrict A, int * restrict B) {
; for (int i=0; i<SIZE; i++) {
; A[B[i]] = i+5;
; }
;}
Tests will come in the following patches, with CodeGen and Vectorizer.
http://reviews.llvm.org/D7433
llvm-svn: 228521
Similar to the C++14 void specializations of these templates, useful as
a stop-gap until LLVM switches to '14.
Example use-cases in tblgen because I saw some functors that looked like
they could be simplified/refactored.
Reviewers: dexonsmith
Differential Revision: http://reviews.llvm.org/D7324
llvm-svn: 227828
The hot path through this region of code does lots of batch inserts into sets. By storing them as sorted arrays, we can defer the sorting to the end of the batch, which is dramatically more efficient. This reduces tblgen runtime by 25% on my worst-case target.
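The deferred-sort idiom, in sketch form:
  std::vector<unsigned> Units;
  for (unsigned U : Batch)
    Units.push_back(U);                  // cheap appends during the batch
  std::sort(Units.begin(), Units.end()); // sort once at the end
  Units.erase(std::unique(Units.begin(), Units.end()), Units.end()); // de-dup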
llvm-svn: 227682
This is a continuation of my prior work to move some of the inner workings of CodeGenRegister to use bit vectors when computing register units. This is highly beneficial to TableGen runtime on targets with large, dense register files. This patch represents a ~40% runtime reduction over and above my earlier improvement on a stress test of this case.
llvm-svn: 227678
For target descriptions with very large and very dense register files, TableGen
can take an extremely long time to run. This change makes a dent in that (~15%
in my measurements) by accelerating the single hottest operation with better data
structures.
I believe there's still a lot of room to make this even faster with more global
changes that require replacing some of the existing datastructures in this area
with bit vectors, but that's a more involved change and I wanted to get this
simpler improvement in first.
llvm-svn: 227562
derived classes.
Since global data alignment, layout, and mangling is often based on the
DataLayout, move it to the TargetMachine. This ensures that global
data is going to be laid out and mangled consistently if the subtarget
changes on a per function basis. Prior to this all targets(*) have
had subtarget dependent code moved out and onto the TargetMachine.
*One target hasn't been migrated as part of this change: R600. The
R600 port has, as a subtarget feature, the size of pointers and
this affects global data layout. I've currently hacked in a FIXME
to enable progress, but the port needs to be updated to either pass
the 64-bitness to the TargetMachine, or fix the DataLayout to
avoid subtarget dependent features.
llvm-svn: 227113
Specifically, gc.result benefits from this greatly. Instead of:
gc.result.int.*
gc.result.float.*
gc.result.ptr.*
...
We now have a gc.result.* that can specialize to literally any type.
Differential Revision: http://reviews.llvm.org/D7020
llvm-svn: 226857
This patch was generated by a clang tidy checker that is being open sourced.
The documentation of that checker is the following:
/// The emptiness of a container should be checked using the empty method
/// instead of the size method. It is not guaranteed that size is a
/// constant-time function, and it is generally more efficient and also shows
/// clearer intent to use empty. Furthermore some containers may implement the
/// empty method but not implement the size method. Using empty whenever
/// possible makes it easier to switch to another container in the future.
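The checker's rewrite, in one line:
  // before: if (Container.size() == 0) ...
  if (Container.empty()) { /* ... */ }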
Patch by Gábor Horváth!
llvm-svn: 226161
This adds support for creating an InstAlias with a negative immediate, i.e.:
def NOT : InstAlias<"not $dst, $src", (XORI GR32:$dst, GR32:$src, -1)>;
by resolving this problem:
RISCVGenAsmMatcher.inc:95:11: error: expected '= constant-expression' or end of enumerator definition
CVT_imm_-1,
^^^^^^^^^^
Patch by Jordy Potman, thanks!
llvm-svn: 226073
These intrinsics allow multiple functions to share a single stack
allocation from one function's call frame. The function with the
allocation may only perform one allocation, and it must be in the entry
block.
Functions accessing the allocation call llvm.recoverframeallocation with
the function whose frame they are accessing and a frame pointer from an
active call frame of that function.
These intrinsics are very difficult to inline correctly, so the
intention is that they be introduced rarely, or at least very late
during EH preparation.
Reviewers: echristo, andrew.w.kaylor
Differential Revision: http://reviews.llvm.org/D6493
llvm-svn: 225746
This adds two new fields to the RegisterOperand TableGen class:
string OperandNamespace = "MCOI";
string OperandType = "OPERAND_REGISTER";
These fields can be used to specify a target specific operand type,
which will be stored in the OperandType member of the MCOperandInfo
object.
This can be useful for targets that need to store some extra information
about operands that cannot be expressed using the target independent
types. For example, in the R600 backend, there are operands which
can take either registers or immediates and it is convenient to be able
to specify this in the TableGen definitions.
llvm-svn: 225661
Requires new AsmParserOperand types that detect 16-bit and 32/64-bit mode so that we choose the right instruction based on default sizing without predicates. This is necessary since predicates mess up the disassembler table building.
llvm-svn: 225256
This is necessary to allow the disassembler to be able to handle AdSize32 instructions in 64-bit mode when address size prefix is used.
Eventually we should probably also support 'addr32' and 'addr16' in the assembler to override the address size on some of these instructions. But for now we'll just use special operand types that will look up the current mode size to select the right instruction.
llvm-svn: 225075
This removes a hardcoded list of instructions in the CodeEmitter. Eventually I intend to remove the predicates on the affected instructions since in any given mode two of them are valid if we supported addr32/addr16 prefixes in the assembler.
llvm-svn: 224809
An instruction alias defined with InstAlias and an optional operand in the
middle of the AsmString field, "..${a} <operands>", would get the final
"}" printed in the instruction disassembly. This wouldn't happen if the optional
operand appeared as the last item in the AsmString which is how the current
backends avoided the problem.
There don't appear to be any tests for this part of Tablegen but it passes the
pre-commit tests. Manually tested the change by enabling the generic alias
printer in the ARM backend and checking the output.
Differential Revision: http://reviews.llvm.org/D6529
llvm-svn: 224348
On X86, the Intel asm parser tries to match all memory operand sizes when
none is explicitly specified. For LEA, which doesn't really have a memory
operand (just a pointer one), this results in multiple successful matches,
one for each memory size. There's no error because it's same opcode, so
really, it's just one match. However, the tablegen'd matcher function
adds opcode/operands to the passed MCInst, and this results in multiple
duplicated operands.
This commit clears the MCInst in the tablegen'd matcher function.
We sometimes clear it when the match failed, so there's no expectation of
keeping the previous content anyway.
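In sketch form, at the top of each match attempt in the generated matcher (field names hypothetical):
  Inst.clear();               // drop operands left over from a prior match attempt
  Inst.setOpcode(it->Opcode); // then build up the match as usual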
Differential Revision: http://reviews.llvm.org/D6670
llvm-svn: 224347
Clang's static analyzer found several potential cases of undefined
behavior, use of un-initialized values, and potentially null pointer
dereferences in tablegen, Support, MC, and ADT. This cleans them up
with specific assertions on the assumptions of the code.
llvm-svn: 224154
Let tablegen compute the combination of subregister lanemasks for all
subregisters in a register/register class. This is preparation for further
work on subregister allocation.
llvm-svn: 223873
I'm recommiting the codegen part of the patch.
The vectorizer part will be send to review again.
Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
<16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.
http://reviews.llvm.org/D6191
llvm-svn: 223348
This complicates a few algorithms due to not having random access, but
not by a huge degree I don't think (open to debate/design
discussion/etc).
llvm-svn: 223261
This is the second patch in a small series. This patch contains the MachineInstruction and x86-64 backend pieces required to lower Statepoints. It does not include the code to actually generate the STATEPOINT machine instruction and as a result, the entire patch is currently dead code. I will be submitting the SelectionDAG parts within the next 24-48 hours. Since those pieces are by far the most complicated, I wanted to minimize the size of that patch. That patch will include the tests which exercise the functionality in this patch. The entire series can be seen as one combined whole in http://reviews.llvm.org/D5683.
The STATEPOINT pseudo node is generated after all gc values are explicitly spilled to stack slots. The purpose of this node is to wrap an actual call instruction while recording the spill locations of the meta arguments used for garbage collection and other purposes. The STATEPOINT is modeled as modifying all of those locations to prevent backend optimizations from forwarding the value from before the STATEPOINT to after the STATEPOINT. (Doing so would break relocation semantics for collectors which wish to relocate roots.)
The implementation of STATEPOINT is closely modeled on PATCHPOINT. Eventually, much of the code in this patch will be removed. The long term plan is to merge the functionality provided by statepoints and patchpoints. Merging their implementations in the backend is likely to be a good starting point.
Reviewed by: atrick, ributzka
llvm-svn: 223085
Order matters for this container, it seems (using a forward_list and
replacing the original push_backs with emplace_fronts caused test
failures). I didn't look too deeply into why.
(& in retrospect, I might go back & change some of the forward_lists I
introduced to deques anyway - since most don't require removal, deque is
a more memory-friendly data structure (moderate locality while not
invalidating pointers))
llvm-svn: 222950
Seems libstdc++ on some buildbots is lacking std::map::emplace, which is
weird... reverting while I look into it.
This reverts commit r222937.
llvm-svn: 222939
Pointers and references to map elements are never invalidated (except on
removal, which isn't used here) so there's no need for the indirection
unless there's polymorphism at work.
A little const correctness had to be fixed, since the indirection
allowed some benign const violations.
llvm-svn: 222937
This reverts commit r222632 (and follow-up r222636), which caused a host
of LNT failures on an internal bot. I'll respond to the commit on the
list with a reproduction of one of the failures.
Conflicts:
lib/Target/X86/X86TargetTransformInfo.cpp
llvm-svn: 222936
Since the elements were not polymorphic, the unique_ptr was only used to
avoid pointer invalidation on container resizes - might as well skip the
indirection and use a container with suitable invalidation semantics.
llvm-svn: 222931
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
<16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.
http://reviews.llvm.org/D6191
llvm-svn: 222632
This is primarily done by using SequenceToOffsetTable to reduce the register pressure set tables and then sizing the indices into the tables appropriately. A few other table entries are sized based on content as well. Reduces X86RegisterInfo.o by ~9k.
llvm-svn: 222621
This is to be consistent with StringSet and ultimately with the standard
library's associative container insert function.
This lead to updating SmallSet::insert to return pair<iterator, bool>,
and then to update SmallPtrSet::insert to return pair<iterator, bool>,
and then to update all the existing users of those functions...
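The new convention, in sketch form:
  SmallPtrSet<Value *, 8> Seen;
  auto Res = Seen.insert(V); // now returns pair<iterator, bool>
  if (Res.second) { /* V was newly inserted */ }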
llvm-svn: 222334
Having two ways to do this doesn't seem terribly helpful and
consistently using the insert version (which we already have) seems like
it'll make the code easier to understand for anyone working with standard
data structures. (I also updated many references to the Entry's
key and value to use first() and second instead of getKey{Data,Length,}
and get/setValue - for similar consistency)
Also removes the GetOrCreateValue functions so there's less surface area
to StringMap to fix/improve/change/accommodate move semantics, etc.
llvm-svn: 222319
StringSet is still a bit dodgy in that it exposes the raw iterator of
the StringMap parent, which exposes the weird detail that StringSet
actually has a 'value'... but anyway, this is useful for a handful of
clients that want to reference the newly inserted/persistent string data
in the StringSet/Map/Entry/thing.
llvm-svn: 222302
This reverts commit r222183.
Broke on the MSVC buildbots due to MSVC not producing default move
operations - I'd fix it immediately but just broke my build system a
bit, so backing out until I have a chance to get everything going again.
llvm-svn: 222187
The next step is to actually use unique_ptr in TreePatternNode's
Children vector. That will be more intrusive, and may not work,
depending on exactly how these things are handled (I have a bad
suspicion things are shared more than they should be, making this more
DAG than tree - but if it's really a tree, unique_ptr should suffice)
llvm-svn: 222183
Register class names are now stored in a single table, with indices into the table stored in each MCRegisterClass instead of a pointer. A new method, getRegClassName, is added to MCRegisterInfo and TargetRegisterInfo to look up the string in the table.
llvm-svn: 222118
Order the tablegen-generated fast-isel instruction code based on instruction complexity.
The order in which tablegen fast-isel instruction code is generated is
currently based on the text of the predicate (using string
less-than). This patch changes this to instead use the instruction
complexity. Because the complexities are not unique a C++ multimap is
used instead of a map.
This fixes the problem where code with no predicate always comes out
first (the empty string always compares as less than all other
strings) thus making the code with predicates dead code. See the FMUL
code in PPCFastISel.cpp for an example. It also more closely matches
the normal codegen ordering. Some error checking in the tablegen
fast-isel code is fixed as well.
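A sketch of the ordering container described above (names hypothetical):
  std::multimap<int, std::string> ByComplexity; // complexity -> emitted code text
  ByComplexity.insert(std::make_pair(Complexity, CodeText));
  for (const auto &Entry : ByComplexity)
    OS << Entry.second; // emitted in complexity order, not predicate-text order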
Patch by Bill Seurer.
llvm-svn: 222038
The problem is mostly that variadic output instructions
aren't handled, so the instruction is rejected for having an inconsistent
number of operands, and then the right number of operands
isn't emitted.
llvm-svn: 221117
Summary:
CustomCallingConv is simply a CallingConv that tablegen should not generate the
implementation for. It allows regular CallingConv's to delegate to these custom
functions. This is (currently) necessary for Mips and we cannot use CCCustom
without having to adapt to the different API that CCCustom uses.
This brings us a bit closer to being able to remove
MipsCC::analyzeCallOperands and MipsCC::analyzeFormalArguments in favour of
the common implementation.
No functional change to the targets.
Depends on D3341
Reviewers: vmedic
Reviewed By: vmedic
Subscribers: vmedic, llvm-commits
Differential Revision: http://reviews.llvm.org/D5965
llvm-svn: 221052
FastISel has a fixed set of virtual functions that are overridden by the
tablegen-generated code for each target. These functions are distinguished by
the kinds of operands, e.g., register + immediate = "ri". The FastISel emitter
has been blindly emitting functions with different combinations of operand
kinds, even for combinations that are completely unused by FastISel, e.g.,
"fastEmit_rrr". Change to filter out functions that will be irrelevant for
FastISel and do not bother generating the code for them. Also add explicit
"override" keywords for the virtual functions that are overridden.
llvm-svn: 218838
No functionality change intended.
This implements Elena's idea to put the new additionalOperand outside the
switch to cover all cases
(http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140929/237763.html).
Note the only nontrivial change is in MRMSrcMemFrm. This requires an inclusive
interval of [2, 4] because we have prefix-dependent *optional* immediate
operand.
llvm-svn: 218790