This is another step towards implementing register classes with
parametrized register/spill sizes and value types.
This is an updated version of r298652. The difference is that MCRegisterClass
still contains the register size, available as getPhysRegSize(). The
old function getSize was retained as a temporary measure to avoid build
breakage for out-of-tree targets.
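For out-of-tree code the migration is mechanical; a minimal sketch, assuming the accessor names mentioned above (the helper itself is hypothetical):

  #include "llvm/MC/MCRegisterInfo.h"

  // Hypothetical out-of-tree helper: query the register size through the
  // new accessor instead of the temporarily retained getSize().
  static unsigned physRegSizeOf(const llvm::MCRegisterClass &RC) {
    return RC.getPhysRegSize(); // was: RC.getSize()
  }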
llvm-svn: 298739
This is another step towards implementing register classes with
parametrized register/spill sizes.
Differential Revision: https://reviews.llvm.org/D31299
llvm-svn: 298652
Summary:
This class is a list of AttributeSetNodes corresponding to the function
prototype of a call or function declaration. This class used to be
called ParamAttrListPtr, then AttrListPtr, then AttributeSet. It is
typically accessed by parameter and return value index, so
"AttributeList" seems like a more intuitive name.
Rename AttributeSetImpl to AttributeListImpl to follow suit.
It's useful to rename this class so that we can rename AttributeSetNode
to AttributeSet later. AttributeSet is the set of attributes that apply
to a single function, argument, or return value.
Reviewers: sanjoy, javed.absar, chandlerc, pete
Reviewed By: pete
Subscribers: pete, jholewinski, arsenm, dschuff, mehdi_amini, jfb, nhaehnle, sbc100, void, llvm-commits
Differential Revision: https://reviews.llvm.org/D31102
llvm-svn: 298393
These are needed due to some obscure rules in the standard
about how std::vector selects between copy and move
constructors, which can cause a conforming implementation
to attempt to select the copy constructor of RuleMatcher,
which will fail since std::unique_ptr<> isn't copyable.
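Roughly the pattern at play, as a hedged sketch (the class and member shown are stand-ins, not the actual RuleMatcher): std::vector only moves elements on reallocation when it can do so safely, so spelling out noexcept move operations keeps a conforming implementation from trying the deleted copy constructor.

  #include <memory>
  #include <vector>

  struct RuleMatcherSketch {
    std::unique_ptr<int> State; // move-only member deletes the copy constructor
    RuleMatcherSketch() = default;
    RuleMatcherSketch(RuleMatcherSketch &&) noexcept = default;
    RuleMatcherSketch &operator=(RuleMatcherSketch &&) noexcept = default;
  };

  int main() {
    std::vector<RuleMatcherSketch> Rules;
    Rules.emplace_back(); // vector growth relies on the noexcept move
  }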
llvm-svn: 298294
Summary:
Prepare the way for nested instruction matching support by having actions
like CopyRenderer look up operands in the RuleMatcher rather than a
specific InstructionMatcher. This allows actions to reference any operand
from any matched instruction.
It works by checking the 'shape' of the match and capturing
each matched instruction to a local variable. If the shape is wrong
(not enough operands, leaf nodes where non-leafs are expected, etc.), then
the rule exits early without checking the predicates. Once we've captured
the instructions, we then test the predicates as before (except using the
local variables). If the match is successful, then we render the new
instruction as before using the local variables.
It's not noticeable in this patch but by the time we support multiple
instruction matching, this patch will also cause a significant improvement
to readability of the emitted code since
MRI.getVRegDef(I->getOperand(0).getReg()) will simply be MI1 after
emitCxxCaptureStmts().
This isn't quite NFC because I've also fixed a bug that I'm surprised we
haven't encountered yet. It now checks there are at least the expected
number of operands before accessing them with getOperand().
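A minimal sketch of the kind of guard the emitted matcher now contains (names and shape are illustrative, not the actual generated code):

  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  // Bail out before touching operands that may not exist instead of calling
  // getOperand() blindly.
  static bool matchBinOpShape(const MachineInstr &MI) {
    if (MI.getNumOperands() < 3)
      return false; // wrong shape, exit early without checking predicates
    return MI.getOperand(0).isReg() && MI.getOperand(1).isReg() &&
           MI.getOperand(2).isReg();
  }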
Depends on D30531
Reviewers: t.p.northover, qcolombet, aditya_nandakumar, ab, rovka
Reviewed By: rovka
Subscribers: dberris, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D30535
llvm-svn: 298257
Summary:
Adds a new kind of MachineOperand: MO_Placeholder.
This operand must not appear in the MIR and only exists as a way of
creating an 'uninitialized' operand until a matcher function overwrites it.
Depends on D30046, D29712
Reviewers: t.p.northover, ab, rovka, aditya_nandakumar, javed.absar, qcolombet
Reviewed By: qcolombet
Subscribers: dberris, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D30089
llvm-svn: 297782
Currently we don't enforce that ISD::ANY_EXTEND, ZERO_EXTEND, SIGN_EXTEND, TRUNC, FP_ROUND, and FP_EXTEND have the same number of elements (including scalar) between their input and output, though we document them as such. Up until a few months ago x86 created nodes that violated this rule. That's all been fixed now, and we should enforce the rule going forward.
In order to do this we need to allow SDTCisSameNumEltsAs to support scalar types and not enforce being a vector. If one type is scalar we will force the other type to also be scalar.
Differential Revision: https://reviews.llvm.org/D30878
llvm-svn: 297648
There were some issues in the implementation of enumerate()
preventing it from being used in various contexts. These were
all related to the fact that it did not support llvm's
iterator_facade_base class. So this patch adds support for that
and additionally exposes a new helper method to_vector() that
will evaluate an entire range and store the results in a
vector.
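A usage sketch of the two utilities (the to_vector() size template argument is an assumption here):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/STLExtras.h"
  #include "llvm/ADT/StringRef.h"

  void demo(llvm::ArrayRef<llvm::StringRef> Names) {
    // enumerate() now plays nicely with other range utilities thanks to
    // iterator_facade_base support.
    for (auto E : llvm::enumerate(Names))
      (void)E.index(), (void)E.value();

    // to_vector() evaluates an entire (possibly lazy) range into a vector.
    auto Lengths = llvm::to_vector<8>(
        llvm::map_range(Names, [](llvm::StringRef S) { return S.size(); }));
    (void)Lengths;
  }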
Differential Revision: https://reviews.llvm.org/D30853
llvm-svn: 297633
This reverts r297596.
There were other issues that were making this not work that have been fixed now. Reverting this results in a more accurate table.
llvm-svn: 297602
This exposed that we have several intrinsic instructions that have identical TSFlags to other instructions. We should merge their patterns and kill off the duplicates. I'll fix that in a follow up patch.
llvm-svn: 297596
SelectCode has been returning nullptr since 182dac0 ("SDAG: Make
SelectCodeCommon return void", 2016-05-10). Make SelectCode also
return void instead, as all callers have been updated.
Patch by Sven van Haastregt.
Review: https://reviews.llvm.org/D30497
llvm-svn: 297377
Summary:
This will allow future patches to inspect the details of the LLT. The implementation is now split between
the Support and CodeGen libraries to allow TableGen to use this class without introducing layering concerns.
Thanks to Ahmed Bougacha for finding a reasonable way to avoid the layering issue and providing the version of this patch without that problem.
The problem with the previous commit appears to have been that TableGen was including CodeGen/LowLevelType.h instead of Support/LowLevelTypeImpl.h.
Reviewers: t.p.northover, qcolombet, rovka, aditya_nandakumar, ab, javed.absar
Subscribers: arsenm, nhaehnle, mgorny, dberris, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D30046
llvm-svn: 297241
More module problems. This time it only showed up in the stage 2 compile of
clang-x86_64-linux-selfhost-modules-2 but not the stage 1 compile.
Somehow, this change causes the build to need Attributes.gen before it's been
generated.
llvm-svn: 297188
Summary:
This will allow future patches to inspect the details of the LLT. The implementation is now split between
the Support and CodeGen libraries to allow TableGen to use this class without introducing layering concerns.
Thanks to Ahmed Bougacha for finding a reasonable way to avoid the layering issue and providing the version of this patch without that problem.
Reviewers: t.p.northover, qcolombet, rovka, aditya_nandakumar, ab, javed.absar
Subscribers: arsenm, nhaehnle, mgorny, dberris, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D30046
llvm-svn: 297177
X86EvexToVex machine instruction pass compresses EVEX encoded instructions by replacing them with their identical VEX encoded instructions when possible.
It uses two large, manually maintained tables that map the EVEX instructions to their identical VEX versions.
This TableGen backend replaces the tables by automatically generating them.
Differential Revision: https://reviews.llvm.org/D30451
llvm-svn: 297127
An extra const on the StringRef argument meant that MSVC complained about it not correctly overriding OperandPredicateMatcher::emitCxxPredicateExpr (which didn't have the const).
llvm-svn: 296138
Summary:
This isn't testable for AArch64 by itself so this patch also adds
support for constant immediates in the pattern and physical
register uses in the result.
The new IntOperandMatcher matches the constant in patterns such as
'(set $rd:GPR32, (G_XOR $rs:GPR32, -1))'. It's always safe to fold
immediates into an instruction so this is the first rule that will match
across multiple BB's.
The Renderer hierarchy is responsible for adding operands to the result
instruction. Renderers can copy operands (CopyRenderer) or add physical
registers (in particular %wzr and %xzr) to the result instruction
in any order (OperandMatchers now import the operand names from
SelectionDAG to allow renderers to access any operand). This allows us to
emit the result instruction for:
%1 = G_XOR %0, -1 --> %1 = ORNWrr %wzr, %0
%1 = G_XOR -1, %0 --> %1 = ORNWrr %wzr, %0
although the latter is untested since the matcher/importer has not been
taught about commutativity yet.
Added BuildMIAction which can build new instructions and mutate them where
possible. W.r.t the mutation aspect, MatchActions are now told the name of
an instruction they can recycle and BuildMIAction will emit mutation code
when the renderers are appropriate. They are appropriate when all operands
are rendered using CopyRenderer and the indices are the same as the matcher.
This currently assumes that all operands have at least one matcher.
Finally, this change also fixes a crash in
AArch64InstructionSelector::select() caused by an immediate operand
passing isImm() rather than isCImm(). This was uncovered by the other
changes and was detected by existing tests.
Depends on D29711
Reviewers: t.p.northover, ab, qcolombet, rovka, aditya_nandakumar, javed.absar
Reviewed By: rovka
Subscribers: aemerson, dberris, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D29712
llvm-svn: 296131
The 'Kind' member used in RTTI for InstructionPredicateMatcher was not
initialized, but this went undetected since I always ended up with the correct value.
llvm-svn: 296126
Summary:
This makes more important rules have priority over less important rules.
For example, '%a = G_ADD $b:s64, $c:s64' has priority over
'%a = G_ADD $b:s32, $c:s32'. Previously these rules were emitted in the
correct order by chance.
NFC in this patch but it is required to make the next patch work correctly.
Depends on D29710
Reviewers: t.p.northover, ab, qcolombet, aditya_nandakumar, rovka
Reviewed By: ab, rovka
Subscribers: javed.absar, dberris, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D29711
llvm-svn: 296121
Summary:
Each OperandPredicateMatcher shouldn't need to know how to generate the expression
to reference a MachineOperand. The OperandMatcher should provide it.
In addition to separating responsibilities, this also lays some groundwork for
decoupling source patterns from destination patterns to allow invented operands
or operands provided by GlobalISel's equivalent to the ComplexPattern<> class.
Depends on D29709
Reviewers: t.p.northover, ab, rovka, qcolombet, aditya_nandakumar
Reviewed By: ab
Subscribers: dberris, kristof.beyls, llvm-commits, igorb
Differential Revision: https://reviews.llvm.org/D29710
llvm-svn: 295668
Summary:
In the near future the rules will be sorted between these two steps to
ensure that more important rules are not prevented by less important ones.
Reviewers: t.p.northover, ab, rovka, qcolombet, aditya_nandakumar
Reviewed By: ab
Subscribers: dberris, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D29709
llvm-svn: 295661
Add the WIG value to all AVX instructions that ignore the W-bit in their encoding, instead of giving them the default value of 0.
This patch is needed for a follow up work on EVEX2VEX pass (replacing EVEX encoded instructions with their corresponding VEX version when possible).
Differential Revision: https://reviews.llvm.org/D29876
llvm-svn: 295643
To help assist in debugging ISEL or to prioritize GlobalISel backend
work, this patch adds two more tables to <Target>GenISelDAGISel.inc -
one which contains the patterns that are used during selection and the
other containing the include source location of the patterns.
Enabled through the CMake variable LLVM_ENABLE_DAGISEL_COV.
llvm-svn: 295081
Instead of emitting the matcher code directly, return the rule matcher
and the skip reason as an Expected<RuleMatcher>.
This will let us record all matchers and process them before emission.
It's a somewhat unconventional use of Error, but it's nicer than, say,
std::pair, because of the bool conversions.
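A hedged sketch of the resulting shape (names are illustrative; the skip reason travels in the Error):

  #include "llvm/Support/Error.h"
  using namespace llvm;

  struct RuleMatcherSketch {};

  // Return either a usable matcher or the reason the pattern was skipped.
  static Expected<RuleMatcherSketch> importRule(bool Supported) {
    if (!Supported)
      return make_error<StringError>("unsupported pattern",
                                     inconvertibleErrorCode());
    return RuleMatcherSketch();
  }

  static void runImport() {
    auto MatcherOrErr = importRule(/*Supported=*/false);
    if (!MatcherOrErr) {
      consumeError(MatcherOrErr.takeError()); // record or report the skip reason
      return;
    }
    RuleMatcherSketch &M = *MatcherOrErr;
    (void)M;
  }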
Differential Revision: https://reviews.llvm.org/D29743
llvm-svn: 294706
Inside an alias group, when ordering instruction aliases, we rely
on the priority field to sort them.
When the priority is not set or more generally when there is a tie between
two aliases, we used to rely on the lexicographic order. However, this
order can change for anonymous records when more instructions, intrinsics,
etc. are inserted.
For instance, given two anonymous records r1 and r2 with the respective names
A_999 and A_1000, their lexicographic order will be r2 then r1. Now, if
an instruction is added before them, their names become respectively
A_1000 and A_1001, so the lexicographic order becomes r1 then r2, i.e.,
it changed.
If that happens in an alias group, the assembly output would prefer a
different alias for no apparent good reason.
A way to fix that is to use proper priority for all aliases, but we
can also make the tie breaker comparison smarter and use a deterministic
ordering. This is what this patch does.
llvm-svn: 294695
This patch checks the number of operands in the resulting
instruction instead of just the alias, then skips over
tied operands when generating the printing method.
This allows us to generate the preferred assembly syntax
for the AArch64 'ins' instruction, which should always be
displayed as 'mov' according to the ARMARM.
Several unit tests have changed as a result, but only to
reflect the preferred disassembly.
Some other InstAlias patterns (movk/bic/orr) needed a
slight adjustment to stop them becoming the default
and breaking other unit tests.
Patch by Graham Hunter.
Differential Revision: https://reviews.llvm.org/D29219
llvm-svn: 294437
Summary:
The Mips target is the only user of mnemonicIsValid. This patch
moves this method from AsmMatcherEmitter.cpp to MipsAsmParser.cpp,
getting rid of the method in all other targets where it generated
warnings about an unused function.
Patch by Gonsolo.
Reviewers: craig.topper
Reviewed By: craig.topper
Subscribers: sdardis
Differential Revision: https://reviews.llvm.org/D28748
llvm-svn: 294400
Refactor a helper function, FactorNodes, to search for a push node in constant space. This resolves a problem in a not-yet-upstreamed backend where a recursive pattern blew the call stack (at a depth of 255) under a debug build of tablegen. No functional change so no new test coverage. The change is minimal to avoid disturbing existing behaviour.
Differential Revision: https://reviews.llvm.org/D29080
llvm-svn: 294230
This lets us split out PatternToMatch from the top-level RuleMatcher,
where it doesn't really belong. That, in turn, lets us eventually
generate RuleMatchers from non-SelectionDAG sources.
llvm-svn: 294076
Use the qualified name for StringLiteral (llvm::StringLiteral) when
generating the sources. This is needed as the generated files may be
used out-of-tree (e.g. swift) where you may not have a
`using namespace llvm;` resulting in an undefined lookup.
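For illustration, the emitted constant now looks roughly like this (the name is made up):

  #include "llvm/ADT/StringRef.h"

  // Fully qualified so the generated file builds without 'using namespace llvm;'.
  static constexpr llvm::StringLiteral HypotheticalSeparator(", ");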
llvm-svn: 293577
Tablegen emitted a warning when the fast isel emitter created dead
code by emitting a pattern that has no predicate before a pattern
that has one.
This should be an error but was originally only a warning because the X86
backend had a buggy definition that unintentionally caused this to be hit
(PR21575). That has been fixed a while ago (r222094), so it's safe to
upgrade the warning to an error.
llvm-svn: 293534
Summary:
AMDGPU has two register classes with the same set of registers, and this
was causing this tablegen backend to get stuck in infinite recursion.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: tpr, wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D29049
llvm-svn: 293483
We had various variants of defining dump() functions in LLVM. Normalize
them (this should just consistently implement the things discussed in
http://lists.llvm.org/pipermail/cfe-dev/2014-January/034323.html).
For reference:
- Public headers should just declare the dump() method but not use
LLVM_DUMP_METHOD or #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
- The definition of a dump method should look like this:
#if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
LLVM_DUMP_METHOD void MyClass::dump() {
// print stuff to dbgs()...
}
#endif
llvm-svn: 293359
I think the initial version of r293172 was trying:
std::forward<Args...>(args)...
which doesn't compile. This seems like the correct way:
std::forward<Args>(args)...
llvm-svn: 293214
Summary:
This should make it possible to easily add everything needed to import all
the existing SelectionDAG rules. It should also serve the likely
kinds of GlobalISel rules (some of which are not currently representable
in SelectionDAG) once we've nailed down the tablegen definition for that.
The hierarchy is as follows:
MatcherRule - A matching rule. Currently used to emit C++ ISel code but will
| also be used to emit test cases and tablegen definitions in the
| near future.
|- Instruction(s) - Represents the instruction to be matched.
|- Instruction Predicate(s) - Test the opcode, arithmetic flags, etc. of an
| instruction.
\- Operand(s) - Represents a particular operand of the instruction. In the
| future, there may be subclasses to test the same predicates
| on multiple operands (including for variadic instructions).
\ Operand Predicate(s) - Test the type, register bank, etc. of an operand.
This is where the ComplexPattern equivalent
will be represented. It's also where
nested-instruction matching will live, as a
predicate that follows the DefUse chain to the
Def and tests a MatcherRule from that position.
Support for multiple instruction matchers in a rule has been retained from
the existing code but has been adjusted to assert when it is used.
Previously it would silently drop all but the first instruction matcher.
The tablegen-erated file is not functionally changed but has more
parentheses and no longer attempts to format the if-statements since
keeping track of the indentation is tricky in the presence of the matcher
hierarchy. It would be nice to have CMake's tablegen() run the output
through clang-format (when available) so we don't have to complicate
TableGen with pretty-printing.
It's also worth mentioning that this hierarchy will also be able to emit
TableGen definitions and test cases in the near future. This is the reason
for favouring explicit emit*() calls rather than the << operator.
Reviewers: aditya_nandakumar, rovka, t.p.northover, qcolombet, ab
Reviewed By: ab
Subscribers: igorb, dberris, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D28942
llvm-svn: 293172
Summary:
Adds a RegisterBank tablegen class that can be used to declare the register
banks and an associated tablegen pass to generate the necessary code.
Changes since first commit attempt:
* Added missing guards
* Added more missing guards
* Found and fixed a use-after-free bug involving Twine locals
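The Twine pitfall mentioned above, as a minimal sketch (not the actual buggy code): a Twine only references its operands, so binding one built from temporaries to a local leaves it dangling.

  #include "llvm/ADT/Twine.h"
  #include <string>

  std::string makeBankName(const std::string &Target) {
    // BAD: the temporary concatenation dies at the end of the statement,
    // leaving the Twine pointing at freed storage.
    // llvm::Twine Dangling = Target + "RegBank";

    // GOOD: materialize the result immediately instead of storing a Twine.
    return (llvm::Twine(Target) + "RegBank").str();
  }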
Reviewers: t.p.northover, ab, rovka, qcolombet
Reviewed By: qcolombet
Subscribers: aditya_nandakumar, rengolin, kristof.beyls, vkalintiris, mgorny, dberris, llvm-commits, rovka
Differential Revision: https://reviews.llvm.org/D27338
llvm-svn: 292478
Summary:
Adds a RegisterBank tablegen class that can be used to declare the register
banks and an associated tablegen pass to generate the necessary code.
Changes since last commit:
The new tablegen pass is now correctly guarded by LLVM_BUILD_GLOBAL_ISEL and
this should fix the buildbots however it may not be the whole fix. The previous
buildbot failures suggest there may be a memory bug lurking that I'm unable to
reproduce (including when using asan) or spot in the source. If they re-occur
on this commit then I'll need assistance from the bot owners to track it down.
Reviewers: t.p.northover, ab, rovka, qcolombet
Reviewed By: qcolombet
Subscribers: aditya_nandakumar, rengolin, kristof.beyls, vkalintiris, mgorny, dberris, llvm-commits, rovka
Differential Revision: https://reviews.llvm.org/D27338
llvm-svn: 292367
This patch fixes bugzilla 31576 (https://llvm.org/bugs/show_bug.cgi?id=31576).
"data32" instruction prefix was not defined in the llvm.
An exception had to be added to the X86 tablegen and AsmPrinter because both "data16" and "data32" are encoded to 0x66 (but in different modes).
Differential Revision: https://reviews.llvm.org/D28468
llvm-svn: 292352
Summary:
Adds a RegisterBank tablegen class that can be used to declare the register
banks and an associated tablegen pass to generate the necessary code.
Reviewers: t.p.northover, ab, rovka, qcolombet
Subscribers: aditya_nandakumar, rengolin, kristof.beyls, vkalintiris, mgorny, dberris, llvm-commits, rovka
Differential Revision: https://reviews.llvm.org/D27338
llvm-svn: 292132
We were frequently checking for a list of types and the different types
conveyed no real information. So lump them together explicitly.
llvm-svn: 292095
This adds a basic tablegen backend that analyzes the SelectionDAG
patterns to find simple ones that are eligible for GlobalISel-emission.
That's similar to FastISel, with one notable difference: we're not fed
ISD opcodes, so we need to map the SDNode operators to generic opcodes.
That's done using GINodeEquiv in TargetGlobalISel.td.
Otherwise, this is mostly boilerplate, and lots of filtering of any kind
of "complicated" pattern. On AArch64, this is sufficient to match G_ADD
up to s64 (to ADDWrr/ADDXrr) and G_BR (to B).
Differential Revision: https://reviews.llvm.org/D26878
llvm-svn: 290284
Specifically avoid implicit conversions from/to integral types to
avoid potential errors when changing the underlying type. For example,
a typical initialization of a "full" mask was "LaneMask = ~0u", which
would result in a value of 0x00000000FFFFFFFF if the type was extended
to uint64_t.
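The failure mode, sketched with a plain typedef for illustration (the in-tree type instead wraps the value and forces explicit construction):

  #include <cstdint>

  using LaneMaskTy = std::uint64_t; // hypothetical widened underlying type

  // With implicit integral conversions, an "all lanes" initializer silently
  // truncates once the underlying type is widened.
  constexpr LaneMaskTy Broken = ~0u;              // 0x00000000FFFFFFFF
  constexpr LaneMaskTy AllLanes = ~LaneMaskTy(0); // what was intended
  static_assert(Broken != AllLanes, "the implicit conversion hides the bug");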
Differential Revision: https://reviews.llvm.org/D27454
llvm-svn: 289820
clang -target arm deprecated-asm.s -c
deprecated-asm.s:30:9: warning: use of SP or PC in the list is deprecated
stmia r4!, {r12-r14}
We need an option that can disable it.
Patch by Yin Ma!
Reviewers: joey, echristo, weimingz
Subscribers: llvm-commits, aemerson
Differential Revision: https://reviews.llvm.org/D27219
llvm-svn: 288734
Bring the sorting check back that I removed in r288655 but put it under
EXPENSIVE_CHECKS this time. Also document that the check isn't
purely about having a sorted list but also about operator < having the
correct transitive behavior.
Apply the same to the other check in the file.
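A hedged sketch of the guarded form (not the exact in-tree code):

  #include <algorithm>
  #include <cassert>
  #include <vector>

  void sortInfos(std::vector<int> &Infos) {
    std::stable_sort(Infos.begin(), Infos.end());
  #ifdef EXPENSIVE_CHECKS
    // Only verify in expensive-checks builds; the check also documents that
    // operator< must be a proper strict weak ordering for the sort to be valid.
    assert(std::is_sorted(Infos.begin(), Infos.end()) &&
           "operator< is not transitive or the list is not sorted");
  #endif
  }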
llvm-svn: 288693
A debug build of AsmMatcherEmitter would use a quadratic algorithm to
check whether std::stable_sort() actually sorted. Let's hope the authors
of our C++ standard library did that testing for us. Removing the check
gives a 3x speedup in the X86 case.
llvm-svn: 288655
Tablegen's -gen-instr-info pass has a bug in its emitEnums() routine.
The function intends for values in a vector to be deduplicated, but it
accidentally skips over elements after performing a deletion.
I think there are smarter ways of doing this deduplication, but we can
do that in a follow-up commit if there's interest. See the thread:
[PATCH] TableGen InstrMapping Bug fix.
Patch by Tyler Kenney!
llvm-svn: 288408
There were several cases in X86 where we were unable to fully factor a ScopeMatcher but created nested ScopeMatchers for some portions of it. Then we created a SwitchType that split it up and further factored it so that we ended up with something like this:
SwitchType
  Scope
    Scope
      Sequence of matchers
      Some other sequence of matchers
    EndScope
    Another sequence of matchers
  EndScope
...Next type
This change turns it into this:
SwitchType
  Scope
    Sequence of matchers
    Some other sequence of matchers
    Another sequence of matchers
  EndScope
...Next type
Several other in-tree targets had similar nested scopes like this. Overall this doesn't save many bytes, but makes the isel output a little more regular.
llvm-svn: 287624
Summary:
For Sparc the namespace (SP) is different from the target name (Sparc),
which causes the name of the array in this declaration to differ from
the name used in the definition.
Patch by Daniel Cederman.
Reviewers: jyknight
Subscribers: llvm-commits, jyknight
Differential Revision: https://reviews.llvm.org/D23650
llvm-svn: 287528
Previously we were factoring when the ScopeMatcher was initially created, but it might get more Matchers added to it later. Delay factoring until we have fully created/populated the ScopeMatchers.
This reduces X86 isel tables by 154 bytes.
llvm-svn: 287520
Summary:
* ARM is omitted from this patch because this check appears to expose bugs in this target.
* Mips is omitted from this patch because this check either detects bugs or deliberate
emission of instructions that don't satisfy their predicates. One deliberate
use is the SYNC instruction where the version with an operand is correctly
defined as requiring MIPS32 while the version without an operand is defined
as an alias of 'SYNC 0' and requires MIPS2.
* X86 is omitted from this patch because it doesn't use the tablegen-erated
MCCodeEmitter infrastructure.
Patches for ARM and Mips will follow.
Depends on D25617
Reviewers: tstellarAMD, jmolloy
Subscribers: wdng, jmolloy, aemerson, rengolin, arsenm, jyknight, nemanjai, nhaehnle, tstellarAMD, llvm-commits
Differential Revision: https://reviews.llvm.org/D25618
llvm-svn: 287439
Summary:
This change is preparation for a change that will allow targets to verify that the instructions
they emit meet the predicates they specify. This is useful to ensure that C++
legalization/lowering/instruction-selection doesn't incorrectly select code for a different
subtarget than intended. Such cases are not caught by the integrated assembler when emitting
instructions directly to an object file.
Reviewers: qcolombet
Subscribers: qcolombet, beanz, mgorny, llvm-commits, modocache
Differential Revision: https://reviews.llvm.org/D25614
llvm-svn: 286945
Suspected to be the cause of a sanitizer-windows bot failure:
Assertion failed: isImm() && "Wrong MachineOperand accessor", file C:\b\slave\sanitizer-windows\llvm\include\llvm/CodeGen/MachineOperand.h, line 420
llvm-svn: 286385
A relocatable immediate is either an immediate operand or an operand that
can be relocated by the linker to an immediate, such as a regular symbol
in non-PIC code.
Start using relocImm for 32-bit and 64-bit MOV instructions, and for operands
of type "imm32_su". Remove a number of now-redundant patterns.
Differential Revision: https://reviews.llvm.org/D25812
llvm-svn: 286384
Two new intrinsics covering AVX-512 compress/expand functionality.
This implementation includes syntax, DAG builder, operation lowering and tests.
Does not include: handling of illegal data types, codegen prepare pass and the cost model.
llvm-svn: 285876
As it stands, the OperandMatchResultTy is only included in the generated
header if there is custom operand parsing. However, almost all backends
make use of MatchOperand_Success and friends from OperandMatchResultTy for
e.g. parseRegister. This is a pain when starting an AsmParser for a new
backend that doesn't yet have custom operand parsing. Move the enum to
MCTargetAsmParser.h.
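A sketch of the typical consumer that motivates hoisting the enum (the function name and operand vector type here are illustrative):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/MC/MCParser/MCParsedAsmOperand.h"
  #include "llvm/MC/MCParser/MCTargetAsmParser.h"
  #include <memory>
  using namespace llvm;

  // Even a backend without custom operand parsing wants to report results
  // with the shared enum from e.g. its parseRegister implementation.
  static OperandMatchResultTy
  parseSomeRegister(SmallVectorImpl<std::unique_ptr<MCParsedAsmOperand>> &Ops) {
    (void)Ops;
    bool SawRegister = false; // stand-in for real parsing logic
    return SawRegister ? MatchOperand_Success : MatchOperand_NoMatch;
  }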
This patch is a prerequisite for D23563
Differential Revision: https://reviews.llvm.org/D23496
llvm-svn: 285705
The CodeGenSchedModels::checkCompleteness routine in TableGen/
CodeGenSchedule.cpp is supposed to verify for each processor
model that is marked as "complete" that it actually defines a
scheduling class for each instruction.
However, this did not work correctly due to an incorrect
check whether a scheduling class has an itinerary.
Reviewer: atrick
Differential revision: https://reviews.llvm.org/D26156
llvm-svn: 285622
The sanitizer-windows bot turned red with:
FAILED: utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/IntrinsicEmitter.cpp.obj
C:\PROGRA~2\MICROS~1.0\VC\bin\AMD64_~2\cl.exe ... -c
C:\...\llvm\utils\TableGen\IntrinsicEmitter.cpp
c:\...\llvm\utils\tablegen\intrinsicemitter.cpp(254) :
fatal error C1001: An internal error has occurred in the compiler.
http://lab.llvm.org:8011/builders/sanitizer-windows/builds/114/steps/build%20clang%20lld/logs/stdio
llvm-svn: 285089
Summary:
SetVector already used DenseSet, but SmallSetVector used std::set. This
leads to surprising performance differences. Moreover, it means that
the set of key types accepted by SetVector and SmallSetVector are
quite different!
In order to make this change, we had to convert some callsites that used
SmallSetVector<std::string, N> to use SmallSetVector<CachedHashString, N>
instead.
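A sketch of what one of those callsite conversions looks like (the set and function names are hypothetical):

  #include "llvm/ADT/CachedHashString.h"
  #include "llvm/ADT/SetVector.h"
  #include "llvm/ADT/StringRef.h"

  // std::string has no DenseMapInfo, so the now DenseSet-backed SmallSetVector
  // needs a hashable owning string type instead.
  static void rememberName(llvm::SmallSetVector<llvm::CachedHashString, 16> &Seen,
                           llvm::StringRef Name) {
    Seen.insert(llvm::CachedHashString(Name));
  }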
Reviewers: timshen
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25648
llvm-svn: 284887
-debug-only=subtarget-emitter prints a lot of machine model diagnostics.
This prunes the output so that the "No machine model for XXX on processor YYY"
only appears when there is definitely no machine model for that opcode.
Previously it was printing that error even if the opcode was covered by
a more general scheduling class.
<rdar://problem/15919845> [TableGen][CodeGenSchedule] Debug output does not help spotting the missing scheduling classes
llvm-svn: 284452
The core of the change is supposed to be NFC, however it also fixes
what I believe was an undefined behavior when calling:
va_start(ValueArgs, Desc);
with Desc being a StringRef.
Differential Revision: https://reviews.llvm.org/D25342
llvm-svn: 283671
descriptions now tag add instructions, and the Hexagon backend is using this to
identify loop induction statements.
Patch by Sam Parker and Sjoerd Meijer.
Differential Revision: https://reviews.llvm.org/D23601
llvm-svn: 281304
Summary:
This allows specifying instructions that are available only in a specific assembler variant. If AsmVariantName is specified then the instruction will be present only in the MatchTable for this variant. If not specified then assembler variants will be determined based on the AsmString.
This also allows splitting assembler match tables in the same way as is done in the disassembler.
Reviewers: ab, tstellarAMD, craig.topper, vpykhtin
Subscribers: wdng
Differential Revision: https://reviews.llvm.org/D24249
llvm-svn: 280952
This mapping is between pointers, which DenseMap is particularly good
at. Most targets aren't really affected, but if there's a lot of
subregister composition this can shave off a good chunk of time from
generating registers.
llvm-svn: 279875
This tries to keep all the ModRM memory and register forms in their own regions of the encodings, hoping to simplify some of the switch statements that operate on these encodings.
llvm-svn: 279422
Summary:
This is one possible solution to the problem of ignoring constraints that Simon
raised in D21473 but it's a bit of a hack.
The integrated assembler currently ignores violations of the tied register
constraints when the operands involved in a tie are both present in the AsmText.
For example, 'dati $rs, $rt, $imm' with the '$rs = $rt' constraint will silently replace
$rt with $rs. So 'dati $2, $3, 1' is processed as if the user provided
'dati $2, $2, 1' without any diagnostic being emitted.
This is difficult to solve properly because there are multiple parts of the
matcher that are silently forcing these constraints to be met. Tied operands are
rendered to instructions by cloning previously rendered operands but this is
unnecessary because the matcher was already instructed to render the operand it
would have cloned. This is also unnecessary because earlier code has already
replaced the MCParsedOperand with the one it was tied to (so the parsed input
is matched as if it were 'dati <RegIdx 2>, <RegIdx 2>, <Imm 1>'). As a result,
it looks like fixing this properly amounts to a rewrite of the tied operand
handling which affects all targets.
This patch however, merely inserts a checking hook just before the
substitution of MCParsedOperands and the Mips target overrides it. It's not
possible to accurately check the registers are the same this early (because
numeric registers haven't been bound to a register class yet) so it cheats a
bit and checks that the tokens that produced the operand are lexically
identical. This works because tied registers need to have the same register
class but it does have a flaw. It will reject 'dati $4, $a0, 1' for violating
the constraint even though $a0 ends up as the same register as $4.
Reviewers: sdardis
Subscribers: dsanders, llvm-commits, sdardis
Differential Revision: https://reviews.llvm.org/D21994
llvm-svn: 276867
They were all auto-incremented from 0 anyway, and I'm getting really annoying
conflicts and runtime failures when different people add more for GlobalISel
(and even when I'm refactoring my own patches).
NFC.
llvm-svn: 276204
This is for a situation where the encoding for a register may be
different depending on the specific operand. For some instructions,
we want to apply additional restrictions beyond the encoding's
constraints.
In AMDGPU some operands are VSrc_32, using the VS_32 pseudo register
class which accept VGPRs, SGPRs, or immediates in the encoding.
Some specific instructions with the same encoding operand do not want
to allow immediates or SGPRs, but the encoding format is different
in this case than a regular VGPR_32 operand.
This allows specifying the encoding should be treated the same
without introducing yet another dummy register class.
llvm-svn: 275929
This splits out the intrinsic table such that generic intrinsics come
first and target specific intrinsics are grouped by target. From here
we can find out which target an intrinsic is for or differentiate
between generic and target intrinsics.
The motivation here is to make it easier to move target specific
intrinsic handling out of generic code.
llvm-svn: 275575
Summary:
In this patch we implement the following parts of XRay:
- Supporting a function attribute named 'function-instrument' which currently only supports 'xray-always'. We should be able to use this attribute for other instrumentation approaches.
- Supporting a function attribute named 'xray-instruction-threshold' used to determine whether a function is instrumented with a minimum number of instructions (IR instruction counts).
- X86-specific nop sleds as described in the white paper.
- A machine function pass that adds the different instrumentation marker instructions at a very late stage.
- A way of identifying which return opcode is considered "normal" for each architecture.
There are some caveats here:
1) We don't handle PATCHABLE_RET on platforms other than x86_64 yet -- this means that if IR used PATCHABLE_RET directly instead of a normal ret, instruction lowering for that platform might do the wrong thing. We think this should be handled at instruction selection time to by default be unpacked for platforms where XRay is not available yet.
2) The generated section for X86 is different from what is described from the white paper for the sole reason that LLVM allows us to do this neatly. We're taking the opportunity to deviate from the white paper from this perspective to allow us to get richer information from the runtime library.
Reviewers: sanjoy, eugenis, kcc, pcc, echristo, rnk
Subscribers: niravd, majnemer, atrick, rnk, emaste, bmakam, mcrosier, mehdi_amini, llvm-commits
Differential Revision: http://reviews.llvm.org/D19904
llvm-svn: 275367
Ran clang-format to remove the namespace indentation, and stopped
repeating names in doc comments since I was updating every line
anyway.
llvm-svn: 274919
The way the named arguments for various system instructions are handled at the
moment has a few problems:
- Large-scale duplication between AArch64BaseInfo.h and AArch64BaseInfo.cpp
- That weird Mapping class that I have no idea what I was on when I thought
it was a good idea.
- Searches are performed linearly through the entire list.
- We print absolutely all registers in upper-case, even though some are
canonically mixed case (SPSel for example).
- The ARM ARM specifies sysregs in terms of 5 fields, but those are relegated
to comments in our implementation, with a slightly opaque hex value
indicating the canonical encoding LLVM will use.
This adds a new TableGen backend to produce efficiently searchable tables, and
switches AArch64 over to using that infrastructure.
llvm-svn: 274576
Summary:
This complements the earlier addition of IntrWriteMem and IntrWriteArgMem
LLVM intrinsic properties, see D18291.
Also start using the attribute for memset, memcpy, and memmove intrinsics,
and remove their special-casing in BasicAliasAnalysis.
Reviewers: reames, joker.eph
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D18714
llvm-svn: 274485
The previous code would always do 1 or 2 prefix compares;
explicitly only do one.
This speeds up debug -gen-asm-matcher by ~10% (e.g. X86: 40s -> 35s).
llvm-svn: 273583
Currently isComplete = 1 requires that every instruction must
be described, declared unsupported or marked as having no
scheduling information for a processor.
For some backends such as MIPS, this requirement entails
long regex lists of instructions that are unsupported.
This patch teaches Tablegen to skip over instructions that
are associated with unsupported feature when checking if the
scheduling model is complete.
Patch by: Daniel Sanders
Contributions by: Simon Dardis
Reviewers: MatzeB
Differential Revision: http://reviews.llvm.org/D20522
llvm-svn: 273551
looking for it along $PATH. This allows installs of LLVM tools outside of
$PATH to find the symbolizer and produce pretty backtraces if they crash.
llvm-svn: 272232
will appear after a blank. This assumption does not hold in the ARM
target.
Patch by: Roger Ferrer Ibanez
Differential Revision: http://reviews.llvm.org/D20234
llvm-svn: 271666
consider the Predicates attached to InstAlias when generating printAliasInstr.
This forces users of printAliasInstr to check those predicates beforehand.
This commit adds them in the condition set of the IAPrinter object.
Patch by: Roger Ferrer Ibanez
Differential Revision: http://reviews.llvm.org/D20233
llvm-svn: 271665
This means SelectCode unconditionally returns nullptr now. I'll follow
up with a change to make that return void as well, but it seems best
to keep that one very mechanical.
This is part of the work to have Select return void instead of an
SDNode *, which is in turn part of llvm.org/pr26808.
llvm-svn: 269136
Summary:
This change allows specifying a "DefaultMethod" for an optional operand (IsOptional = 1) in AsmOperandClass that returns the default value for the operand. This is used in convertToMCInst to set default values in MCInst.
Previously, if you wanted to set a default value for an operand you had to create a custom converter method. With this change it is possible to use standard converters even when optional operands are present.
Reviewers: tstellarAMD, ab, craig.topper
Subscribers: jyknight, dsanders, arsenm, nhaustov, llvm-commits
Differential Revision: http://reviews.llvm.org/D18242
llvm-svn: 268726
Pattern predicates already appear to be emitted as far down as they can be. The optimization was making no changes on any in-tree target.
llvm-svn: 268705
This opcode never happens in practice, and yet the logic we have in
place to handle it would be undefined behaviour if we ever executed
it. Remove it rather than trying to refactor code that's never
reached.
llvm-svn: 268692
This function performs the reverse computation of
composeSubRegIndexLaneMask().
It will be used in the upcoming "DetectDeadLanes" pass.
llvm-svn: 267849
Previously using lanemasks on registers without any subregisters was not
well defined. This commit extends TargetRegisterInfo/tablegen to:
- Report a lanemask of 1 for regclasses without subregisters
- Do the right thing when mapping a 0/1 lanemask from a class without
subregisters into a class with subregisters in
TargetRegisterInfo::composeSubRegIndexLaneMasks().
This will be used in the upcoming "DetectDeadLanes" patch.
llvm-svn: 267848
Summary:
IntrReadWriteArgMem simply becomes IntrArgMemOnly.
So there are fewer intrinsic properties, they express their orthogonality
better, and they correspond more closely to the corresponding IR attributes.
Suggested by: Philip Reames
Reviewers: joker.eph, reames, tstellarAMD
Subscribers: jholewinski, arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D19291
llvm-svn: 267021
Summary:
LLVMAttribute has outlived its utility and is becoming a problem for C API users that want to use all the LLVM attributes. In order to help moving away from LLVMAttribute in a smooth manner, this diff introduces LLVMGetAttrKindIDInContext, which can be used instead of the enum values.
See D18749 for reference.
Reviewers: Wallbraker, whitequark, joker.eph, echristo, rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19081
llvm-svn: 266842
Summary:
This property is used to mark an intrinsic that only writes to memory, but
neither reads from memory nor has other side effects.
An example where this is useful is the llvm.amdgcn.buffer.store.format.*
intrinsic, which corresponds to a store instruction that goes through a special
buffer descriptor rather than through a plain pointer.
With this property, the intrinsic should still be handled as having side
effects at the LLVM IR level, but machine scheduling can make smarter
decisions.
Reviewers: tstellarAMD, arsenm, joker.eph, reames
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D18291
llvm-svn: 266826
Removed some unused headers, replaced some headers with forward class declarations.
Found using simple scripts like this one:
clear && ack --cpp -l '#include "llvm/ADT/IndexedMap.h"' | xargs grep -L 'IndexedMap[<]' | xargs grep -n --color=auto 'IndexedMap'
Patch by Eugene Kosov <claprix@yandex.ru>
Differential Revision: http://reviews.llvm.org/D19219
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266595
This required changing several places to print VT enums as strings instead of raw ints since the proper method to use to print became ambiguous. This is probably an improvement anyway.
This also appears to save ~8K from an x86 self host build of llc.
llvm-svn: 266562
1) We need to add this flag prior to adding any other, in case the user has
specified a -fmodule-cache-path= flag in their custom CXXFLAGS. Such a flag
causes -Werror builds to fail, and thus all config checks fail, until we add
the corresponding -fmodules flag. The modules selfhost bot does this, for
instance.
2) Delete module maps that were putting .cpp files into modules.
3) Enable -fmodules-local-submodule-visibility, to get proper module
visibility rules applied across submodules of the same module. Disable
-fmodules for C builds, since that flag is not available there.
llvm-svn: 266502
Summary: LLVMAttribute has outlived its utility and is becoming a problem for C API users that want to use all the LLVM attributes. In order to help moving away from LLVMAttribute in a smooth manner, this diff introduces LLVMGetAttrKindIDInContext, which can be used instead of the enum values.
Reviewers: Wallbraker, whitequark, joker.eph, echristo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18749
llvm-svn: 266257
TableGen checks at compile time that for scheduling models with
"CompleteModel = 1" one of the following holds:
- Is marked with the hasNoSchedulingInfo flag
- The instruction is a subclass of Sched
- There are InstRW definitions in the scheduling model
Typical steps necessary to complete a model:
- Ensure all pseudo instructions that are expanded before machine
scheduling (usually everything handled with EmitYYY() functions in
XXXTargetLowering) are marked with hasNoSchedulingInfo.
- If a CPU does not support some instructions mark the corresponding
resource unsupported: "WriteRes<WriteXXX, []> { let Unsupported = 1; }".
- Add missing scheduling information.
Differential Revision: http://reviews.llvm.org/D17747
llvm-svn: 262384
This introduces a new flag that indicates that a specific instruction
will never be present when the MachineScheduler runs and therefore needs
no scheduling information.
This is in preparation for an upcoming commit which checks completeness
of a scheduling model when tablegen runs.
Differential Revision: http://reviews.llvm.org/D17728
llvm-svn: 262383
Previously, if an actual instruction had one of the optional operands then the other optional operands listed before it also had to be present.
For example, the instruction v_fract_f32 v0, v1, mul:2 has one optional operand - OMod - and does not have the optional operand clamp. Previously this was not allowed because clamp is listed before omod in the AsmString:
string AsmString = "v_fract_f32$vdst, $src0_modifiers$clamp$omod";
Making this work required some hacks (both the OMod and Clamp match classes have the same PredicateMethod).
Now, if MatchInstructionImpl meets a formal optional operand that is not present in the actual instruction, it skips this formal operand and tries to match the current actual operand with the next formal one.
Patch by: Sam Kolton
Review: http://reviews.llvm.org/D17568
[AMDGPU] Assembler: Check immediate types for several optional operands in predicate methods
With this change you should place optional operands in the order specified by the asm string:
clamp -> omod
offset -> glc -> slc -> tfe
Fixes for several tests.
Depends on D17568
Patch by: Sam Kolton
Review: http://reviews.llvm.org/D17644
llvm-svn: 262314
Currently you can't specify node properties like commutativity on
a PatFrag. If you want to create a PatFrag on a commutative node
with a hasOneUse predicate, this enables you to specify that the
PatFrag is also commutable.
llvm-svn: 260404
Summary:
This makes it possible to specify some operands as optional to the AsmMatcher.
Setting this field to true will prevent the AsmMatcher from emitting
'too few operands' errors when there are missing optional operands.
Reviewers: olista01, ab
Subscribers: nhaustov, arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D15755
llvm-svn: 259913
Summary:
This patch adds a reserve call to an expensive function
(`llvm::LoadIntrinsics`), and may fix a few other low hanging
performance fruit (I've put them in comments for now, so we can
discuss).
**Motivation:**
As I'm sure other developers do, when I build LLVM, I build the entire
project with the same config (`Debug`, `MinSizeRel`, `Release`, or
`RelWithDebInfo`). However, the `Debug` config also builds llvm-tblgen
in `Debug` mode. Later build steps that run llvm-tblgen then can
actually be the slowest steps in the entire build. Nobody likes slow
builds.
Reviewers: rnk, dblaikie
Differential Revision: http://reviews.llvm.org/D16832
Patch by Alexander G. Riccio
llvm-svn: 259683
Summary:
This adds a new attribute which targets can set in TableGen that causes a function to be generated which matches alternative register names. This is very similar to `ShouldEmitMatchRegisterName`, except it works on alt names.
This patch is currently used by the out of tree part of the AVR backend. It reduces code duplication greatly, and has the effect that you do not need to hardcode altname to register mappings in C++.
It will not work on targets which have registers which share the same aliases.
Reviewers: stoklund, arsenm, dsanders, hfinkel, vkalintiris
Subscribers: hfinkel, dylanmckay, llvm-commits
Differential Revision: http://reviews.llvm.org/D16312
llvm-svn: 259636
This brings the compile time of Function.cpp from ~40s down to ~4s for
me locally. It also shaves off about 400KB of object file size in a
release+asserts build.
I also realized that the AMDGPU backend does not have any GCC builtin
names to match, so the extra lookup was a no-op. I removed it to silence
a zero-length string table array warning. There should be no functional
change here.
This change really ends the story of PR11951.
llvm-svn: 258897
The AMDGPU backend was the last user of the old StringMatcher
recognition code. Move it over to the new lookupLLVMIntrinsicName
function, which is now improved to handle all of the interesting edge
cases exposed by AMDGPU intrinsic names.
llvm-svn: 258875
Summary:
This patch is provided in preparation for removing autoconf on 1/26. The proposal to remove autoconf on 1/26 was discussed on the llvm-dev thread here: http://lists.llvm.org/pipermail/llvm-dev/2016-January/093875.html
"I felt a great disturbance in the [build system], as if millions of [makefiles] suddenly cried out in terror and were suddenly silenced. I fear something [amazing] has happened."
- Obi Wan Kenobi
Reviewers: chandlerc, grosbach, bob.wilson, tstellarAMD, echristo, whitequark
Subscribers: chfast, simoncook, emaste, jholewinski, tberghammer, jfb, danalbert, srhines, arsenm, dschuff, jyknight, dsanders, joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D16471
llvm-svn: 258861
Step one towards using a simple binary search to lookup intrinsic IDs
instead of our crazy table generated switch+memcmp+startswith code that
makes Function.cpp take about a minute to compile. See PR24785 and
PR11951 for why we should do this.
The X86 backend contains tables that need to be sorted on intrinsic ID,
so reorder those.
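The intended shape of the lookup, as a generic sketch rather than the actual generated code:

  #include <algorithm>
  #include <cstring>
  #include <iterator>

  // Names sorted ahead of time (by TableGen); lookup becomes a binary search
  // instead of a huge switch over memcmp/startswith chains.
  static const char *const SortedIntrinsicNames[] = {
      "llvm.ctlz", "llvm.cttz", "llvm.memcpy", "llvm.memset"};

  int lookupIntrinsicID(const char *Name) {
    auto Less = [](const char *A, const char *B) { return std::strcmp(A, B) < 0; };
    const auto *E = std::end(SortedIntrinsicNames);
    const auto *I = std::lower_bound(std::begin(SortedIntrinsicNames), E, Name, Less);
    if (I != E && std::strcmp(*I, Name) == 0)
      return int(I - std::begin(SortedIntrinsicNames)) + 1; // 1-based ID
    return 0; // not an intrinsic
  }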
llvm-svn: 258757
This is a fix for https://llvm.org/bugs/show_bug.cgi?id=22796.
The previous implementation of ClassInfo::operator< allowed cycles of classes
such that x < y < z < x, meaning that a list of them cannot be correctly
sorted, and the sort order could differ with different standard libraries.
The original implementation sorted classes by ValueName if they were otherwise
equal. This isn't strictly necessary, but some backends seem to accidentally
rely on it. If I reverse this comparison I get 8 test failures spread across
the AArch64, Mips and X86 backends, so I have left it in until those backends
can be fixed.
There was one case in the X86 backend where the observable behaviour of the
assembler is changed by this patch. This was because some of the memory asm
operands were not marked as children of X86MemAsmOperand.
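The fix amounts to keeping the comparison a strict weak ordering with a deterministic tie-breaker, roughly like this sketch (the fields are illustrative):

  #include <string>
  #include <tuple>

  struct ClassInfoSketch {
    int Kind;              // primary ordering criterion
    std::string ValueName; // deterministic tie-breaker
  };

  // Comparing one consistent key tuple cannot produce x < y < z < x cycles,
  // so stable_sort is well defined regardless of the input order.
  inline bool operator<(const ClassInfoSketch &L, const ClassInfoSketch &R) {
    return std::tie(L.Kind, L.ValueName) < std::tie(R.Kind, R.ValueName);
  }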
Differential Revision: http://reviews.llvm.org/D16141
llvm-svn: 258677
The selection process being split into separate passes, we need generic opcodes
to translate the LLVM IR to target independent code.
This patch adds an opcode for addition: G_ADD.
Differential Revision: http://reviews.llvm.org/D15472
llvm-svn: 258333
Adds the corresponding CodeGenInstruction number to each AsmWriterInst. Then write all the operand uniquing loops using the AsmWriterInst array and indices. Then use the CodeGenInstruction index to fill out the OpCodeInfo array.
llvm-svn: 258005
This restores the previous behavior of not including the mnemonic in the classes table for every target that starts instruction lines with the mnemonic. Not only did the table size increase by 1 entry, but the class enum increased in size, which caused every class in the array to increase in size. It also grew the size of the function that parses tokens into classes by a substantial amount.
This adds a new HasMnemonicFirst flag to all AsmParsers. It's set to 1 by default and Hexagon target overrides it to 0.
For the X86 target alone this recovers 324KB of size on the llvm-mc executable.
I believe the current state is still a bad design choice for the Hexagon target as it causes most of the parsing to do a linear search through the entire match table, comparing operands against every instruction until it finds one that works. At least for the other targets we do a binary search based on mnemonic over which to do the linear scan.
llvm-svn: 256669
This reapplies r256277 with two changes:
- In emitFnAttrCompatCheck, change FuncName's type to std::string to fix
a use-after-free bug.
- Remove an unnecessary install-local target in lib/IR/Makefile.
Original commit message for r252949:
Provide a way to specify inliner's attribute compatibility and merging
rules using table-gen. NFC.
This commit adds new classes CompatRule and MergeRule to Attributes.td,
which are used to generate code to check attribute compatibility and
merge attributes of the caller and callee.
rdar://problem/19836465
llvm-svn: 256304
For targets to add their own operand types as needed, as advertised in
Operand's comment, they need to be able to specify an alternate namespace
for OperandType names too. This matches the RegisterOperand class.
llvm-svn: 256299
This reapplies r252990 and r252949. I've added member function getKind
to the Attr classes which returns the enum or string of the attribute.
Original commit message for r252949:
Provide a way to specify inliner's attribute compatibility and merging
rules using table-gen. NFC.
This commit adds new classes CompatRule and MergeRule to Attributes.td,
which are used to generate code to check attribute compatibility and
merge attributes of the caller and callee.
rdar://problem/19836465
llvm-svn: 256277
Part 1 was submitted in http://reviews.llvm.org/D15134.
Changes in this part:
* X86RegisterInfo.td, X86RecognizableInstr.cpp: Add FR128 register class.
* X86CallingConv.td: Pass f128 values in XMM registers or on stack.
* X86InstrCompiler.td, X86InstrInfo.td, X86InstrSSE.td:
Add instruction selection patterns for f128.
* X86ISelLowering.cpp:
When target has MMX registers, configure MVT::f128 in FR128RegClass,
with TypeSoftenFloat action, and custom actions for some opcodes.
Add missed cases of MVT::f128 in places that handle f32, f64, or vector types.
Add TODO comment to support f128 type in inline assembly code.
* SelectionDAGBuilder.cpp:
Fix infinite loop when f128 type can have
VT == TLI.getTypeToTransformTo(Ctx, VT).
* Add unit tests for x86-64 fp128 type.
Differential Revision: http://reviews.llvm.org/D11438
llvm-svn: 255558
AsmWriterEmitter will generate a getRegisterName function with an alternate
register name index as its second argument if the target makes use of them. The
enum of these values is generated in RegisterInfoEmitter. The getRegisterName
generator would assume the namespace could always be found by reading index 1
of the list of AltNameIndices, but this will fail if this list is sorted such
that the NoRegAltName is at index 1. Because this list is sorted by record name
(in CodeGenTarget::ReadRegAltNameIndices), you only run into problems if your
MyTargetRegisterInfo.td defines a single RegAltNameIndex that sorts lexically
before NoRegAltName.
For example, if a target has something like
def AnAltNameIndex : RegAltNameIndex
and defines RegAltNameIndices for some registers then, prior to this change,
AsmWriterEmitter would generate references to
::AnAltNameIndex and ::NoRegAltName
Patch by Alex Bradbury!
llvm-svn: 255344
The Statistical Profiling Extension is an optional extension to
ARMv8.2-A. Since it is an optional extension, I have added the
FeatureSPE subtarget feature to control it. The assembler-visible parts
of this extension are the new "psb csync" instruction, which is
equivalent to "hint #17", and a number of system registers.
Differential Revision: http://reviews.llvm.org/D15021
llvm-svn: 254401
Duplicate a few common definitions between DFAPacketizer.cpp and
DFAPacketizerEmitter.cpp to avoid including files from CodeGen
in TableGen.
llvm-svn: 253820
Extended DFA tablegen to:
- added "-debug-only dfa-emitter" support to llvm-tblgen
- defined CVI_PIPE* resources for the V60 vector coprocessor
- allow specification of multiple required resources
- supports ANDs of ORs
- e.g. [SLOT2, SLOT3], [CVI_MPY0, CVI_MPY1] means:
(SLOT2 OR SLOT3) AND (CVI_MPY0 OR CVI_MPY1)
- added support for combo resources
- allows specifying ORs of ANDs
- e.g. [CVI_XLSHF, CVI_MPY01] means:
(CVI_XLANE AND CVI_SHIFT) OR (CVI_MPY0 AND CVI_MPY1)
- increased DFA input size from 32-bit to 64-bit
- allows for a maximum of 4 AND'ed terms of 16 resources
- supported expressions now include:
expression => term [AND term] [AND term] [AND term]
term => resource [OR resource]*
resource => one_resource | combo_resource
combo_resource => (one_resource [AND one_resource]*)
Author: Dan Palermo <dpalermo@codeaurora.org>
kparzysz: Verified AMDGPU codegen to be unchanged on all llc
tests, except those dealing with instruction encodings.
Reapply the previous patch, this time without circular dependencies.
llvm-svn: 253793
Extended DFA tablegen to:
- added "-debug-only dfa-emitter" support to llvm-tblgen
- defined CVI_PIPE* resources for the V60 vector coprocessor
- allow specification of multiple required resources
- supports ANDs of ORs
- e.g. [SLOT2, SLOT3], [CVI_MPY0, CVI_MPY1] means:
(SLOT2 OR SLOT3) AND (CVI_MPY0 OR CVI_MPY1)
- added support for combo resources
- allows specifying ORs of ANDs
- e.g. [CVI_XLSHF, CVI_MPY01] means:
(CVI_XLANE AND CVI_SHIFT) OR (CVI_MPY0 AND CVI_MPY1)
- increased DFA input size from 32-bit to 64-bit
- allows for a maximum of 4 AND'ed terms of 16 resources
- supported expressions now include:
expression => term [AND term] [AND term] [AND term]
term => resource [OR resource]*
resource => one_resource | combo_resource
combo_resource => (one_resource [AND one_resource]*)
Author: Dan Palermo <dpalermo@codeaurora.org>
kparzysz: Verified AMDGPU codegen to be unchanged on all llc
tests, except those dealing with instruction encodings.
llvm-svn: 253790
We used to have an odd difference between MapVector and SetVector. The map
used a DenseMap, but the set used a SmallSet, which in turn uses a
std::set.
I have changed SetVector to use a DenseSet. If you were depending on the
old behaviour, you can pass an explicit set type or use SmallSetVector.
The common cases for needing to do it are:
* Optimizing for small sets.
* Sets for types not supported by DenseSet.
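For illustration, a minimal sketch of both escape hatches; the element types are arbitrary examples, not from this patch:
#include <set>
#include <string>
#include <vector>
#include "llvm/ADT/SetVector.h"
// Default storage is now a DenseSet keyed on the element:
llvm::SetVector<int *> Ptrs;
// Optimizing for small sets:
llvm::SmallSetVector<int *, 8> FewPtrs;
// A type with no DenseMapInfo: pass an explicit set type to keep std::set.
llvm::SetVector<std::string, std::vector<std::string>, std::set<std::string>>
    Names;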
llvm-svn: 253439
Allowing imprecise lane masks in the case of more than 32 subregister lanes
led to some tricky corner cases, and I need another bugfix for another
one. Instead, I would rather declare lane masks as precise and let tablegen
abort if we do not have enough bits.
This does not affect any in-tree target; even AMDGPU only needs 16 lanes
at the moment. If 32 lanes turn out to be a problem in the future,
then we can easily change the LaneBitmask typedef to uint64_t.
Differential Revision: http://reviews.llvm.org/D14557
llvm-svn: 253279
MCSubtargetInfo in the subclasses into MCTargetAsmParser and define a
member function getSTI.
This is done in preparation for making changes to shrink the size of
MCRelaxableFragment. (see http://reviews.llvm.org/D14346).
llvm-svn: 253124
This reapplies r252949. I've changed the type of FuncName to be
std::string instead of StringRef in emitFnAttrCompatCheck.
Original commit message for r252949:
Provide a way to specify inliner's attribute compatibility and merging
rules using table-gen. NFC.
This commit adds new classes CompatRule and MergeRule to Attributes.td,
which are used to generate code to check attribute compatibility and
merge attributes of the caller and callee.
rdar://problem/19836465
llvm-svn: 252990
rules using table-gen. NFC.
This commit adds new classes CompatRule and MergeRule to Attributes.td,
which are used to generate code to check attribute compatibility and
merge attributes of the caller and callee.
rdar://problem/19836465
llvm-svn: 252949
This is a step towards consolidating some of the information regarding
attributes in a single place.
This patch moves the enum attributes in Attributes.h to the table-gen
file. Additionally, it adds definitions of target independent string
attributes that will be used in follow-up commits by the inliner to
check attribute compatibility.
rdar://problem/19836465
llvm-svn: 252796
There is no need to generate a separate table for the intrinsics' mod/ref behaviour.
It can now be determined purely from function attributes.
Differential Revision: http://reviews.llvm.org/D13917
llvm-svn: 251040
Catchret transfers control from a catch funclet to an earlier funclet.
However, it is not completely clear which funclet the catchret target is
part of. Make this clear by stapling the catchret target's funclet
membership onto the CATCHRET SDAG node.
llvm-svn: 249052
Previously the code added an extra nullptr entry to a static array and then created an ArrayRef with a size one less than the static array. If there were no other entries, the array would just contain the nullptr and the ArrayRef would be created with size 0.
Instead, put the right number of entries in the array and explicitly emit 'None' if the size would be 0. This allows the static array constructor of makeArrayRef to be used.
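A hedged sketch of the pattern described above; the consumer function and the data are hypothetical:
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/None.h"
void takeRegs(llvm::ArrayRef<unsigned> Regs);   // hypothetical consumer
static const unsigned List1[] = {1, 2, 3};
void emit() {
  takeRegs(llvm::makeArrayRef(List1)); // size deduced from the static array
  takeRegs(llvm::None);                // emitted when the list would be empty
}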
llvm-svn: 248244
Summary:
This is the first patch in the series to migrate Triples (which are ambiguous)
to TargetTuples (which aren't).
For the moment, TargetTuple simply passes all requests to the Triple object it
holds. Once it has replaced Triple, it will start to implement the interface in
a more suitable way.
This change makes some changes to the public C++ API. In particular,
InitMCSubtargetInfo(), createMCRelocationInfo(), and createMCSymbolizer()
now take TargetTuples instead of Triples. The other public C++ APIs have
been left as-is for the moment to reduce patch size.
This commit also contains a trivial patch to clang to account for the C++ API
change. Thanks go to Pavel Labath for fixing LLDB for me.
Reviewers: rengolin
Subscribers: jyknight, dschuff, arsenm, rampitec, danalbert, srhines, javed.absar, dsanders, echristo, emaste, jholewinski, tberghammer, ted, jfb, llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D10969
llvm-svn: 247692
Summary:
This is the first patch in the series to migrate Triples (which are ambiguous)
to TargetTuples (which aren't).
For the moment, TargetTuple simply passes all requests to the Triple object it
holds. Once it has replaced Triple, it will start to implement the interface in
a more suitable way.
This change makes some changes to the public C++ API. In particular,
InitMCSubtargetInfo(), createMCRelocationInfo(), and createMCSymbolizer()
now take TargetTuples instead of Triples. The other public C++ APIs have
been left as-is for the moment to reduce patch size.
This commit also contains a trivial patch to clang to account for the C++ API
change.
Reviewers: rengolin
Subscribers: jyknight, dschuff, arsenm, rampitec, danalbert, srhines, javed.absar, dsanders, echristo, emaste, jholewinski, tberghammer, ted, jfb, llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D10969
llvm-svn: 247683
Summary: This fixes a variety of typos in docs, code and headers.
Subscribers: jholewinski, sanjoy, arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D12626
llvm-svn: 247495
Except the changes that defined virtual destructors as =default, because that
ran into problems with GCC 4.7 and overriding methods that weren't noexcept.
llvm-svn: 247298
Summary:
Add the necessary plumbing so that llvm_token_ty can be used as an
argument/return type in intrinsic definitions and correspondingly require
TokenTy in function types. TokenTy is an opaque type that has no target
lowering, but can be used in machine-independent intrinsics. It is
required for the upcoming llvm.eh.padparam intrinsic.
Reviewers: majnemer, rnk
Subscribers: stoklund, llvm-commits
Differential Revision: http://reviews.llvm.org/D12532
llvm-svn: 246651
I locally hit the 255 limit, but a lot of these are redundant: each
predicate coming from a different record was allocated a new number,
even when we already emitted the same code for another predicate.
Instead, re-use numbers and emit the predicate code only once.
This reduces the total text size of *DAGISel.cpp.o by ~1%.
llvm-svn: 246208
After r244870 flush() will only compare two null pointers and return,
doing nothing but wasting run time. The call is not required any more
as the stream and its SmallString are always in sync.
Thanks to David Blaikie for reviewing.
llvm-svn: 244928
preparation for de-coupling the AA implementations.
In order to do this, they had to become fake-scoped using the
traditional LLVM pattern of a leading initialism. These can't be actual
scoped enumerations because they're bitfields, and thus we inherently use
them as integers.
I've also renamed the behavior enums that are specific to reasoning
about the mod/ref behavior of functions when called. This makes it more
clear that they have a very narrow domain of applicability.
I think there is a significantly cleaner API for all of this, but
I don't want to attempt really substantive changes for now; I just
want to refactor these things away from analysis groups, so I'm preserving
the exact original design and just cleaning up the names and style and
lifting them out of the class.
Differential Revision: http://reviews.llvm.org/D10564
llvm-svn: 242963
This patch does the following:
* Fix FIXME on `needsStackRealignment`: it is now shared between multiple targets, implemented in `TargetRegisterInfo`, and isn't `virtual` anymore. This will break out-of-tree targets, silently if they used `virtual` and with a build error if they used `override`.
* Factor out `canRealignStack` as a `virtual` function on `TargetRegisterInfo`, by default only looks for the `no-realign-stack` function attribute.
Multiple targets duplicated the same `needsStackRealignment` code:
- Aarch64.
- ARM.
- Mips almost: had extra `DEBUG` diagnostic, which the default implementation now has.
- PowerPC.
- WebAssembly.
- x86 almost: has an extra `-force-align-stack` option, which the default implementation now has.
The default implementation of `needsStackRealignment` used to just return `false`. My current patch changes the behavior by simply using the above shared behavior. This affects:
- AMDGPU
- BPF
- CppBackend
- MSP430
- NVPTX
- Sparc
- SystemZ
- XCore
- Out-of-tree targets
This is a breaking change! `make check` passes.
The only implementation of the `virtual` function (besides the slight difference in x86) was Hexagon (which did `MF.getFrameInfo()->getMaxAlignment() > 8`), and potentially some out-of-tree targets. Hexagon now uses the default implementation.
`needsStackRealignment` was being overwritten in `<Target>GenRegisterInfo.inc`, to return `false` as the default also did. That was odd and is now gone.
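For out-of-tree maintainers, a hedged sketch of the migration; MyTargetRegisterInfo is a hypothetical class name and the extra condition is a placeholder:
// needsStackRealignment is no longer virtual; customize canRealignStack instead.
bool MyTargetRegisterInfo::canRealignStack(const MachineFunction &MF) const {
  if (!TargetRegisterInfo::canRealignStack(MF))
    return false;   // respects the no-realign-stack attribute by default
  return true;      // plus any target-specific checks
}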
Reviewers: sunfish
Subscribers: aemerson, llvm-commits, jfb
Differential Revision: http://reviews.llvm.org/D11160
llvm-svn: 242727
Summary:
This change is part of a series of commits dedicated to have a single
DataLayout during compilation by using always the one owned by the
module.
This patch is quite boring overall, except for some ugliness in
AsmPrinter, which has a getDataLayout function but has some clients
that use it without a Module (llvm-dsymutil, llvm-dwarfdump), so
some methods now take a DataLayout as a parameter.
Reviewers: echristo
Subscribers: yaron.keren, rafael, llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D11090
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 242386
When FixedLenDecoder matches an input bitpattern of form [01]+ with an
instruction bitpattern of form [01?]+ (where 0/1 are static bits and ? are
mixed/variable bits) it passes the input bitpattern to a specific instruction
decoder method which then makes a final decision whether the bitpattern is a
valid instruction or not. This means the decoder must handle all possible
values of the variable bits which sometimes leads to opcode rewrites in the
decoder method when the instructions are not fully orthogonal.
The patch provides a way for the decoder method to say that when it returns
Fail it does not necessarily mean the bitpattern is invalid, but rather that
the bitpattern is definitely not an instruction that is recognized by the
decoder method. The decoder can then try to match the input bitpattern with
other possible instruction bitpatterns.
For example, this allows solving a situation on AArch64 where the `MSR
(immediate)` instruction has form:
1101 0101 0000 0??? 0100 ???? ???1 1111
but not all values of the ? bits are allowed. The rejected values should be
handled by the `extended MSR (register)` instruction:
1101 0101 000? ???? ???? ???? ???? ????
The decoder will first try to decode an input bitpattern that matches both
bitpatterns as `MSR (immediate)` but currently this puts the decoder method of
`MSR (immediate)` into a situation when it must be able to decode all possible
values of the ? bits, i.e. it would need to rewrite the instruction to `MSR
(register)` when it is not `MSR (immediate)`.
The patch allows specifying that the decoder method cannot determine if the
instruction is valid for all variable values. The decoder method can simply
return Fail when it knows it is definitely not `MSR (immediate)`. The decoder
will then backtrack the decoding and find that it can match the input
bitpattern with the more generic `MSR (register)` bitpattern too.
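A hedged sketch of what a decoder method can now do; the function name and the validity check are hypothetical, and only the return-Fail-and-backtrack behaviour is the point (not a complete translation unit):
static MCDisassembler::DecodeStatus
DecodeMSRImmediate(MCInst &Inst, uint32_t Insn, uint64_t Address,
                   const void *Decoder) {
  if (!isValidMSRImmediateEncoding(Insn)) // hypothetical check of the ? bits
    return MCDisassembler::Fail;          // not "invalid", just "not this one":
                                          // the decoder backtracks to MSR (register)
  // ... build the MSR (immediate) operands here ...
  return MCDisassembler::Success;
}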
Differential Revision: http://reviews.llvm.org/D7174
llvm-svn: 242274
This patch contains only the encoding. Intrinsics and DAG lowering will be in the next patch.
I temporarily removed the old intrinsics test (just to split this patch).
Half types are not covered here.
Differential Revision: http://reviews.llvm.org/D11134
llvm-svn: 242023
Force all creators of `MCSubtargetInfo` to immediately initialize it,
merging the default constructor and the initializer into an initializing
constructor. Besides cleaning up the code a little, this makes it clear
that the initializer is never called again later.
Out-of-tree backends need a trivial change: instead of calling:
auto *X = new MCSubtargetInfo();
InitXYZMCSubtargetInfo(X, ...);
return X;
they should call:
return createXYZMCSubtargetInfoImpl(...);
There's no real functionality change here.
llvm-svn: 241957
Summary:
The target frame lowering's concrete type is always known in RegisterInfo, yet it's only sometimes devirtualized through a static_cast. This change adds an auto-generated static function <Target>GenRegisterInfo::getFrameLowering(const MachineFunction &MF) which does this devirtualization, and uses this function in all targets which can.
This change was suggested by sunfish in D11070 for WebAssembly; I figure that I may as well improve the other targets while I'm here.
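A hedged sketch of the shape of the generated helper; MyTargetFrameLowering stands in for a concrete target's frame-lowering class:
static const MyTargetFrameLowering &getFrameLowering(const MachineFunction &MF) {
  return *static_cast<const MyTargetFrameLowering *>(
      MF.getSubtarget().getFrameLowering());
}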
Subscribers: sunfish, ted, llvm-commits, jfb
Differential Revision: http://reviews.llvm.org/D11093
llvm-svn: 241921
Summary:
Initially, these intrinsics seemed like part of a family of "frame"
related intrinsics, but now I think that's more confusing than helpful.
Initially, the LangRef specified that this would create a new kind of
allocation that would be allocated at a fixed offset from the frame
pointer (EBP/RBP). We ended up dropping that design, and leaving the
stack frame layout alone.
These intrinsics are really about sharing local stack allocations, not
frame pointers. I intend to go further and add an `llvm.localaddress()`
intrinsic that returns whatever register (EBP, ESI, ESP, RBX) is being
used to address locals, which should not be confused with the frame
pointer.
Naming suggestions at this point are welcome, I'm happy to re-run sed.
Reviewers: majnemer, nicholas
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11011
llvm-svn: 241633
represented by uint64_t, this patch replaces these
usages with the FeatureBitset (std::bitset) type.
Differential Revision: http://reviews.llvm.org/D10542
llvm-svn: 241058
This commit implements serialization of the register mask machine
operands. This commit serializes only the call preserved register
masks that are defined by a target, it doesn't serialize arbitrary
register masks.
This commit also extends the TargetRegisterInfo class and TableGen so that
the users of TRI can get the list of all the call preserved register masks and
their names.
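A hedged sketch of how a TRI client might use the new queries; the method names are assumed from this commit and the printing loop is illustrative:
void printRegMaskNames(const MachineFunction &MF, raw_ostream &OS) {
  const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
  ArrayRef<const uint32_t *> Masks = TRI->getRegMasks();  // the masks themselves
  ArrayRef<const char *> Names = TRI->getRegMaskNames();  // parallel name table
  for (unsigned I = 0, E = Masks.size(); I != E; ++I)
    OS << Names[I] << '\n';
}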
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10673
llvm-svn: 240966
Before this we were producing a TargetExternalSymbol from a MCSymbol.
That meant extracting the symbol name and fetching the symbol again
down the pipeline.
This patch adds a DAG.getMCSymbol that lets the MCSymbol pass unchanged on the
DAG.
Doing so removes the need for MO_NOPREFIX and fixes the root cause of pr23900,
allowing r240130 to be committed again.
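A hedged sketch of the new helper in use; the wrapper function is illustrative:
SDValue wrapSymbol(SelectionDAG &DAG, MCSymbol *Sym, EVT PtrVT) {
  // The MCSymbol travels on the DAG directly; no name extraction and re-lookup.
  return DAG.getMCSymbol(Sym, PtrVT);
}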
llvm-svn: 240300
Summary:
This instruction encodes a loading operation that may fault, and a label
to branch to if the load page-faults. The locations of potentially
faulting loads and their "handler" destinations are recorded in a
FaultMap section, meant to be consumed by LLVM's clients.
Nothing generates FAULTING_LOAD_OP instructions yet, but they will be
used in a future change.
The documentation (FaultMaps.rst) needs improvement and I will update
this diff with a more expanded version shortly.
Depends on D10196
Reviewers: rnk, reames, AndyAyers, ab, atrick, pgavlin
Reviewed By: atrick, pgavlin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10197
llvm-svn: 239740
Summary:
This continues the patch series to eliminate StringRef forms of GNU triples
from the internals of LLVM that began in r239036.
Reviewers: rafael
Reviewed By: rafael
Subscribers: rafael, ted, jfb, llvm-commits, rengolin, jholewinski
Differential Revision: http://reviews.llvm.org/D10311
llvm-svn: 239467
If the type isn't trivially moveable, emplace can skip a potentially
expensive move. It also saves a couple of characters.
Call sites were found with the ASTMatcher + some semi-automated cleanup.
memberCallExpr(
argumentCountIs(1), callee(methodDecl(hasName("push_back"))),
on(hasType(recordDecl(has(namedDecl(hasName("emplace_back")))))),
hasArgument(0, bindTemporaryExpr(
hasType(recordDecl(hasNonTrivialDestructor())),
has(constructExpr()))),
unless(isInTemplateInstantiation()))
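A hedged before/after illustration of the rewrite the matcher finds; the container and element types are arbitrary:
#include <string>
#include <utility>
#include <vector>
void example() {
  std::vector<std::pair<unsigned, std::string>> Rows;
  // Before: builds a temporary pair, then moves/copies it into the vector.
  Rows.push_back(std::make_pair(1u, std::string("one")));
  // After: constructs the element in place; cheaper when the move isn't trivial.
  Rows.emplace_back(2u, "two");
}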
No functional change intended.
llvm-svn: 238602
Fixes PR23455, where, when TableGen generates the matcher from the
AsmString, it splits "cmp${cc}ss" into tokens, and the "ss" suffix
is recognized as the SS register.
I can't think of a situation where that's a feature, not a bug, hence:
when a token is "isolated", i.e., it is followed and preceded by
separators, it shouldn't be parsed as a register.
Differential Revision: http://reviews.llvm.org/D9844
llvm-svn: 238536
shrinking the Size and NumDefs fields to offset the size growth, and
reordering the fields to preserve a good packing.
This is necessary in the short term for adding a convergent flag, and
simultaneously future-proofs us against more flags being added in the
future.
llvm-svn: 238445
If there is an InstAlias defined for an instruction that had a custom
converter (AsmMatchConverter), then when the alias is matched,
the custom converter will be used rather than the converter generated
by the InstAlias.
This patch adds the UseInstAsmMatchConverter field to the InstAlias
class, which allows you to override this behavior and force the
converter generated by the InstAlias to be used.
This is required for some future improvements to the R600 assembler.
Differential Revision: http://reviews.llvm.org/D9083
llvm-svn: 238210
Previously, subtarget features were a bitfield with the underlying type being uint64_t.
Since several targets (X86 and ARM, in particular) have hit or were very close to hitting this bound, this switches the features to use a bitset.
No functional change.
The first several times this was committed (e.g. r229831, r233055), it caused several buildbot failures.
Apparently the reason for most failures was both clang and gcc's inability to deal with large numbers (> 10K) of bitset constructor calls in tablegen-generated initializers of instruction info tables.
This should now be fixed.
llvm-svn: 238192
in POWER8:
vadduqm
vaddeuqm
vaddcuq
vaddecuq
vsubuqm
vsubeuqm
vsubcuq
vsubecuq
In addition to adding the instructions themselves, it also adds support for the
v1i128 type for intrinsics (Intrinsics.td, Function.cpp, and
IntrinsicEmitter.cpp).
http://reviews.llvm.org/D9081
llvm-svn: 238144
Previously, subtarget features were a bitfield with the underlying type being uint64_t.
Since several targets (X86 and ARM, in particular) have hit or were very close to hitting this bound, this switches the features to use a bitset.
No functional change.
The first two times this was committed (r229831, r233055), it caused several buildbot failures.
At least some of the ARM and MIPS ones were due to gcc/binutils issues, and should now be fixed.
llvm-svn: 237234
cleanups.
Also, change code in tablegen which printed a message and then called
"exit(1)" to use PrintFatalError, instead.
This fixes instances where an empty output file was left behind after
a failed tablegen invocation, which would confuse subsequent ninja
runs into not attempting to rebuild.
Differential Revision: http://reviews.llvm.org/D9608
llvm-svn: 237058
The v1i128 type is needed for the quadword add/subtract instructions introduced
in POWER8. Furthermore, the PowerPC ABI specifies that parameters of type v1i128
are to be passed in a single vector register, while parameters of type i128 are
passed in pairs of GPRs. Thus, it is necessary to be able to differentiate
between v1i128 and i128 in LLVM.
http://reviews.llvm.org/D8564
llvm-svn: 235198
The patch is generated using clang-tidy misc-use-override check.
This command was used:
tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py \
-checks='-*,misc-use-override' -header-filter='llvm|clang' \
-j=32 -fix -format
http://reviews.llvm.org/D8925
llvm-svn: 234679
Summary:
The loop which emits AssemblerPredicate conditions also links them together by emitting a '&&'.
If the 1st predicate is not an AssemblerPredicate, while the 2nd one is, nothing gets emitted for the 1st one, but we still emit the '&&' because of the 2nd predicate.
This generated code looks like "( && Cond2)" and is invalid.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: dsanders, llvm-commits
Differential Revision: http://reviews.llvm.org/D8294
llvm-svn: 234312
Specify an allocation order with a register class. This is used by register
allocators with a greedy heuristic. This is useful as it is sometimes
beneficial to color more constrained classes first.
Differential Revision: http://reviews.llvm.org/D8626
llvm-svn: 233743
per-function subtarget.
Currently, code-gen passes the default or generic subtarget to the constructors
of MCInstPrinter subclasses (see LLVMTargetMachine::addPassesToEmitFile), which
enables some targets (AArch64, ARM, and X86) to change their instprinter's
behavior based on the subtarget feature bits. Since the backend can now use
different subtargets for each function, instprinter has to be changed to use the
per-function subtarget rather than the default subtarget.
This patch takes the first step towards enabling instprinter to change its
behavior based on the per-function subtarget. It adds a bit "PassSubtarget" to
AsmWriter which tells table-gen to pass a reference to MCSubtargetInfo to the
various print methods table-gen auto-generates.
I will follow up with changes to instprinters of AArch64, ARM, and X86.
llvm-svn: 233411
This reverts commit r233055.
It still causes buildbot failures (gcc running out of memory on several platforms, and a self-host failure on arm), although less than the previous time.
llvm-svn: 233068
Previously, subtarget features were a bitfield with the underlying type being uint64_t.
Since several targets (X86 and ARM, in particular) have hit or were very close to hitting this bound, this switches the features to use a bitset.
No functional change.
The first time this was committed (r229831), it caused several buildbot failures.
At least some of the ARM ones were due to gcc/binutils issues, and should now be fixed.
Differential Revision: http://reviews.llvm.org/D8542
llvm-svn: 233055
This is needed for AVX512 masked scatter/gather support.
The R600 change is necessary to remove a hack that was working around the lack of multiple results.
llvm-svn: 232798
Some subregisters exist only to indicate different access sizes, while not
providing any way to actually divide the register up into multiple
disjoint parts. Avoid tracking subregister liveness in these cases as it
is not beneficial.
Differential Revision: http://reviews.llvm.org/D8429
llvm-svn: 232695
When calculating the lanemask of a register class we have to include the
masks of subregisters supported by any of the class members, not just
the ones supported by all class members.
This fixes problems when coalescing towards a subclass with additional
subregisters available.
The attached testcase works fine as is, but does crash if you enable
subregister liveness on x86 without this change applied.
llvm-svn: 232652
Now that SmallString is a first-class citizen, most SmallString::str()
calls are not required. This patch removes a whole bunch of them, yet
there are lots more.
There are two use cases where str() is really needed:
1) To use one of StringRef member functions which is not available in
SmallString.
2) To convert to std::string, as StringRef implicitly converts while
SmallString does not. We may wish to change this, but it may introduce
ambiguity.
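A hedged sketch of both points; takeRef is a hypothetical consumer:
#include <string>
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/StringRef.h"
void takeRef(llvm::StringRef S);          // hypothetical consumer
void example() {
  llvm::SmallString<64> Buf;
  takeRef(Buf);                           // implicit conversion; no .str() needed
  std::string Owned = Buf.str();          // case 2: goes through StringRef
}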
llvm-svn: 232622
clang-cl would warn that this value is not representable in 'int':
enum { FeatureX = 1ULL << 31 };
All MS enums are 'ints' unless otherwise specified, so we have to use an
explicit type. The AMDGPU target just hit 32 features, triggering this
warning.
Now that we have C++11 strong enum types, we can also eliminate the
'const uint64_t' codepath from tablegen and just use 'enum : uint64_t'.
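A hedged before/after sketch; the feature names are made up:
#include <cstdint>
// Before: an unscoped enum with no base type is 'int' for MSVC, so bit 31 warns.
// enum { FeatureX = 1ULL << 31 };
// After: spell out the 64-bit underlying type.
enum : uint64_t { FeatureX = 1ULL << 31, FeatureY = 1ULL << 32 };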
llvm-svn: 231697
This is what all the targets check for and is consistent with the
initialized value of MissingFeatures, which is sometimes assigned
to ErrorInfo.
llvm-svn: 231397
This should help with the AVX512 masked gather changes Elena is working on. This patch is derived from some of the changes Elena made to tablegen, but modified by me to support an arbitrary number of results.
llvm-svn: 231357
Accidentally committed a few more of these cleanup changes than
intended. Still breaking these out & tidying them up.
This reverts commit r231135.
llvm-svn: 231136
There doesn't seem to be any need to assert that iterator assignment is
between iterators over the same node - if you want to reuse an iterator
variable to iterate another node, that's perfectly acceptable. Just
don't mix comparisons between iterators into disjoint sequences, as
usual.
llvm-svn: 231135
All of the cases were just appending from random access iterators to a
vector. Using insert/append can grow the vector to the perfect size
directly and moves the growing out of the loop. No functional change
intended.
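A hedged sketch of the typical rewrite; the types are illustrative:
#include <vector>
#include "llvm/ADT/SmallVector.h"
void copyAll(const std::vector<int> &In, llvm::SmallVectorImpl<int> &Out) {
  // Before: for (int V : In) Out.push_back(V);   // may regrow several times
  Out.append(In.begin(), In.end());               // grows once to the exact size
}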
llvm-svn: 230845
The keys of the map are unique by pointer address, so there's no need
to use the llvm::less comparator. This allows us to use DenseMap
instead, which reduces tblgen time by 20% on my stress test.
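A hedged sketch of the container swap; the mapped type is illustrative:
#include "llvm/ADT/DenseMap.h"
#include "llvm/TableGen/Record.h"
struct Info { unsigned Order; };
// Before: std::map keyed by Record* with an explicit comparator (ordering unused).
// After: DenseMap hashes the pointer value directly.
llvm::DenseMap<const llvm::Record *, Info> InfoByRecord;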
llvm-svn: 230769
Gather and scatter instructions additionally write to one of the source operands - the mask register.
In this case Gather has 2 destination values - the loaded value and the mask.
Until now we did not support a code-gen pattern for gather - the instruction was generated from
the intrinsic only and the machine node was hardcoded.
When we introduce the masked_gather node, we need to select the instruction automatically,
in the standard way.
I added a flag "hasTwoExplicitDefs" that allows handling 2 destination operands.
(Some code in the X86InstrFragmentsSIMD.td is commented out, just to split one big
patch in many small patches)
llvm-svn: 230471
Everyone except R600 was manually passing the length of a static array
at each callsite, calculated in a variety of interesting ways. Far
easier to let ArrayRef handle that.
There should be no functional change, but out of tree targets may have
to tweak their calls as with these examples.
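A hedged sketch of the call-site change; the consumer function is hypothetical:
#include "llvm/ADT/ArrayRef.h"
void initTable(llvm::ArrayRef<unsigned> Entries);   // hypothetical consumer
static const unsigned Table[] = {1, 2, 3};
void example() {
  // Before: initTable(Table, array_lengthof(Table));  // or sizeof tricks, etc.
  initTable(Table);   // ArrayRef deduces the length from the static array
}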
llvm-svn: 230118
Previously, subtarget features were a bitfield with the underlying type being uint64_t.
Since several targets (X86 and ARM, in particular) have hit or were very close to hitting this bound, this switches the features to use a bitset.
No functional change.
Differential Revision: http://reviews.llvm.org/D7065
llvm-svn: 229831
Gather and Scatter are newly introduced intrinsics, coming after the recently implemented masked load and store.
This is the first patch for Gather and Scatter intrinsics. It includes only the syntax, parsing and verification.
Gather and Scatter intrinsics allow performing multiple memory accesses (read/write) in one vector instruction.
The intrinsics are not target specific and will have the following syntax:
Gather:
declare <16 x i32> @llvm.masked.gather.v16i32(<16 x i32*> <vector of ptrs>, i32 <alignment>, <16 x i1> <mask>, <16 x i32> <passthru>)
declare <8 x float> @llvm.masked.gather.v8f32(<8 x float*><vector of ptrs>, i32 <alignment>, <8 x i1> <mask>, <8 x float><passthru>)
Scatter:
declare void @llvm.masked.scatter.v8i32(<8 x i32><vector value to be stored> , <8 x i32*><vector of ptrs> , i32 <alignment>, <8 x i1> <mask>)
declare void @llvm.masked.scatter.v16i32(<16 x i32> <vector value to be stored> , <16 x i32*> <vector of ptrs>, i32 <alignment>, <16 x i1><mask> )
Vector of ptrs - a set of source/destination addresses, to load/store the value.
Mask - switches on/off vector lanes to prevent memory access for switched-off lanes.
The vector of ptrs, value and mask should have the same vector width.
These are code examples where gather / scatter should be used and will allow function vectorization
;void foo1(int * restrict A, int * restrict B, int * restrict C) {
; for (int i=0; i<SIZE; i++) {
; A[i] = B[C[i]];
; }
;}
;void foo3(int * restrict A, int * restrict B) {
; for (int i=0; i<SIZE; i++) {
; A[B[i]] = i+5;
; }
;}
Tests will come in the following patches, with CodeGen and Vectorizer.
http://reviews.llvm.org/D7433
llvm-svn: 228521
Similar to the C++14 void specializations of these templates, useful as
a stop-gap until LLVM switches to '14.
Example use-cases in tblgen because I saw some functors that looked like
they could be simplified/refactored.
Reviewers: dexonsmith
Differential Revision: http://reviews.llvm.org/D7324
llvm-svn: 227828
The hot path through this region of code does lots of batch inserts into sets. By storing them as sorted arrays, we can defer the sorting to the end of the batch, which is dramatically more efficient. This reduces tblgen runtime by 25% on my worst-case target.
llvm-svn: 227682
This is a continuation of my prior work to move some of the inner workings of CodeGenRegister to use bit vectors when computing register units. This is highly beneficial to TableGen runtime on targets with large, dense register files. This patch represents a ~40% runtime reduction over and above my earlier improvement on a stress test of this case.
llvm-svn: 227678
For target descriptions with very large and very dense register files, TableGen
can take an extremely long time to run. This change makes a dent in that (~15%
in my measurements) by accelerating the single hottest operation with better data
structures.
I believe there's still a lot of room to make this even faster with more global
changes that require replacing some of the existing data structures in this area
with bit vectors, but that's a more involved change and I wanted to get this
simpler improvement in first.
llvm-svn: 227562
derived classes.
Since global data alignment, layout, and mangling is often based on the
DataLayout, move it to the TargetMachine. This ensures that global
data is going to be layed out and mangled consistently if the subtarget
changes on a per function basis. Prior to this all targets(*) have
had subtarget dependent code moved out and onto the TargetMachine.
*One target hasn't been migrated as part of this change: R600. The
R600 port has, as a subtarget feature, the size of pointers and
this affects global data layout. I've currently hacked in a FIXME
to enable progress, but the port needs to be updated to either pass
the 64-bitness to the TargetMachine, or fix the DataLayout to
avoid subtarget dependent features.
llvm-svn: 227113
Specifically, gc.result benefits from this greatly. Instead of:
gc.result.int.*
gc.result.float.*
gc.result.ptr.*
...
We now have a gc.result.* that can specialize to literally any type.
Differential Revision: http://reviews.llvm.org/D7020
llvm-svn: 226857
This patch was generated by a clang tidy checker that is being open sourced.
The documentation of that checker is the following:
/// The emptiness of a container should be checked using the empty method
/// instead of the size method. It is not guaranteed that size is a
/// constant-time function, and it is generally more efficient and also shows
/// clearer intent to use empty. Furthermore some containers may implement the
/// empty method but not implement the size method. Using empty whenever
/// possible makes it easier to switch to another container in the future.
Patch by Gábor Horváth!
llvm-svn: 226161
This adds support for creating an InstAlias with a negative immediate, i.e.:
def NOT : InstAlias<"not $dst, $src", (XORI GR32:$dst, GR32:$src, -1)>;
by resolving this problem:
RISCVGenAsmMatcher.inc:95:11: error: expected '= constant-expression' or end of enumerator definition
CVT_imm_-1,
^^^^^^^^^^
Patch by Jordy Potman, thanks!
llvm-svn: 226073
These intrinsics allow multiple functions to share a single stack
allocation from one function's call frame. The function with the
allocation may only perform one allocation, and it must be in the entry
block.
Functions accessing the allocation call llvm.recoverframeallocation with
the function whose frame they are accessing and a frame pointer from an
active call frame of that function.
These intrinsics are very difficult to inline correctly, so the
intention is that they be introduced rarely, or at least very late
during EH preparation.
Reviewers: echristo, andrew.w.kaylor
Differential Revision: http://reviews.llvm.org/D6493
llvm-svn: 225746
This adds two new fields to the RegisterOperand TableGen class:
string OperandNamespace = "MCOI";
string OperandType = "OPERAND_REGISTER";
These fields can be used to specify a target specific operand type,
which will be stored in the OperandType member of the MCOperandInfo
object.
This can be useful for targets that need to store some extra information
about operands that cannot be expressed using the target independent
types. For example, in the R600 backend, there are operands which
can take either registers or immediates and it is convenient to be able
to specify this in the TableGen definitions.
llvm-svn: 225661
Requires new AsmParserOperand types that detect 16-bit and 32/64-bit mode so that we choose the right instruction based on default sizing without predicates. This is necessary since predicates mess up the disassembler table building.
llvm-svn: 225256
This is necessary to allow the disassembler to be able to handle AdSize32 instructions in 64-bit mode when address size prefix is used.
Eventually we should probably also support 'addr32' and 'addr16' in the assembler to override the address size on some of these instructions. But for now we'll just use special operand types that will lookup the current mode size to select the right instruction.
llvm-svn: 225075
This removes a hardcoded list of instructions in the CodeEmitter. Eventually I intend to remove the predicates on the affected instructions since in any given mode two of them are valid if we supported addr32/addr16 prefixes in the assembler.
llvm-svn: 224809
An instruction alias defined with InstAlias and an optional operand in the
middle of the AsmString field, "..${a} <operands>", would get the final
"}" printed in the instruction disassembly. This wouldn't happen if the optional
operand appeared as the last item in the AsmString which is how the current
backends avoided the problem.
There don't appear to be any tests for this part of Tablegen but it passes the
pre-commit tests. Manually tested the change by enabling the generic alias
printer in the ARM backend and checking the output.
Differential Revision: http://reviews.llvm.org/D6529
llvm-svn: 224348
On X86, the Intel asm parser tries to match all memory operand sizes when
none is explicitly specified. For LEA, which doesn't really have a memory
operand (just a pointer one), this results in multiple successful matches,
one for each memory size. There's no error because it's same opcode, so
really, it's just one match. However, the tablegen'd matcher function
adds opcode/operands to the passed MCInst, and this results in multiple
duplicated operands.
This commit clears the MCInst in the tablegen'd matcher function.
We sometimes clear it when the match failed, so there's no expectation of
keeping the previous content anyway.
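A hedged sketch of the idea; the helper is only illustrative of what the generated matcher now does before each candidate conversion:
#include "llvm/MC/MCInst.h"
void beginCandidate(llvm::MCInst &Inst, unsigned Opcode) {
  Inst.clear();            // the fix: drop operands added by a previous candidate
  Inst.setOpcode(Opcode);  // then append this candidate's operands exactly once
}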
Differential Revision: http://reviews.llvm.org/D6670
llvm-svn: 224347
Clang's static analyzer found several potential cases of undefined
behavior, use of un-initialized values, and potentially null pointer
dereferences in tablegen, Support, MC, and ADT. This cleans them up
with specific assertions on the assumptions of the code.
llvm-svn: 224154
Let tablegen compute the combination of subregister lanemasks for all
subregisters in a register/register class. This is preparation for further
work on subregister allocation.
llvm-svn: 223873
I'm recommiting the codegen part of the patch.
The vectorizer part will be send to review again.
Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
<16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.
http://reviews.llvm.org/D6191
llvm-svn: 223348
This complicates a few algorithms due to not having random access, but
not by a huge degree I don't think (open to debate/design
discussion/etc).
llvm-svn: 223261
This is the second patch in a small series. This patch contains the MachineInstruction and x86-64 backend pieces required to lower Statepoints. It does not include the code to actually generate the STATEPOINT machine instruction and as a result, the entire patch is currently dead code. I will be submitting the SelectionDAG parts within the next 24-48 hours. Since those pieces are by far the most complicated, I wanted to minimize the size of that patch. That patch will include the tests which exercise the functionality in this patch. The entire series can be seen as one combined whole in http://reviews.llvm.org/D5683.
The STATEPOINT pseudo node is generated after all gc values are explicitly spilled to stack slots. The purpose of this node is to wrap an actual call instruction while recording the spill locations of the meta arguments used for garbage collection and other purposes. The STATEPOINT is modeled as modifying all of those locations to prevent backend optimizations from forwarding the value from before the STATEPOINT to after the STATEPOINT. (Doing so would break relocation semantics for collectors which wish to relocate roots.)
The implementation of STATEPOINT is closely modeled on PATCHPOINT. Eventually, much of the code in this patch will be removed. The long term plan is to merge the functionality provided by statepoints and patchpoints. Merging their implementations in the backend is likely to be a good starting point.
Reviewed by: atrick, ributzka
llvm-svn: 223085
Order matters for this container, it seems (using a forward_list and
replacing the original push_backs with emplace_fronts caused test
failures). I didn't look too deeply into why.
(& in retrospect, I might go back & change some of the forward_lists I
introduced to deques anyway - since most don't require removal, deque is
a more memory-friendly data structure (moderate locality while not
invalidating pointers))
llvm-svn: 222950
Seems libstdc++ on some buildbots is lacking std::map::emplace, which is
weird... reverting while I look into it.
This reverts commit r222937.
llvm-svn: 222939
Pointers and references to map elements are never invalidated (except on
removal, which isn't used here) so there's no need for the indirection
unless there's polymorphism at work.
A little const correctness had to be fixed, since the indirection
allowed some benign const violations.
llvm-svn: 222937
This reverts commit r222632 (and follow-up r222636), which caused a host
of LNT failures on an internal bot. I'll respond to the commit on the
list with a reproduction of one of the failures.
Conflicts:
lib/Target/X86/X86TargetTransformInfo.cpp
llvm-svn: 222936
Since the elements were not polymorphic, the unique_ptr was only used to
avoid pointer invalidation on container resizes - might as well skip the
indirection and use a container with suitable invalidation semantics.
llvm-svn: 222931
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
<16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.
http://reviews.llvm.org/D6191
llvm-svn: 222632
Primarily done by using SequenceToOffsetTable to reduce the register pressure set tables and then sizing the indices into the tables appropriately. Size a few other table entries based on content as well. Reduces X86RegisterInfo.o by ~9k.
llvm-svn: 222621
This is to be consistent with StringSet and ultimately with the standard
library's associative container insert function.
This led to updating SmallSet::insert to return pair<iterator, bool>,
and then to update SmallPtrSet::insert to return pair<iterator, bool>,
and then to update all the existing users of those functions...
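A hedged sketch of the new return value in use; the helper function is illustrative:
#include "llvm/ADT/SmallPtrSet.h"
bool visitOnce(llvm::SmallPtrSet<const int *, 8> &Seen, const int *P) {
  return Seen.insert(P).second;   // true only the first time P is inserted
}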
llvm-svn: 222334
Having two ways to do this doesn't seem terribly helpful and
consistently using the insert version (which we already have) seems like
it'll make the code easier to understand to anyone working with standard
data structures. (I also updated many references to the Entry's
key and value to use first() and second instead of getKey{Data,Length,}
and get/setValue - for similar consistency)
Also removes the GetOrCreateValue functions so there's less surface area
to StringMap to fix/improve/change/accommodate move semantics, etc.
llvm-svn: 222319
StringSet is still a bit dodgy in that it exposes the raw iterator of
the StringMap parent, which exposes the weird detail that StringSet
actually has a 'value'... but anyway, this is useful for a handful of
clients that want to reference the newly inserted/persistent string data
in the StringSet/Map/Entry/thing.
llvm-svn: 222302
This reverts commit r222183.
Broke on the MSVC buildbots due to MSVC not producing default move
operations - I'd fix it immediately but just broke my build system a
bit, so backing out until I have a chance to get everything going again.
llvm-svn: 222187
The next step is to actually use unique_ptr in TreePatternNode's
Children vector. That will be more intrusive, and may not work,
depending on exactly how these things are handled (I have a bad
suspicion things are shared more than they should be, making this more
DAG than tree - but if it's really a tree, unique_ptr should suffice)
llvm-svn: 222183
Indices into the table are stored in each MCRegisterClass instead of a pointer. A new method, getRegClassName, is added to MCRegisterInfo and TargetRegisterInfo to lookup the string in the table.
llvm-svn: 222118
based on instruction complexity
The order that tablegen fast-isel instruction code is generated is
currently based on the text of the predicate (using string
less-than). This patch changes this to instead use the instruction
complexity. Because the complexities are not unique a C++ multimap is
used instead of a map.
This fixes the problem where code with no predicate always comes out
first (the empty string always compares as less than all other
strings) thus making the code with predicates dead code. See the FMUL
code in PPCFastISel.cpp for an example. It also more closely matches
the normal codegen ordering. Some error checking in the tablegen
fast-isel code is fixed as well.
Patch by Bill Seurer.
llvm-svn: 222038
The problem is mostly that variadic output instructions
aren't handled, so it is rejected for having an inconsistent
number of operands, and then the right number of operands
isn't emitted.
llvm-svn: 221117
Summary:
CustomCallingConv is simply a CallingConv that tablegen should not generate the
implementation for. It allows regular CallingConv's to delegate to these custom
functions. This is (currently) necessary for Mips and we cannot use CCCustom
without having to adapt to the different API that CCCustom uses.
This brings us a bit closer to being able to remove
MipsCC::analyzeCallOperands and MipsCC::analyzeFormalArguments in favour of
the common implementation.
No functional change to the targets.
Depends on D3341
Reviewers: vmedic
Reviewed By: vmedic
Subscribers: vmedic, llvm-commits
Differential Revision: http://reviews.llvm.org/D5965
llvm-svn: 221052
FastISel has a fixed set of virtual functions that are overridden by the
tablegen-generated code for each target. These functions are distinguished by
the kinds of operands, e.g., register + immediate = "ri". The FastISel emitter
has been blindly emitting functions with different combinations of operand
kinds, even for combinations that are completely unused by FastISel, e.g.,
"fastEmit_rrr". Change to filter out functions that will be irrelevant for
FastISel and do not bother generating the code for them. Also add explicit
"override" keywords for the virtual functions that are overridden.
llvm-svn: 218838
No functionality change intended.
This implements Elena's idea to put the new additionalOperand outside the
switch to cover all cases
(http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140929/237763.html).
Note: the only nontrivial change is in MRMSrcMemFrm. This requires an inclusive
interval of [2, 4] because we have prefix-dependent *optional* immediate
operand.
llvm-svn: 218790
Summary:
The N32/N64 ABI's require that structs passed in registers are laid out
such that spilling the register with 'sd' places the struct at the lowest
address. For little endian this is trivial but for big-endian it requires
that structs are shifted into the upper bits of the register.
We also require that structs passed in registers have the 'inreg'
attribute for big-endian N32/N64 to work correctly. This is because the
tablegen-erated calling convention implementation only has access to the
lowered form of struct arguments (one or more integers of up to 64-bits
each) and is unable to determine the original type.
Reviewers: vmedic
Reviewed By: vmedic
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5286
llvm-svn: 218451
parsing (and latent bug in the instruction definitions).
This is effectively a revert of r136287 which tried to address
a specific and narrow case of immediate operands failing to be accepted
by x86 instructions with a pretty heavy hammer: it introduced a new kind
of operand that behaved differently. All of that is removed with this
commit, but the test cases are both preserved and enhanced.
The core problem that r136287 and this commit are trying to handle is
that gas accepts both of the following instructions:
insertps $192, %xmm0, %xmm1
insertps $-64, %xmm0, %xmm1
These will encode to the same byte sequence, with the immediate
occupying an 8-bit entry. The first form was fixed by r136287 but that
broke the prior handling of the second form! =[ Ironically, we would
still emit the second form in some cases and then be unable to
re-assemble the output.
The reason why the first instruction failed to be handled is because
prior to r136287 the operands were marked 'i32i8imm' which forces them to
be sign-extendable. Clearly, that won't work for 192 in a single byte.
However, making them zero-extended or "unsigned" doesn't really address
the core issue either because it breaks negative immediates. The correct
fix is to make these operands 'i8imm' reflecting that they can be either
signed or unsigned but must be 8-bit immediates. This patch backs out
r136287 and then changes those places as well as some others to use
'i8imm' rather than one of the extended variants.
Naturally, this broke something else. The custom DAG nodes had to be
updated to have a much more accurate type constraint of an i8 node, and
a bunch of Pat immediates needed to be specified as i8 values.
The fallout didn't end there though. We also then ceased to be able to
match the instruction-specific intrinsics to the instructions so
modified. Digging, this is because they too used i32 rather than i8 in
their signature. So I've also switched those intrinsics to i8 arguments
in line with the instructions.
In order to make the intrinsic adjustments of course, I also had to add
auto upgrading for the intrinsics.
I suspect that the intrinsic argument types may have led everything down
this rabbit hole. Pretty happy with the result.
llvm-svn: 217310
This is the final round of renaming. This changes tblgen to emit lower-case
function names for FastEmitInst_* and FastEmit_*, and updates all its uses
in the source code.
Reviewed by Eric
llvm-svn: 217075
This removes static initializers from the backends which generate this data, and also makes this struct match the other Tablegen generated structs in behaviour
Reviewed by Andy Trick and Chandler C
llvm-svn: 216919
This patch adds a new property: isInsertSubreg and the related target hooks:
TargetInstrInfo::getInsertSubregInputs and
TargetInstrInfo::getInsertSubregLikeInputs to specify that a target specific
instruction is a (kind of) INSERT_SUBREG.
The approach is similar to r215394.
<rdar://problem/12702965>
llvm-svn: 216139
This patch adds a new property: isExtractSubreg and the related target hooks:
TargetInstrInfo::getExtractSubregInputs and
TargetInstrInfo::getExtractSubregLikeInputs to specify that a target specific
instruction is a (kind of) EXTRACT_SUBREG.
The approach is similar to r215394.
<rdar://problem/12702965>
llvm-svn: 216130
ARM in particular is getting dangerously close to exceeding 32 bits worth of
possible subtarget features. When this happens, various parts of MC start to
fail inexplicably as masks get truncated to "unsigned".
Mostly just refactoring at present, and there's probably no way to test.
llvm-svn: 215887
Add header guards to files that were missing guards. Remove #endif comments
as they don't seem common in LLVM (we can easily add them back if we decide
they're useful)
Changes made by clang-tidy with minor tweaks.
llvm-svn: 215558
This patch adds a new property: isRegSequence and the related target hooks:
TargetInstrInfo::getRegSequenceInputs and
TargetInstrInfo::getRegSequenceLikeInputs to specify that a target specific
instruction is a (kind of) REG_SEQUENCE.
<rdar://problem/12702965>
llvm-svn: 215394
be deleted. This will be reapplied as soon as possible and before
the 3.6 branch date at any rate.
Approved by Jim Grosbach, Lang Hames, Rafael Espindola.
This reverts commits r215111, 215115, 215116, 215117, 215136.
llvm-svn: 215154
I am sure we will be finding bits and pieces of dead code for years to
come, but this is a good start.
Thanks to Lang Hames for making MCJIT a good replacement!
llvm-svn: 215111
to get the subtarget and that's accessible from the MachineFunction
now. This helps clear the way for smaller changes where we getting
a subtarget will require passing in a MachineFunction/Function as
well.
llvm-svn: 214988
This is useful for cases when stand-alone patterns are preferred to the
patterns included in the instruction definitions. Instead of requiring
that stand-alone patterns set a larger AddedComplexity value, which
can be confusing to new developers, this allows us to reduce the
complexity of the included patterns to achieve the same result.
There will be test cases for this added to the R600 backend in a
future commit.
llvm-svn: 214466
This allows assembling the two new instructions, encls and enclu for the
SKX processor model.
Note the diffs are bigger than one might think, but to fit the new
MRM_CF and MRM_D7 entries in the right places, things had to be
renumbered and shuffled down, causing a bit more diff.
rdar://16228228
llvm-svn: 214460
address of the stack guard was being spilled to the stack.
Previously the address of the stack guard would get spilled to the stack if it
was impossible to keep it in a register. This patch introduces a new target
independent node and pseudo instruction which gets expanded post-RA to a
sequence of instructions that load the stack guard value. Register allocator
can now just remat the value when it can't keep it in a register.
<rdar://problem/12475629>
llvm-svn: 213967
file not in the test/ area). Backing out now so that this test isn't part of
the 3.5 branch.
Original commit message: "TableGen: Allow AddedComplexity values to be negative
[...]"
llvm-svn: 213596
This is useful for cases when stand-alone patterns are preferred to the
patterns included in the instruction definitions. Instead of requiring
that stand-alone patterns set a larger AddedComplexity value, which
can be confusing to new developers, this allows us to reduce the
complexity of the included patterns to achieve the same result.
llvm-svn: 213521
It's also possible to just write "= nullptr", but there's some question
of whether that's as readable, so I leave it up to authors to pick which
they prefer for now. If we want to discuss standardizing on one or the
other, we can do that at some point in the future.
llvm-svn: 213438
Speculative fix for a -Wframe-larger-than warning from gcc. Clang will
implicitly promote such constant arrays to globals, so in theory it
won't hit this.
llvm-svn: 213298
There are two parts here. First is to modify tablegen to adjust the encoding
type ENCODING_RM with the scaling factor.
The second is to use the new encoding types to compute the correct
displacement in the decoder.
Fixes <rdar://problem/17608489>
llvm-svn: 213281