The current table-generated assembly instruction matcher returns a
64-bit error code when matching fails. Since multiple instruction
encodings with the same mnemonic can fail for different reasons, it uses
a heuristic to decide which message is most important.
This heuristic does not work well for targets that have many encodings
with the same mnemonic but different operands, or which have different
versions of instructions controlled by subtarget features, as it is hard
to know which encoding the user was intending to use.
Instead of trying to improve the heuristic in the table-generated
matcher, this patch changes it to report a list of near-miss encodings.
This list contains an entry for each encoding with the correct mnemonic,
but with exactly one problem preventing it from being valid: a single
invalid operand, a missing target feature, or a failed target-specific
validation function.
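As a rough sketch of what one of these entries might carry (the names
here are hypothetical, not necessarily the types this patch adds):

    // Hypothetical near-miss record; illustrative only, not the patch's API.
    #include <cstdint>

    struct NearMiss {
      enum Kind { InvalidOperand, MissingFeature, FailedPredicate } TheKind;
      unsigned OperandIndex;    // offending operand, for InvalidOperand
      uint64_t MissingFeatures; // required feature bits, for MissingFeature
      unsigned PredicateError;  // target-specific code, for FailedPredicate
    };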
The target-specific assembly parser can then report an error message
giving multiple options for instruction variants that the user may have
been trying to use. For example, I am working on a patch to use this for
ARM, which can give this error for an invalid instruction for ARMv6-M:
<stdin>:8:3: error: invalid instruction, multiple near-miss encodings found
  adds r0, r1, #0x8
  ^
<stdin>:8:3: note: for one encoding: instruction requires: thumb2
  adds r0, r1, #0x8
  ^
<stdin>:8:16: note: for one encoding: expected an integer in range [0, 7]
  adds r0, r1, #0x8
               ^
<stdin>:8:16: note: for one encoding: expected a register in range [r0, r7]
  adds r0, r1, #0x8
               ^
This also allows the target-specific assembly parser to apply its own
heuristics to suppress some errors. For example, the error "instruction
requires: arm-mode" is never going to be useful when targeting an
M-profile architecture (which does not have ARM mode).
This patch just adds the target-independent mechanism for doing this;
all targets still use the old mechanism. I've added a bit in the
AsmParser tablegen class to allow targets to switch to this new
mechanism. To use this, the target-specific assembly parser will have to
be modified for the change in signature of MatchInstructionImpl, and to
report errors based on the list of near-misses.
Differential revision: https://reviews.llvm.org/D27620
llvm-svn: 314774
This adds some more debug messages to the type legalizer and to
functions like PromoteNode, ExpandNode, and ExpandLibCall, in an attempt
to make the debug output a little more informative and useful.
Differential Revision: https://reviews.llvm.org/D38450
llvm-svn: 314773
This was previously being silently dropped by obj2yaml and caused
parsing errors with yaml2obj.
Differential Revision: https://reviews.llvm.org/D38490
llvm-svn: 314768
This makes sure the LSDA pointer isn't truncated to 32 bits.
Make LowerINTRINSIC_WO_CHAIN a member function instead of a static
function, so that it can use the getGlobalWrapperKind method.
This solves the second half of the issues mentioned in PR34720.
Differential Revision: https://reviews.llvm.org/D38343
llvm-svn: 314767
Summary:
This change removes the dependency on using a std::deque<...> for the
storage of the buffers in the buffer queue. We instead implement a
fixed-size circular buffer that's resilient to exhaustion, and preserves
the semantics of the BufferQueue.
We're moving away from using std::deque<...> for two reasons:
- We want to remove dependencies on the STL for data structures.
- We want the data structure we use to not require re-allocation in
the normal course of operation.
The internal implementation of the buffer queue uses heap-allocated
arrays that are initialized once when the BufferQueue is created, and
reuses slots in the buffer array, in order, as buffers are returned.
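As a minimal sketch of that idea (assuming nothing about the actual
BufferQueue internals):

    // Fixed-size ring over a once-allocated array; slots are reused in
    // FIFO order and no reallocation ever happens after construction.
    #include <cstddef>

    template <typename T> class Ring {
      T *Slots;
      size_t Cap, Head = 0, Tail = 0, Count = 0;

    public:
      explicit Ring(size_t N) : Slots(new T[N]), Cap(N) {}
      ~Ring() { delete[] Slots; }

      bool push(const T &V) { // fails instead of growing when exhausted
        if (Count == Cap)
          return false;
        Slots[Tail] = V;
        Tail = (Tail + 1) % Cap;
        ++Count;
        return true;
      }

      bool pop(T &V) {
        if (Count == 0)
          return false;
        V = Slots[Head];
        Head = (Head + 1) % Cap;
        --Count;
        return true;
      }
    };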
We also change the lock used in the implementation from a blocking
mutex to a spinlock. Since the release operations now spend very little
time in the critical section, a spinlock is appropriate.
This change is related to D38073.
Reviewers: dblaikie, kpw, pelikan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38119
llvm-svn: 314766
Summary:
We avoid using C++11's thread_local keyword on non-trivially
destructible objects because registering their destructors may cause the
C++ runtime to call std::malloc(...), which can deadlock when the
allocator implementation is itself XRay-instrumented.
To avoid having to call malloc(...) and free(...) in particular, we use
pthread_once, pthread_key_create, and pthread_setspecific to register
the cleanup implementation we want manually.
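A sketch of that pattern (illustrative; the actual cleanup callback and
thread state are XRay's own):

    #include <pthread.h>

    static pthread_key_t Key;
    static pthread_once_t Once = PTHREAD_ONCE_INIT;

    // Runs at thread exit with the value passed to pthread_setspecific();
    // no malloc/free is needed to register it.
    static void cleanup(void *ThreadState) {
      // Flush and release the thread-local state here.
    }

    static void makeKey() { pthread_key_create(&Key, cleanup); }

    void registerThreadCleanup(void *ThreadState) {
      pthread_once(&Once, makeKey);
      pthread_setspecific(Key, ThreadState);
    }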
The code this replaces used an RAII type that implements the cleanup
functionality in its destructor and was initialized as a function-local
thread_local object. While this works in usual situations, it
unfortunately breaks when the malloc implementation in use is itself
XRay-instrumented.
Reviewers: dblaikie, kpw, pelikan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38073
llvm-svn: 314764
Summary:
When checking if a constant expression is a noop cast, we fetched the
IntPtrType by doing DL->getIntPtrType(V->getType()). However, there can
be cases where V's type isn't a pointer type, and then getIntPtrType()
triggers an assertion.
Now we pass DataLayout to isNoopCast so the method itself can determine
what the IntPtrType is.
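Roughly, the call-site change looks like this (simplified sketch of the
before/after):

    // Before: the caller computed the pointer-sized integer type itself,
    // which asserts when V is not a pointer:
    //   CI->isNoopCast(DL->getIntPtrType(V->getType()));
    // After: hand isNoopCast the DataLayout and let it decide:
    //   CI->isNoopCast(DL);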
Reviewers: arsenm
Reviewed By: arsenm
Subscribers: wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D37894
llvm-svn: 314763
Apparently this works by virtue of the fact that the pointers are pointers to the APInts stored inside the ConstantInt objects. But I really don't think we should be relying on that.
llvm-svn: 314761
The previous code was a bit puzzling because of its use of pointers.
In this patch, we pass a vector and its offsets instead of pointers to
vector elements.
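As a hypothetical illustration of the shape of this change (the names
are invented):

    #include <cstddef>
    #include <vector>

    // Before: a raw pointer into the vector's storage; it hides the
    // container and is invalidated if the vector reallocates.
    int elementOld(const int *Elt) { return *Elt; }

    // After: the vector plus an offset; self-describing at call sites
    // and stable across reallocation.
    int elementNew(const std::vector<int> &V, size_t Offset) {
      return V[Offset];
    }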
llvm-svn: 314756
Don't emit alignment checks which the IR constant folder throws away.
I've tested this out on X86FastISel.cpp. While this doesn't decrease
end-to-end compile-time significantly, it results in 122 fewer type
checks (1% reduction) overall, without adding any real complexity.
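Conceptually, an alignment check tests the low bits of the address;
when the pointer and alignment are compile-time constants, the IR
constant folder reduces the test to a constant and deletes the check,
so there is no point emitting it. A sketch of the test's shape (not the
actual clang code):

    #include <cstdint>

    // The emitted check is essentially this predicate; with constant
    // operands it folds away, so the frontend can skip it up front.
    bool isAligned(uintptr_t Addr, uintptr_t Align) {
      return (Addr & (Align - 1)) == 0;
    }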
Differential Revision: https://reviews.llvm.org/D37544
llvm-svn: 314752
- it made the bots very angry!
I'm not exactly sure why the assertion doesn't hold; if anyone has any insight, I'd appreciate it.
Thanks!
llvm-svn: 314748
In passing:
- change the name of the function to pasteTokens, consistent with the coding standards
- rename CurToken to CurTokenIdx (since it is not the token but its index)
- add doxygen comments documenting some of pasteTokens' functionality
- use parameter names that differ from the data member names.
This will be useful for implementing __VA_OPT__ (https://reviews.llvm.org/D35782#inline-322587)
llvm-svn: 314747
Move error handling code next to the code that returns the error,
and change the error message in order to distinguish it from a similar
error message elsewhere in this file.
llvm-svn: 314745
These are problematic because they apply to everything,
and can easily clobber whatever more specific predicate
you are trying to add to a function.
Currently, instructions use SubtargetPredicate/PredicateControl to
apply this to patterns attached to an instruction definition, but not to
free-standing Pats. Add a wrapper around Pat so the special
PredicateControl requirements can be appended to the final predicate
list, as Mips does.
llvm-svn: 314742
Call ConstantFoldSelectInstruction() to fold cases like the one below,
where all operands are constants and the condition has a mix of true and
false values:
  select <2 x i1> <i1 true, i1 false>, <2 x i8> <i8 0, i8 1>, <2 x i8> <i8 2, i8 3>
This now folds to <2 x i8> <i8 0, i8 3>.
Differential Revision: https://reviews.llvm.org/D38369
llvm-svn: 314741
Previously, LIT would often fail while attempting to set up/configure
the test compiler, typically when attempting to dump the builtin macros.
This sort of failure provided no useful information about what went
wrong with the compiler, making the actual issues hard, if not
impossible, to debug.
This patch changes the LIT configuration to report the failure
explicitly, including the failed compile command and its stdout/stderr
output.
llvm-svn: 314735
Summary:
This avoids using void * as the type of the lattice value and the ugly
casts needed to make that work. (If folks want reference semantics, they
can use a std::reference_wrapper.)
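A minimal sketch of the design, assuming a much-simplified solver:

    #include <map>

    // The lattice value is a template parameter held by value, instead
    // of a void * that every use must cast.
    template <typename LatticeVal> class Solver {
      std::map<unsigned, LatticeVal> Values; // keyed by some value ID

    public:
      void set(unsigned ID, LatticeVal V) { Values[ID] = V; }
      LatticeVal get(unsigned ID) { return Values[ID]; } // no casts needed
    };

    // Callers that really do want reference semantics can instantiate
    // LatticeVal with std::reference_wrapper<T>.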
Reviewers: davide, mssimpso
Subscribers: sanjoy, llvm-commits
Differential Revision: https://reviews.llvm.org/D38476
llvm-svn: 314734
Issues addressed since original review:
- Avoided a bug in the greedy register allocator / machine verifier when
forwarding to a use in an instruction that re-defines the same virtual
register.
- Fixed a bug when forwarding to a use in an EarlyClobber instruction
slot.
- Fixed incorrect forwarding to register definitions that showed up in
the explicit_uses() iterator (e.g. in INLINEASM).
- Moved removal of dead instructions found by
LiveIntervals::shrinkToUses() outside of the loop iterating over
instructions, to avoid deleting instructions while the iterator still
points at them.
- Fixed an ARMLoadStoreOptimizer bug exposed by this change in r311907.
- The pass no longer forwards COPYs to physical register uses, since
doing so can break code that implicitly relies on the physical
register number of the use.
- The pass no longer forwards COPYs to undef uses, since doing so
can break the machine verifier by creating LiveRanges that don't
end on a use (since the undef operand is not considered a use).
[MachineCopyPropagation] Extend pass to do COPY source forwarding
This change extends MachineCopyPropagation to do COPY source
forwarding: uses of a COPY's destination are rewritten to read the
COPY's source directly, e.g. given %2 = COPY %1 followed by a use of
%2, the use can read %1 instead, leaving the COPY dead.
This change also extends the pass so that it can be run during register
allocation, after physical registers have been assigned but before the
virtual registers have been rewritten, which allows it to remove
virtual register COPYs whose LiveIntervals become dead once all of
their uses have been forwarded.
llvm-svn: 314729