First patch in a series adding MC layer support for the Arm Scalable
Matrix Extension.
This patch adds the following features:
sme, sme-i64, sme-f64
The sme-i64 and sme-f64 flags are for the optional I16I64 and F64F64
features, respectively.
If a target supports I16I64 then the following instructions are
implemented:
* 64-bit integer ADDHA and ADDVA variants (D105570).
* SMOPA, SMOPS, SUMOPA, SUMOPS, UMOPA, UMOPS, USMOPA, and USMOPS
instructions that accumulate 16-bit integer outer products into 64-bit
integer tiles.
If a target supports F64F64 then the FMOPA and FMOPS instructions that
accumulate double-precision floating-point outer products into
double-precision tiles are implemented.
Outer products are implemented in D105571.
The reference can be found here:
https://developer.arm.com/documentation/ddi0602/2021-06
Reviewed By: CarolineConcatto
Differential Revision: https://reviews.llvm.org/D105569
Don't use a local MachineOperand copy in SystemZAsmPrinter::PrintAsmOperand()
and change the register, as doing so may break the MRI tracking of register
uses. Use an MCOperand instead.
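Roughly, the shape of the change (a simplified sketch; NewReg stands in
for the actual register computation):

  // Before: a local MachineOperand copy whose register is changed, which
  // may leave MRI's register use lists out of sync.
  //   MachineOperand MO = MI->getOperand(OpNo);
  //   MO.setReg(NewReg);
  // After: print from a standalone MCOperand instead.
  MCOperand MCOp = MCOperand::createReg(NewReg);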
Review: Ulrich Weigand
Differential Revision: https://reviews.llvm.org/D105757
This is the pattern from the description of:
https://llvm.org/PR50816
There might be a way to generalize this to a smaller or more
generic pattern, but I have not found it yet.
https://alive2.llvm.org/ce/z/ShzJoF
define i1 @src(i8 %x) {
%add = add i8 %x, -1
%xor = xor i8 %x, -1
%and = and i8 %add, %xor
%r = icmp slt i8 %and, 0
ret i1 %r
}
define i1 @tgt(i8 %x) {
%r = icmp eq i8 %x, 0
ret i1 %r
}
Annotate LinalgNamedStructuredOps.yaml with a comment stating the file is auto-generated and should not be edited manually.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D105809
Update truncation costs based on the worst case costs from the script in D103695.
Move to using legalized types wherever possible, which allows us to prune the cost tables.
This sets the latency of stores to 1 in the Cortex-A55 scheduling model,
to better match the values given in the software optimization guide.
The latency of a store in normal llvm scheduling does not appear to have
many uses. If the store has no outputs then the latency is somewhat
meaningless (and pre/post increment update operands use the WriteAdr
write for those operands instead). The one place it does alter things is
the latency between a store and the end of the scheduling region, which
can in turn have an effect on the critical path length. As a result, a
latency of 1 is more correct and offers ever-so-slightly better
scheduling of instructions near the end of the block.
The stores are marked as RetireOOO to keep llvm-mca from introducing
stalls where none would exist.
Differential Revision: https://reviews.llvm.org/D105541
Support narrowing conversions to bool in if constexpr conditions
under the C++23 language mode.
Only if constexpr is implemented, as the behavior of static_assert
is already conforming. We still need to work on explicit(bool) to
complete support.
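For illustration, a condition that needs the new allowance (a sketch;
the flag values are made up):

  template <unsigned Flags>
  void dispatch() {
    // C++23: the integral result of Flags & 0x2 may narrow to bool in an
    // if constexpr condition; previously this required writing '!= 0'.
    if constexpr (Flags & 0x2) {
      // ...
    }
  }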
This patch fixes process event handling when the events are broadcast
at launch. To do so, the patch introduces a new listener that fetches
events by hand off the event queue and then resends them to preserve
the event ordering.
Differential Revision: https://reviews.llvm.org/D105698
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
This new test demonstrates a case where a base ptr is generated
twice for the same value: the first one is generated while
the gc.get.pointer.base() is inlined, the second is generated
for the statepoint. This happens because the methods
inlineGetBaseAndOffset() and insertParsePoints() do not share
their defining value cache used by the findBasePointer() method.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D103240
This new test demonstrates a case where a base ptr is generated
twice for the same value: the first one is generated while
the gc.get.pointer.base() is inlined, the second is generated
for the statepoint. This happens because the methods
inlineGetBaseAndOffset() and insertParsePoints() do not share
their defining value cache used by the findBasePointer() method.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D103238
The function is supposed to be the equivalent of rint() (as in
round to nearest, ties to even) rather than round() (round to
nearest, ties away from zero). In fact, the instruction we emit
without VSX is vrfin, which is correct. However, with VSX we emit
xvrspi, which is the equivalent of round() and therefore incorrect.
Since there is no equivalent VSX instruction, simply use vrfin
regardless of the availability of VSX.
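For reference, the semantic difference in standard C library terms
(default rounding mode):

  #include <cmath>
  // rint: ties to even              round: ties away from zero
  //   std::rint(2.5)  == 2.0          std::round(2.5)  == 3.0
  //   std::rint(3.5)  == 4.0          std::round(3.5)  == 4.0
  //   std::rint(-2.5) == -2.0         std::round(-2.5) == -3.0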
sem_trywait never blocks.
Use REAL instead of COMMON_INTERCEPTOR_BLOCK_REAL.
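Schematically (a sketch using the usual sanitizer interceptor macros;
the real interceptor has more bookkeeping):

  INTERCEPTOR(int, sem_trywait, void *s) {
    void *ctx;
    COMMON_INTERCEPTOR_ENTER(ctx, sem_trywait, s);
    // sem_trywait fails with EAGAIN instead of blocking, so plain REAL is
    // enough; COMMON_INTERCEPTOR_BLOCK_REAL is only for blocking calls.
    return REAL(sem_trywait)(s);
  }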
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D105775
The internal allocator adds an 8-byte header for debugging purposes.
The problem with it is that it makes it impossible to allocate
nicely-sized objects without significant overhead. For example, if we
allocate 512-byte objects, they will be rounded up to 768 or so.
This logic was migrated from tsan, where it was added during initial
development; I don't remember it ever catching anything (we don't do
bugs!). Remove it so that it's possible to allocate nicely-sized
objects without overhead.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D105777
OpenMP 5.1 added support for writing OpenMP directives using [[]]
syntax in addition to using #pragma, and this patch introduces support
for the new syntax.
In OpenMP, the attributes take one of two forms:
[[omp::directive(...)]] or [[omp::sequence(...)]]. A directive
attribute contains an OpenMP directive clause that is identical to the
analogous #pragma syntax. A sequence attribute can contain either
sequence or directive arguments and is used to ensure that the
attributes are processed sequentially in situations where the order of
the attributes matters (remember:
https://eel.is/c++draft/dcl.attr.grammar#4.sentence-4).
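For example, the following two spellings are equivalent (a sketch
following the OpenMP 5.1 syntax):

  void zero(int n, float *x) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
      x[i] = 0.0f;
  }

  void zero_attr(int n, float *x) {
    [[omp::directive(parallel for)]]
    for (int i = 0; i < n; ++i)
      x[i] = 0.0f;
  }

  // A sequence attribute pins the relative order of several directives:
  //   [[omp::sequence(directive(parallel), directive(for))]]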
The approach taken here is somewhat novel and deserves mention. We
could refactor much of the OpenMP parsing logic to work for either
pragma annotation tokens or for attribute clauses. It would be a fair
amount of effort to share the logic for both, but it's certainly
doable. However, the semantic attribute system is not designed to
handle the arbitrarily complex arguments that OpenMP directives
contain. Adding support to thread the novel parsed information until we
can produce a semantic attribute would be considerably more effort.
What's more, existing OpenMP constructs are not (often) represented as
semantic attributes. So doing this through Attr.td would be a massive
undertaking that would likely only benefit OpenMP and comes with
additional risks. Rather than walk down that path, I am taking
advantage of the fact that the syntax of the directives within the
directive clause is identical to that of the #pragma form. Once the
parser recognizes that we're processing an OpenMP attribute, it caches
all of the directive argument tokens and then replays them as though
the user wrote a pragma. This reuses the same OpenMP parsing and
semantic logic directly, but does come with a risk if the OpenMP
committee decides to purposefully diverge their pragma and attribute
syntaxes. So, despite this being a novel approach that does token
replay, I think it's actually a better approach than trying to do this
through the declarative syntax in Attr.td.
Previously, comprehensive bufferization of scf.yield did not have enough
information to detect whether the enclosing scf::for bbargs would bufferize
to a buffer equivalent to that of the matching scf::yield operand.
As a consequence, a separate sanity-check step was required to determine
whether bufferization occurred properly.
This late check would miss the case of calling a function in a loop.
Instead, we now pass and update aliasInfo during bufferization, which makes
it possible to improve bufferization of scf::yield and drop that post-pass
check.
Add an example use case that was failing previously.
This slightly modifies the error conditions, which are also updated as part
of this revision.
Differential Revision: https://reviews.llvm.org/D105803
We use 0 for an empty stack id from the stack depot.
Deadlock detector 1 is the only place that uses -1
as a special case; use 0 there as well, because there are
a number of checks of the form "if (stack id) ...".
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D105776
This adds custom lowering for truncating stores when operating on
fixed length vectors in SVE. It also includes a DAG combine to
fold extends followed by truncating stores into non-truncating
stores in order to prevent this pattern appearing once truncating
stores are supported.
Currently truncating stores are not used in certain cases where
the size of the vector is larger than the target vector width.
Differential Revision: https://reviews.llvm.org/D104471
The compile-time assertion is supposed to prevent a double-free caused by
an unexpected combination of preprocessor defines passed by an OMPT tool.
The currently checked defines are not used, so this patch replaces the
check with macros actually used in ompt-multiplex.h.
Reported by: Semih Burak
Differential Revision: https://reviews.llvm.org/D104633
A number of functions in the header have 64-bit-only guards that were
presumably added because some of the functions in those blocks use
vector __int128, which is only available in 64-bit mode.
A more appropriate guard (__SIZEOF_INT128__) has since been added for
those functions, making the 64-bit guards redundant.
This patch removes those guards, as they inadvertently guard code that
uses vector long long, which does not actually require 64-bit mode.
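Schematically (the original 64-bit guard macro is not named here):

  // Guard only the vector __int128 users:
  #if defined(__SIZEOF_INT128__)
  // ... functions using vector __int128 ...
  #endif
  // Functions that only use vector long long need no 64-bit guard.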
Enable clang Thread Safety Analysis for the sanitizers:
https://clang.llvm.org/docs/ThreadSafetyAnalysis.html
Thread Safety Analysis can detect inconsistent locking,
deadlocks, and data races. Without GUARDED_BY annotations
it has limited value, but this does all the heavy lifting
to enable the analysis and allows us to add GUARDED_BY incrementally.
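A minimal sketch of what the annotations enable, assuming the macro
definitions from the Clang documentation linked above:

  class CAPABILITY("mutex") Mutex {
   public:
    void Lock() ACQUIRE();
    void Unlock() RELEASE();
  };

  class Stats {
    Mutex mu;
    int count GUARDED_BY(mu);
   public:
    void Inc() { mu.Lock(); ++count; mu.Unlock(); }  // OK: mu is held
    void Bad() { ++count; }  // warning: writing 'count' requires 'mu'
  };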
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D105716
We have 3 different mutexes (RWMutex, BlockingMutex, __tsan::Mutex),
each with its own set of downsides. I want to unify them under the
name Mutex, but that conflicts with the Mutex in the deadlock detector,
for which it is way too generic a name anyway. Rename it to MutexState.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D105773
This patch adds a helper function to test whether the target architecture
is AArch64 or not. It also tightens the isAArch64* helpers by adding an
extra architecture check.
Reviewed By: DavidSpickett
Differential Revision: https://reviews.llvm.org/D105483
After the Math has been split out of the Standard dialect, the
conversion to the LLVM dialect remained as a huge monolithic pass.
This is undesirable for the same complexity management reasons as having
a huge Standard dialect itself, and is even more confusing given the
existence of a separate dialect. Extract the conversion of the Math
dialect operations to LLVM into a separate library and a separate
conversion pass.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D105702
The `-analyzer-display-progress` option displays the name of the
function currently being analyzed. The name differs between C and C++:
in C++, the argument types are printed as well, in a comma-separated
list, while in C only the function name is displayed, without brackets.
E.g.:
C++: foo(), foo(int, float)
C: foo
In crash traces, the analyzer dumps the location contexts, but the
string is not enough for `-analyze-function` in C++ mode.
This patch addresses the issue by dumping the proper function names
even in stack traces.
Reviewed By: NoQ
Differential Revision: https://reviews.llvm.org/D105708
The trait was inconsistent with the other broadcasting logic here. Also
fix the printing here to use ? rather than -1 in the error message.
Differential Revision: https://reviews.llvm.org/D105748
The AArch64 architecture supports virtual addresses with some of the top
bits ignored. These ignored bits can host memory tags or bit masks that
serve to authenticate the integrity of the address. We need to clear
away the top ignored bits from a watchpoint address to reliably set and
hit watchpoints on addresses containing tags or masks in their top bits.
This patch adds support to watch tagged addresses on AArch64/Linux.
Reviewed By: DavidSpickett
Differential Revision: https://reviews.llvm.org/D101361
This patch templatizes PriorityInlinerOrder so that it can accept any type
of priority metric.
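Illustrative shape only (member names assumed, not the actual
declaration):

  #include <queue>
  #include <utility>

  class CallBase;  // llvm::CallBase in the real code

  template <typename PriorityT>
  class PriorityInlinerOrder {
    // PriorityT supplies the metric and its ordering; higher-priority
    // call sites are popped first.
    std::priority_queue<std::pair<PriorityT, CallBase *>> Heap;
  public:
    void push(CallBase *CB, PriorityT P) { Heap.push({P, CB}); }
    CallBase *pop() {
      CallBase *CB = Heap.top().second;
      Heap.pop();
      return CB;
    }
  };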
Reviewed By: kazu
Differential Revision: https://reviews.llvm.org/D104972
* Adjust strsize so llvm-objdump doesn't complain about it extending
past the end of the file
* Remove symbol that was referencing a deleted section
* Adjust n_sect of the remaining `_main` symbol to point at the right
section
In order to fold calls based on high-level knowledge and control-flow
tracking, it helps to expose the information as a runtime call. The
logic `!SPMD && getTID() == getMasterTID()` was used in various places
and is now encapsulated in `__kmpc_is_generic_main_thread`. As part of
this rewrite we replaced eager computation of arguments with on-demand
computation, which is especially helpful if the calls can be folded and
the arguments consequently need not be computed.
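A sketch of the encapsulation (helper names are taken verbatim from the
logic quoted above; the actual runtime signature may differ):

  // Folds to a constant once the execution mode and thread ids are known.
  int32_t __kmpc_is_generic_main_thread() {
    return !SPMD && getTID() == getMasterTID();
  }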
Differential Revision: https://reviews.llvm.org/D105768
In order to avoid malloc/free, up to NUM_SHARED_VARIABLES_IN_SHARED_MEM
(=64) variables are communicated in dedicated shared memory instead. The
simplification does avoid the need for an "init" and requires "deinit"
only if we ever communicate more than NUM_SHARED_VARIABLES_IN_SHARED_MEM
variables.
Differential Revision: https://reviews.llvm.org/D105767
As with other Attributor interfaces we often want to know if assumed
information was used to answer a query. This is important if only
known information is allowed or if known information can lead to an
early fixpoint. The users have been adjusted but none of them utilizes
the new information yet.
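A hypothetical call-site shape (illustrative names, not the exact
Attributor signatures):

  bool UsedAssumedInformation = false;
  Optional<Value *> SimplifiedV =
      A.getAssumedSimplified(IRPosition::value(V), *this,
                             UsedAssumedInformation);
  // When only known information is acceptable, reject an answer that
  // rests on optimistic (assumed) state.
  if (UsedAssumedInformation && OnlyKnown)
    return indicatePessimisticFixpoint();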
The mappings we were using had a small number of keys, so a vector is
probably better. This allows us to remove the last usage of std::map in
our codebase.
I also used `removeSimulator` to simplify the code a bit further.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D105786
The const version of VPValue::getVPValue still had a default value for
the value index. Remove the default value and use getVPSingleValue
instead, which is the proper function.