* Add GOMP versioned pause functions
* Add GOMP versioned affinity format functions
For the affinity format functions, attach versioned symbols only to
the APPEND Fortran entries (e.g., omp_set_affinity_format_), since
GOMP exports only two symbols (one for Fortran, one for C) while our
affinity format functions have three symbols.
e.g., with omp_set_affinity_format:
1) omp_set_affinity_format (Fortran interface)
2) omp_set_affinity_format_ (Fortran interface)
3) ompc_set_affinity_format (C interface)
Have the GOMP version of the C symbol alias the ompc_* version (3)
instead of the unappended Fortran version (1).
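For illustration, a minimal sketch of the aliasing scheme using GNU
toolchain features. The runtime's real versioning macros differ, and
the alias name and version tag below are illustrative only:

    // Sketch only: alias a GOMP-versioned C entry point to the
    // ompc_* symbol (3) instead of the unappended Fortran symbol (1).
    extern "C" void ompc_set_affinity_format(char const *format) {
      // ... real implementation lives in the runtime ...
    }
    extern "C" void GOMP_alias_set_affinity_format(char const *format)
        __attribute__((alias("ompc_set_affinity_format")));
    // "VERSION" stands in for the actual GOMP version node name.
    __asm__(".symver GOMP_alias_set_affinity_format,"
            "omp_set_affinity_format@VERSION");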
Differential Revision: https://reviews.llvm.org/D103647
Remove the odd checks that pass a NULL mask argument to syscall();
Valgrind reports these as erroneous uses of the syscall.
Instead, just check whether CACHE_LINE bytes is long enough; if not,
search for the required size. Also, by limiting the first
size-detection attempt to CACHE_LINE bytes instead of 1MB, we use no
more than one cache line for the mask. Before this patch, the
returned mask size was sometimes 640 bytes (10 cache lines) because
the initial getaffinity() call was limited only by the internal
kernel mask size, which can be very large.
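A sketch of the size-detection loop described above, using the glibc
wrapper for illustration (the runtime calls the raw syscall, and
CACHE_LINE here stands in for the runtime's own constant):

    #include <sched.h>
    #include <cerrno>
    #include <cstdlib>

    constexpr size_t CACHE_LINE = 64; // illustrative stand-in

    size_t detect_mask_size() {
      size_t size = CACHE_LINE; // start small: one cache line
      for (;;) {
        cpu_set_t *mask = static_cast<cpu_set_t *>(malloc(size));
        if (!mask)
          return 0;
        int rc = sched_getaffinity(0, size, mask);
        free(mask);
        if (rc == 0)
          return size; // current size is large enough
        if (errno != EINVAL)
          return 0; // unexpected failure; give up
        size *= 2; // buffer smaller than the kernel mask; grow it
      }
    }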
Differential Revision: https://reviews.llvm.org/D103637
Lazily set affinity for root threads. Previously, the root thread
executing middle initialization would attempt to assign affinity to
other existing root threads. This was not working properly because
the set_system_affinity() function wasn't setting the affinity of the
target thread; instead, the middle-init thread was resetting its own
affinity using the target thread's affinity mask.
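A sketch of the distinction (illustrative, not the runtime's code):
the underlying calls apply the mask to the calling thread, so with
the lazy scheme each root thread applies its own mask when it runs:

    #include <pthread.h> // g++/glibc: _GNU_SOURCE is predefined
    #include <sched.h>

    // Called by each root thread on its own behalf the next time it
    // enters the runtime, instead of from the middle-init thread.
    void apply_own_affinity(const cpu_set_t &mask) {
      // The mask is applied to the calling thread, which is exactly
      // the thread the mask belongs to.
      pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &mask);
    }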
Differential Revision: https://reviews.llvm.org/D103625
This patch builds on D100521 and other related patches to add support
for unwinding stack on AArch64 systems with pointer authentication
feature enabled.
We override the FixCodeAddress and FixDataAddress functions in the
ABISysV_arm64 class. We now try to calculate and set code and data
masks after reading the data_mask and code_mask registers exposed by
AArch64 targets running Linux.
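A simplified sketch of what such a fix hook does with the mask (the
actual LLDB implementation handles more cases):

    #include <cstdint>

    // Strip pointer-authentication bits using the mask derived from
    // the code_mask/data_mask registers. For high (kernel-half)
    // addresses, where bit 55 is set, the ignored bits must be set
    // rather than cleared.
    uint64_t FixAddress(uint64_t addr, uint64_t mask) {
      return (addr & (1ULL << 55)) ? (addr | mask) : (addr & ~mask);
    }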
This patch uses the core file linux-aarch64-pac.core to test that
LLDB can successfully unwind stack frames in the presence of signed
return addresses after masking off the ignored bits.
This patch also includes an AArch64 Linux native test case to
demonstrate successful backtrace calculation in the presence of the
pointer authentication feature.
Differential Revision: https://reviews.llvm.org/D99944
Some instructions are not defined well enough within the target’s
scheduling model for llvm-mca to be able to properly simulate their
behaviour. The ideal
solution to this situation is to modify the scheduling model, but that’s not
always a viable strategy. Maybe other parts of the backend depend on that
instruction being modelled the way that it is. Or maybe the instruction is quite
complex and it’s difficult to fully capture its behaviour with tablegen. The
CustomBehaviour class (which I will refer to as CB frequently) is designed to
provide intuitive scaffolding for developers to implement the correct modelling
for these instructions.
Implementation details:
llvm-mca does its best to extract relevant register, resource, and memory
information from every MCInst when lowering them to an mca::Instruction. It then
uses this information to detect dependencies and simulate stalls within the
pipeline. For some instructions, the information that gets captured within the
mca::Instruction is not enough for mca to simulate them properly. In these
cases, there are two main possibilities:
1. The instruction has a dependency that isn’t detected by mca.
2. mca is incorrectly enforcing a dependency that shouldn’t exist.
For the rest of this discussion, I will be focusing on (1), but I have put some
thought into (2) and I may revisit it in the future.
So we have an instruction that has dependencies that aren’t picked up by mca.
The basic idea for both pipelines in mca is that when an instruction wants to be
dispatched, we first check for register hazards and then we check for resource
hazards. This is where CB is injected. If no register or resource hazards have
been detected, we make a call to CustomBehaviour::checkCustomHazard() to give
the target specific CB the chance to detect and enforce any custom dependencies.
The return value for checkCustomHazard() is an unsigned int representing the
(minimum) number of cycles that the instruction needs to stall for. It’s fine to
underestimate this value because when StallCycles gets down to 0, we’ll end up
checking for all the hazards again before the instruction is actually
dispatched. However, it’s important not to overestimate the value and the more
accurate your estimate is, the more efficient mca’s execution can be.
In general, for checkCustomHazard() to be able to detect these custom
dependencies, it needs information about the current instruction and also all of
the instructions that are still executing within the pipeline. The mca pipeline
uses mca::Instruction rather than MCInst and the current information encoded
within each mca::Instruction isn’t sufficient for my use cases. I had to add a
few extra attributes to the mca::Instruction class and have them set
from the MCInst during instruction building. For example, the current
mca::Instruction
doesn’t know its opcode, and it also doesn’t know anything about its immediate
operands (both of which I had to add to the class).
With information about the current instruction, a list of all currently
executing instructions, and some target specific objects (MCSubtargetInfo and
MCInstrInfo which the base CB class has references to), developers should be
able to detect and enforce most custom dependencies within checkCustomHazard. If
you need more information than is present in the mca::Instruction, feel free to
add attributes to that class and have them set during the lowering sequence from
MCInst.
Fortunately, in the in-order pipeline, it’s very convenient for us to pass these
arguments to checkCustomHazard. The hazard checking is taken care of within
InOrderIssueStage::canExecute(). This function takes a const InstRef as a
parameter (representing the instruction that currently wants to be dispatched)
and the InOrderIssueStage class maintains a SmallVector<InstRef, 4> which holds
all of the currently executing instructions. For the out-of-order pipeline, it’s
a bit trickier to get the list of executing instructions and this is why I have
held off on implementing it myself. This is the main topic I will bring up when
I eventually make a post to discuss and ask for feedback.
CB is a base class where targets implement their own derived classes. If a
target specific CB does not exist (or we pass in the -disable-cb flag), the base
class is used. This base class trivially returns 0 from its checkCustomHazard()
implementation (meaning that the current instruction needs to stall
for 0 cycles, i.e., no hazard is detected). For this reason, targets
or users who choose not to
use CB shouldn’t see any negative impacts to accuracy or performance (in
comparison to pre-patch llvm-mca).
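A skeletal derived class, as a sketch; the constructor and hook
signature follow this patch's description and may have changed since:

    #include "llvm/MCA/CustomBehaviour.h"
    using namespace llvm;

    // Minimal target CB: reports no custom hazards, exactly like the
    // base class. A real target would inspect IR against IssuedInst.
    class MyTargetCustomBehaviour : public mca::CustomBehaviour {
    public:
      using mca::CustomBehaviour::CustomBehaviour;

      unsigned checkCustomHazard(ArrayRef<mca::InstRef> IssuedInst,
                                 const mca::InstRef &IR) override {
        // Return the (minimum) number of cycles IR must stall;
        // 0 means no custom hazard was detected.
        return 0;
      }
    };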
Differential Revision: https://reviews.llvm.org/D104149
We can look through invariant group intrinsics for the purposes of
simplifying the result of a load.
Intrinsics can't be constants, but we also don't want to completely
rewrite load constant folding, so we convert the load operand to a
constant. For GEPs and bitcasts we just treat them as constants; for
invariant group intrinsics, we treat them as a bitcast.
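A sketch of the look-through step using names from LLVM's C++ API,
simplified relative to the actual patch:

    #include "llvm/IR/IntrinsicInst.h"
    using namespace llvm;

    // Treat llvm.launder/strip.invariant.group like a bitcast: step
    // through to the underlying pointer when folding a load.
    static Value *lookThroughInvariantGroup(Value *V) {
      if (auto *II = dyn_cast<IntrinsicInst>(V)) {
        Intrinsic::ID ID = II->getIntrinsicID();
        if (ID == Intrinsic::launder_invariant_group ||
            ID == Intrinsic::strip_invariant_group)
          return II->getArgOperand(0);
      }
      return V;
    }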
Relanding with a check for self-referential values.
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D101103
The index cast operation accepts vector types. Implement its lowering in this patch.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D104280
Commit c98ebda325 ("Rename fp-op fusion option (yet again) for
compatibility with GCC option") renamed the option; the comment in
the header should be updated too to avoid confusion.
This has been broken out of D104170 since it should be merged whether or
not we go ahead with the module map changes.
Differential Revision: https://reviews.llvm.org/D104175
Moving the definition of the defineXLCompatMacros function from
the header file clang/lib/Basic/Targets/PPC.h to the source file
clang/lib/Basic/Targets/PPC.cpp.
Differential revision: https://reviews.llvm.org/D104125
We have added STXVP/LXVP for spilling and restoring the registers
but we neglected to add FI elimination code for these. The result
is that we end up producing impossible MachineInstrs that have
register operands in place of immediates.
Remove the compatibility spellings of `OF_{None,Text,Append}` that
were left behind by 1f67a3cba9.
No functionality change here, just an API cleanup.
Differential Revision: https://reviews.llvm.org/D101506
Fixes:
- PR36507 Floating point varargs are not handled correctly with
-mno-implicit-float
- PR48528 __builtin_va_start assumes it can pass SSE registers
when using -Xclang -msoft-float -Xclang -no-implicit-float
On x86_64, floating-point parameters are normally passed in XMM
registers. For va_start, we spill those to memory so va_arg can
find them. There is an interaction here with -msoft-float and
-no-implicit-float:
When -msoft-float is in effect, instead of passing floating-point
parameters in XMM registers, they are passed in general-purpose
registers.
When -no-implicit-float is in effect, it "disables implicit
floating-point instructions" (per the LangRef). The intended
effect is to not have the compiler generate floating-point code
unless explicit floating-point operations are present in the
source code, but what exactly counts as an explicit floating-point
operation is not specified. The existing behavior of LLVM here has
led to some surprises and PRs.
This change modifies the behavior as follows:
| soft | no-implicit | old behavior | new behavior |
| no | no | spill XMM regs | spill XMM regs |
| yes | no | don't spill XMM | don't spill XMM |
| no | yes | don't spill XMM | spill XMM regs |
| yes | yes | assert | don't spill XMM |
In particular, this avoids the assert that happens when
-msoft-float and -no-implicit-float are both in effect. This
seems like a perfectly reasonable combination: If we don't want
to rely on hardware floating-point support, we want to both
avoid using float registers to pass parameters and avoid having
the compiler generate floating-point code that wasn't in the
original program. Instead of crashing the compiler, the new
behavior is to not synthesize floating-point code in this
case. This fixes PR48528.
The other interesting case is when -no-implicit-float is in
effect, but -msoft-float is not. In that case, any floating-point
parameters that are present will be in XMM registers, and so we
have to spill them to correctly handle those. This fixes
PR36507. The spill is conditional on %al indicating that
parameters are present in XMM registers, so no floating-point
code will be executed unless the function is called with
floating-point parameters.
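For illustration, a varargs function like this exercises the spill
logic; with floats present, %al carries the number of XMM registers
used by the call:

    #include <cstdarg>

    // Sums n double arguments; va_start forces the prologue to spill
    // any parameter-passing XMM registers to the register save area
    // (guarded at runtime by %al on x86-64).
    double sum(int n, ...) {
      va_list ap;
      va_start(ap, n);
      double total = 0;
      for (int i = 0; i < n; ++i)
        total += va_arg(ap, double);
      va_end(ap);
      return total;
    }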
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D104001
Similar to SHADOW_OFFSET in asan, we can use this for hwasan so that
platforms that use a constant value for the start of shadow memory
can just use the constant rather than access a global.
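A sketch of the address computation this enables. The offset value
below is illustrative only; hwasan maps one shadow byte per 16-byte
granule:

    #include <cstdint>

    constexpr uintptr_t kShadowOffset = 0x4000000000; // hypothetical

    inline uint8_t *shadow_for(uintptr_t addr) {
      // One shadow byte covers a 16-byte granule; with a constant
      // base, no global load is needed to find the shadow.
      return reinterpret_cast<uint8_t *>((addr >> 4) + kShadowOffset);
    }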
Differential Revision: https://reviews.llvm.org/D104275
The addition of this pass was botched. There is no particular reason
why it had to be sold as an inseparable part of the new-pm transition.
It was added when the old-pm was still the default, and very *very*
few users were actually tracking new-pm, so its effects weren't
measured. That means some of the turmoil of the new-pm transition is
actually likely regressions due to this pass.
Likewise, there have been a number of post-commit reports
(post new-pm switch), namely
* https://reviews.llvm.org/D37467#2787157 (regresses HW-loops)
* https://reviews.llvm.org/D37467#2787259 (should not be in middle-end, should run after LSR, not before)
* https://reviews.llvm.org/D95789 (an attempt to fix bad loop backedge metadata)
and in the half year since, the pass authors (Google) still haven't
found time to respond to any of that.
Hereby it is proposed to back out the pass from the pipeline, until
someone who cares about it can address the issues reported, and
properly start the process of adding a new pass into the pipeline,
with proper performance evaluation.
Furthermore, neither Google nor Facebook reports any perf changes
from this change, so I'm dropping the pass completely. It can always
be re-reverted should anyone want to pick it up again.
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D104099
DWARF doesn't describe templates themselves, only actual template
instantiations. Because of that, LLDB has to infer the parameters of
class template declarations from the actual instantiations when
creating the internal Clang AST from debug info.
Because there is no dedicated DIE for a class template, LLDB also
creates the `ClassTemplateDecl` implicitly when parsing a template
instantiation. To avoid creating one ClassTemplateDecl for every
instantiation, `TypeSystemClang::CreateClassTemplateDecl` checks
whether there is already a `ClassTemplateDecl` in the requested
`DeclContext` and reuses a fitting declaration if one is found.
The logic that checks whether a found class template fits an
instantiation currently just compares the name of the template. So
right now we map `template<typename T> struct S;` to an instantiation
with the values `S<1, 2, 3>` even though they clearly don't belong
together.
This causes crashes later on when, for example, the Itanium mangler's
`TemplateArgManglingInfo::needExactType` method tries to find the
class template parameter that fits an instantiation value. In the
example above, it will try to find the parameter for the value `2`
but will just trigger a bounds check when retrieving the parameter
with index 1 from the class template.
There are two ways we can end up with an instantiation that doesn't
fit a class template with the same name (see the example after this
list):
1. We have two TUs with two templates that have the same name and
internal linkage.
2. A forward-declared template instantiation is emitted by GCC and
Clang with an empty list of parameter values.
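For example, case 1 can arise from source like this (illustrative):

    // TU1.cpp: internal-linkage template named 'S'.
    namespace { template <typename T> struct S { T t; }; }

    // TU2.cpp: a different internal-linkage template, same name 'S'.
    namespace { template <int A, int B, int C> struct S {}; }
    S<1, 2, 3> s; // must not be attached to TU1's template 'S'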
This patch makes the check for whether a class template declaration
can be reused more sophisticated by also comparing whether the
parameter values can fit the found class template. If we can't find a
fitting class template, we just create a second class template with
the fitting parameters.
Fixes rdar://76592821
Reviewed By: kastiglione
Differential Revision: https://reviews.llvm.org/D100662
This commit adds nodes that might not always be used, which the
expensive checks builder does not like. Reverting for now to think up a
better way of handling it.
Pointee types are going away soon.
For this, we mostly just care about store/load types, which are already
available without the pointee types. The other intrinsics always use
i8*.
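For instance, the types in question are available directly from the
instructions via the standard LLVM API:

    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // No pointee type needed: loads and stores carry their own types.
    static Type *memOpType(Instruction &I) {
      if (auto *LI = dyn_cast<LoadInst>(&I))
        return LI->getType(); // result type of the load
      if (auto *SI = dyn_cast<StoreInst>(&I))
        return SI->getValueOperand()->getType(); // stored value type
      return nullptr;
    }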
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D103719
As a minor adjustment to the existing lowering of offset scatters, this
extends any smaller-than-legal vectors into full vectors using a zext,
so that the truncating scatters can be used. Due to the way MVE
legalizes the vectors this should be cheap in most situations, and will
prevent the vector from being scalarized.
Differential Revision: https://reviews.llvm.org/D103704
The initial scan occurring before the watcher is ready allows a race
condition where a change occurs before the initial scan completes.
Ensure that we wait for the watcher to begin executing the initial scan.
Addresses some feedback from Adrian McCarthy in post-commit review.
A pointer will always fit into an i32, so an rq offset gather/scatter can
be used with v4i8 and v4i16 gathers, using a base of 0 and the Ptr as
the offsets. The rq gather can then correctly extend the type, allowing
us to use the gathers without falling back to scalarizing.
This patch rejigs tryCreateMaskedGatherOffset in the
MVEGatherScatterLowering pass to decompose the Ptr into Base:0 +
Offset:Ptr (with a scale of 1), if the Ptr could not be decomposed from
a GEP. v4i32 gathers will already use qi gathers, this extends that to
v4i8 and v4i16 gathers using the extending rq variants.
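Conceptually, the decomposition looks like this; a sketch with a
hypothetical helper, not the pass's actual code:

    #include "llvm/IR/IRBuilder.h"
    #include <utility>
    using namespace llvm;

    // If the pointer vector didn't come from a GEP, gather through
    // base 0 with the pointers themselves as byte offsets (scale 1);
    // pointers fit in i32 on MVE targets, so ptrtoint is lossless.
    static std::pair<Value *, Value *>
    decomposeAsBaseZero(IRBuilder<> &B, Value *PtrVec, unsigned Lanes) {
      Value *Base = Constant::getNullValue(B.getInt8PtrTy());
      auto *OffTy = FixedVectorType::get(B.getInt32Ty(), Lanes);
      Value *Offsets = B.CreatePtrToInt(PtrVec, OffTy);
      return {Base, Offsets};
    }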
Differential Revision: https://reviews.llvm.org/D103674
This adds support for functions outlined by the IR Outliner to be
recognized by the debugger. The expected behavior is that the
debugger will skip over the instructions included in that section,
because we cannot say which of the original locations the
instructions originated from.
These functions will show up in the call stack, but you cannot step
through them.
Reviewers: paquette, vsk, djtodoro
Differential Revision: https://reviews.llvm.org/D87302
This patch adds the 4th Fortran-specific semantic check for the OpenMP
allocate directive: "If a list item has the SAVE attribute, is a common
block name, or is declared in the scope of a module, then only predefined
memory allocator parameters can be used in the allocator clause".
Code in this patch was based on code from https://reviews.llvm.org/D93549/new/.
Differential Revision: https://reviews.llvm.org/D102400
https://eel.is/c++draft/atomics.types.operations#23 says: ... the value of failure is order except that a value of `memory_order::acq_rel` shall be replaced by the value `memory_order::acquire` and a value of `memory_order::release` shall be replaced by the value `memory_order::relaxed`.
This failure mapping is only handled for `_LIBCPP_HAS_GCC_ATOMIC_IMP`. We are seeing bad code generation for `compare_exchange_strong(cmp, 1, std::memory_order_acq_rel)` when using libc++ in place of libstdc++: https://godbolt.org/z/v3onrrq4G.
This was caught by tsan tests after D99434, `[TSAN] Honor failure memory orders in AtomicCAS`, but appears to be an issue in non-tsan code.
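The problematic pattern from the report looks like this; per the
standard, the derived failure order here must be acquire, not
acq_rel:

    #include <atomic>

    bool try_update(std::atomic<int> &a, int &expected) {
      // Single-order overload: [atomics.types.operations]p23 maps the
      // failure order acq_rel -> acquire (and release -> relaxed).
      return a.compare_exchange_strong(expected, 1,
                                       std::memory_order_acq_rel);
    }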
Reviewed By: ldionne, dvyukov
Differential Revision: https://reviews.llvm.org/D103846
Resubmission of D100646 now making sure that we handle cases where `__builtin_memcpy_inline` is not available.
Original commit message:
Each of these elementary operations can be assembled to support
higher-order constructs (Overlapping access, Loop, Aligned Loop).
The patch does not compile yet as it depends on other patches
(D100571, D100631), but it allows us to get the conversation started.
A self-contained version of this code is available at https://godbolt.org/z/e1x6xdaxM
This adjusts some of how the gather/scatter lowering pass passes around
data and where certain gathers/scatters are created from. It should
not affect code generation on its own, but allows other patches to more
clearly reason about the code.
A number of extra test cases were also added for smaller gathers/
scatters that can be extended, and some of the test comments were
updated.
As was reported on PR50620, the X86LbrCounter destructor was double-closing the file descriptor and not unmapping the buffer.
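A generic sketch of the fix pattern, not the actual class layout:

    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstddef>

    struct CounterState {
      void *Buf = nullptr;
      size_t BufSize = 0;
      int FD = -1;

      ~CounterState() {
        if (Buf)
          munmap(Buf, BufSize); // was previously leaked
        if (FD >= 0) {
          close(FD);
          FD = -1; // ensure the descriptor is closed exactly once
        }
      }
    };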
Differential Revision: https://reviews.llvm.org/D104201
It may be desirable to provide an interface implementation for an
attribute or a type without modifying the definition of said
attribute or type. Notably, this makes it possible to implement
interfaces for attributes and types outside of the dialect that
defines them and, in particular, to provide interfaces for built-in
types. Provide the mechanism to do so.
Currently, separable registration requires the attribute or type to have been
registered with the context, i.e. for the dialect containing the attribute or
type to be loaded. This can be relaxed in the future using a mechanism similar
to delayed dialect interface registration.
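A hedged sketch of how this might look for a built-in type, following
the ExternalModel/attachInterface pattern this patch introduces; the
interface name here is hypothetical and exact signatures may differ:

    // Assume SizedTypeInterface is some existing type interface.
    struct IndexTypeModel
        : public SizedTypeInterface::ExternalModel<IndexTypeModel,
                                                   mlir::IndexType> {
      unsigned getSizeInBits(mlir::Type type) const { return 64; }
    };

    void registerModels(mlir::MLIRContext &ctx) {
      // Requires the dialect owning IndexType to be loaded in ctx.
      mlir::IndexType::attachInterface<IndexTypeModel>(ctx);
    }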
See https://llvm.discourse.group/t/rfc-separable-attribute-type-interfaces/3637
Depends On D104233
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D104234
The patch replaces the existing capture functionality with the scalar operands introduced by https://reviews.llvm.org/D104109. Scalar operands behave as tensor operands except for the fact that they are not indexed. As a result, ScalarDefs can be accessed directly since no indexing expression is needed.
The patch only updates the OpDSL. The C++ side is updated by a follow up patch.
Differential Revision: https://reviews.llvm.org/D104220
This has already been implemented in be2277fbf2 which adds
pragma fp support. This patch just adds test coverage for
regular fast-math flags (PR46165).
This is part of an effort to reduce the differences between the custom C++ bindings used right now by polly in `lib/External/isl/include/isl/isl-noexceptions.h` and the official isl C++ interface.
Changes made:
- Removing method `to_str()` from all the classes in the isl C++ bindings.
- Overload method `stringFromIslObj()` so it accepts isl C++ objects.
- To keep backward compatibility, `stringFromIslObj()` now accepts a value that is returned if the isl C object is `null` or doesn't have a string representation (by default it's an empty string). In some cases it's better to have the string "null" instead of an empty string.
- isl-noexceptions.h has been generated by d33ec3a3bb
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D104211
Need to capture locals in aligned clauses for the combined directives
to fix the crash in the codegen.
Differential Revision: https://reviews.llvm.org/D104258