The constructor of InOrderIssueStage no longer takes as input a reference to the
target scheduling model. The stage can always query the subtarget to obtain a
reference to the scheduling model.
The ResourceManager is no longer stored internally as a unique_ptr.
Moved a couple of method definitions to the .cpp file.
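A minimal sketch of the on-demand lookup described above, assuming the stage can reach the MCSubtargetInfo (function and parameter names illustrative):
```
#include "llvm/MC/MCSchedule.h"
#include "llvm/MC/MCSubtargetInfo.h"

// Sketch: look up the scheduling model on demand from the subtarget
// instead of caching a reference injected through the constructor.
const llvm::MCSchedModel &getSchedModel(const llvm::MCSubtargetInfo &STI) {
  return STI.getSchedModel();
}
```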
MSVC-style RTTI produces loads through a GEP of a local alias which
itself is a GEP. Currently we aren't able to devirtualize any virtual
calls when MSVC RTTI is enabled.
This patch attempts to simplify a load's GEP operand by calling
SymbolicallyEvaluateGEP() with an option to look through local aliases.
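For illustration, a minimal C++ case of the kind this should enable (a sketch, not a test from the patch):
```
// The dynamic type is statically known, so the call should be
// devirtualized and folded to 42. Under MSVC-style RTTI the vtable
// load goes through a GEP of a local alias (itself a GEP), which
// previously hid the underlying global from the analysis.
struct S {
  virtual int f() { return 42; }
};

int known_type() {
  S s;
  return s.f();
}
```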
Differential Revision: https://reviews.llvm.org/D101100
If an instruction's AVL operand is a PHI node in the same block,
we may be able to peek through the PHI to find vsetvli instructions
that produce the AVL in other basic blocks. If we can prove those
vsetvli instructions have the same VTYPE and were the last vsetvli
in their respective blocks, then we don't need to insert a vsetvli
for this pseudo instruction.
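A toy model of the check (all types and names hypothetical; the real pass works on MachineInstrs):
```
#include <optional>
#include <vector>

// Toy model: an AVL defined by a PHI needs no new vsetvli if every
// incoming value is produced by a vsetvli with the required VTYPE that
// was also the last vsetvli executed in its predecessor block.
struct IncomingVSetVLI {
  unsigned VType;   // VTYPE written by the defining vsetvli
  bool LastInBlock; // no later vsetvli clobbers the state in that block
};

bool phiAVLNeedsNoVSetVLI(
    const std::vector<std::optional<IncomingVSetVLI>> &IncomingDefs,
    unsigned RequiredVType) {
  for (const auto &Def : IncomingDefs)
    if (!Def || !Def->LastInBlock || Def->VType != RequiredVType)
      return false; // unknown def, clobbered state, or VTYPE mismatch
  return true;
}
```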
Reviewed By: rogfer01
Differential Revision: https://reviews.llvm.org/D103277
Arguments need to have the proper ABI parameter attributes set.
Follow-up to D101806.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D103288
MLIR tools very commonly use `// -----` to split a file into distinct sub-documents, which are processed separately. This revision adds support to mlir-lsp-server for splitting MLIR files based on this sigil and processing them separately.
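A sketch of the splitting step (function name hypothetical; the real implementation lives in mlir-lsp-server):
```
#include <string>
#include <vector>

// Hypothetical sketch: split a .mlir buffer into sub-documents on the
// "// -----" sigil so each chunk can be parsed and diagnosed separately.
std::vector<std::string> splitOnSigil(const std::string &Buffer) {
  const std::string Sigil = "// -----";
  std::vector<std::string> Chunks;
  size_t Start = 0;
  for (size_t Pos = Buffer.find(Sigil); Pos != std::string::npos;
       Pos = Buffer.find(Sigil, Start)) {
    Chunks.push_back(Buffer.substr(Start, Pos - Start));
    Start = Pos + Sigil.size();
  }
  Chunks.push_back(Buffer.substr(Start));
  return Chunks;
}
```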
Differential Revision: https://reviews.llvm.org/D102660
Moved the logic that checks for RAW hazards from the InOrderIssueStage to the
RegisterFile.
Changed how the InOrderIssueStage keeps track of backend stalls. Stall events
are now generated by the notifyStallEvent() method.
No functional change intended.
This is the first in a series of patches to provide builtins for
compatibility with the XL compiler. Most of the builtins already had
intrinsics and only needed to be implemented in the front end.
Intrinsics were created for the three iospace builtins, eieio, and icbt.
Pseudo instructions were created for eieio and iospace_eieio to
ensure that nops were inserted before the eieio instruction.
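For example (builtin spelling assumed from the XL-compatibility naming scheme used by this series):
```
// Hedged usage sketch: the XL-compatible builtin lowers to the new eieio
// intrinsic, and the pseudo instruction ensures the required nops are
// inserted before the eieio.
void order_io_accesses(void) {
  __builtin_ppc_eieio();
}
```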
Reviewed By: nemanjai, #powerpc
Differential Revision: https://reviews.llvm.org/D102443
Ghashing is probably going to be faster in most cases, even without
precomputed ghashes in object files.
Here is my table of results linking clang.pdb:
---------------------------------
| threads |  GHASH   | NOGHASH  |
---------------------------------
|   j1    | 51.031s  | 25.141s  |
|   j2    | 31.079s  | 22.109s  |
|   j4    | 18.609s  | 23.156s  |
|   j8    | 11.938s  | 21.984s  |
|   j28   |  8.375s  | 18.391s  |
---------------------------------
This shows that ghashing is faster if at least four cores are available.
This may make the linker slower if most cores are busy in the middle of
a build, but in that case, the linker probably isn't on the critical
path of the build. Incremental build performance is arguably more
important than highly contended batch build link performance.
The -time output indicates that ghash computation is the dominant
factor:
  Input File Reading:            924 ms (  1.8%)
  GC:                            689 ms (  1.3%)
  ICF:                           527 ms (  1.0%)
  Code Layout:                   414 ms (  0.8%)
  Commit Output File:             24 ms (  0.0%)
  PDB Emission (Cumulative):   49938 ms ( 94.8%)
    Add Objects:               46783 ms ( 88.8%)
      Global Type Hashing:     38983 ms ( 74.0%)
      GHash Type Merging:       5640 ms ( 10.7%)
      Symbol Merging:           2154 ms (  4.1%)
    Publics Stream Layout:       188 ms (  0.4%)
    TPI Stream Layout:            18 ms (  0.0%)
    Commit to Disk:             2818 ms (  5.4%)
--------------------------------------------------
Total Link Time:               52669 ms (100.0%)
We can speed that up with a faster content hash (not SHA1).
Differential Revision: https://reviews.llvm.org/D102888
This allows checking whether a given operation may modify, reference, or both modify and reference a given value. Right now this API is limited to Value-based memory locations, but we should expand it to include attribute-based values at some point. This is left for future work because the rest of the AliasAnalysis API has the same restriction.
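A usage sketch (hedged; method names per this revision's additions to the AliasAnalysis API):
```
#include "mlir/Analysis/AliasAnalysis.h"

// Ask whether `op` may modify and/or reference the memory behind the
// Value-based location `loc`.
bool mayWriteTo(mlir::AliasAnalysis &aa, mlir::Operation *op,
                mlir::Value loc) {
  mlir::ModRefResult result = aa.getModRef(op, loc);
  return result.isMod(); // also true when the op both modifies and references
}
```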
Differential Revision: https://reviews.llvm.org/D101673
These actually can be automatically imported from another DLL. (This
works properly as long as the actual implementation of emutls is
linked dynamically from e.g. libgcc; if the implementation comes from
compiler-rt or a statically linked libgcc, it doesn't work as intended.)
This fixes PR50146 and https://github.com/msys2/MINGW-packages/issues/8706
(fixing calls to std::call_once in a dynamically linked libstdc++);
since f731839584, the dso_local attribute on the TLS variable has
affected the actual code generated for accessing the emutls variable.
The dso_local attribute on the emutls variable made those accesses
use 32 bit relative addressing in code, which requires runtime pseudo
relocations in the text section, and breaks entirely if the variable
in the other module ends up loaded too far away in the virtual
address space.
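The failing pattern from the linked reports boils down to this sketch:
```
// std::call_once in a dynamically linked libstdc++ touches an emutls
// variable imported from another DLL; marking it dso_local produced
// 32 bit relative accesses that can't reach the other module.
#include <mutex>

std::once_flag flag;
void init();

void ensure_init() {
  std::call_once(flag, init);
}
```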
Differential Revision: https://reviews.llvm.org/D102970
I discovered while merging __builtin_sycl_unique_stable_name into my
downstream that it is actually possible for the cc1 invocation to have
more than one Sema instance: if you pass it multiple input files, each
gets its own Sema instance and thus its own ASTContext instance. The
result was that the call to filter the SYCL kernels was using an
ItaniumMangleContext stored via a 'magic static', so it held an invalid
reference to the first ASTContext when processing the 2nd input file.
The failure is unfortunately flaky/transient, but the test that exposed
it was added anyway.
The magic-static was switched to a unique_ptr member variable in
ASTContext that is initialized when needed.
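A sketch of the shape of that fix (member handling and names illustrative):
```
#include "clang/AST/ASTContext.h"
#include "clang/AST/Mangle.h"
#include <memory>

// Illustrative: the mangler becomes a lazily created per-ASTContext
// member instead of a function-local "magic static", so each
// Sema/ASTContext pair gets its own instance. `Cache` stands in for the
// unique_ptr member added to ASTContext.
clang::ItaniumMangleContext &
getSYCLMangler(clang::ASTContext &Ctx,
               std::unique_ptr<clang::ItaniumMangleContext> &Cache) {
  if (!Cache)
    Cache.reset(clang::ItaniumMangleContext::create(Ctx, Ctx.getDiagnostics()));
  return *Cache;
}
```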
Now that I've reimplemented the testcase generator
to produce actual IR (https://godbolt.org/z/s7PM8E6v9),
it turns out that this was the only discrepancy
from what the LV would produce.
This patch fixes an oversight in https://reviews.llvm.org/D96181:
stripDebugInfo() now also takes into account loop metadata pointing to
other MDNodes that point into the debug info.
rdar://78487175
Differential Revision: https://reviews.llvm.org/D103220
This is a reference-counted class, but it uses different methods for
reference counting, and the checker doesn't understand them yet.
Differential Revision: https://reviews.llvm.org/D103081
Now that LLDB proper has built-in support for intel-pt traces, we can remove the old plugin written by Intel. It has fewer features and is hard to work with.
As a test, I ran "ninja lldbIntelFeatures" and it worked.
Differential Revision: https://reviews.llvm.org/D102866
Mark the `ELFRelocationEntry::dump` method as `LLVM_DUMP_METHOD` to
properly annotate it as used, preventing the function from being dead
stripped away. This allows use of `dump` in the debugger. This is
purely to improve the developer experience.
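The pattern looks like this (struct and body illustrative, not the real entry):
```
#include "llvm/Support/Compiler.h"
#include "llvm/Support/raw_ostream.h"

// LLVM_DUMP_METHOD expands to "used" (and noinline) annotations, so the
// definition survives dead stripping and stays callable from a debugger.
struct RelocationEntryLike {
  LLVM_DUMP_METHOD void dump() const { llvm::errs() << "<reloc entry>\n"; }
};
```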
As determined from llvm-mca analysis (btver2 vs bdver2 vs sandybridge), the split+extends+concat sequence on AVX1-capable targets is cheaper than the number of ops that the cost was previously based on.
This can help avoid needing a virtual register for the vsetvli output
when the AVL is X0. For other register AVLs it can shorten the live
range of the AVL register if it isn't needed later.
There's probably no advantage when the AVL is a 5-bit immediate that
can use vsetivli, but do it anyway for consistency.
Reviewed By: rogfer01
Differential Revision: https://reviews.llvm.org/D103215
Depends On D103109
If any of the tokens/values added to the `!async.group` switches to the error state, then the group itself switches to the error state.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D103203
We could previously do this by accident through the later call to
getTargetConstantBitsFromNode, I think, but that only worked if N0 had
a single use. This patch makes it explicit for undef and doesn't
require a use count check.
I think this is needed to move the (shl X, 1)->(add X, X)
fold to isel for PR50468. We need to be sure X won't be IMPLICIT_DEF
which might prevent the same vreg from being used for both operands.
Differential Revision: https://reviews.llvm.org/D103192
Depends On D103102
Not yet implemented:
1. Error handling after synchronous await
2. Error handling for async groups
These will be addressed in follow-up PRs.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D103109
Like other sanitizers, enable __has_feature(coverage_sanitizer) if clang
has enabled at least one SanitizerCoverage instrumentation type.
Because coverage instrumentation selection is not handled via normal
-fsanitize= (and thus not in SanitizeSet), passing this information
through to LangOptions required propagating the already parsed
-fsanitize-coverage= options from CodeGenOptions through to LangOptions
in FixupInvocation().
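Usage mirrors the other sanitizer feature checks:
```
#include <cstdint>

// Guard SanitizerCoverage-specific declarations on the new feature check;
// the callback shown is the documented pc-guard hook.
#if __has_feature(coverage_sanitizer)
extern "C" void __sanitizer_cov_trace_pc_guard(uint32_t *guard);
#endif
```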
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D103159
Support reference-counted values implicitly passed (live) only to some of the successors.
Example: if control branches to ^bb2, the token will leak unless a `drop_ref` operation is properly created:
```
^entry:
  %token = async.runtime.create : !async.token
  cond_br %cond, ^bb1, ^bb2
^bb1:
  async.runtime.await %token
  async.runtime.drop_ref %token
  br ^bb2
^bb2:
  return
```
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D103102
This patch changes LoopUnrollAndJamPass from a FunctionPass to a LoopNest pass.
The next patch will utilize LoopNest to effectively handle loop nests.
Reviewed By: Whitney
Differential Revision: https://reviews.llvm.org/D99149
As discussed in PR50385, strict-fp on PowerPC SPE has not been handled
well. This patch disables it by default for SPE.
Reviewed By: nemanjai, vit9696, jhibbits
Differential Revision: https://reviews.llvm.org/D103235
In order to allow large matmul operations using the MMA ops, we need to
chain operations; this is not possible unless the "DOp" and "COp" types
have matching layouts. Therefore, remove the "DOp" layout and force the
accumulator and result types to match.
Added a test for the case where the MMA value is accumulated.
Differential Revision: https://reviews.llvm.org/D103023
Adjusting the load register type is a widenScalar type action, not a
lowering. lowerLoad should be reserved for operations that change the
memory access size, such as unaligned load decomposition. With lowerLoad
trying to adjust the register type, it was hard to avoid infinite
loops in the legalizer. This adds a band-aid to avoid regressing a few
AArch64 tests, but I'm not sure what the exact condition is and
there's probably a cleaner way to do this.
For AMDGPU this regresses handling of some cases for unaligned loads,
but the way this is currently working is a pretty ugly hack.
We are going to have libc++abi.a and libunwind.a on AIX.
Add the necessary linking command to pick the libraries up.
Reviewed By: daltenty
Differential Revision: https://reviews.llvm.org/D102813
Similar to how we allow managed and asserted locks to be held and not
held in joining branches, we also allow them to be held shared and
exclusive. The scoped lock should restore the original state at the end
of the scope in any event, and asserted locks need not be released.
We should probably only allow asserted locks to be subsumed by managed,
not by (directly) acquired locks, but that's for another change.
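For example, this join is now accepted (a sketch using the documented capability attributes; the mutex class and names are illustrative):
```
// The join point below holds the capability exclusively on one path and
// shared on the other; since asserted locks need not be released, the
// analysis now accepts it, and reading `data` only needs shared access.
struct __attribute__((capability("mutex"))) Mutex {
  void AssertHeld() __attribute__((assert_capability(this)));
  void AssertReaderHeld() __attribute__((assert_shared_capability(this)));
};

Mutex mu;
int data __attribute__((guarded_by(mu)));

int read_data(bool exclusive) {
  if (exclusive)
    mu.AssertHeld();       // asserted exclusive on this path
  else
    mu.AssertReaderHeld(); // asserted shared on this path
  return data;             // join point: shared access suffices
}
```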
Reviewed By: delesley
Differential Revision: https://reviews.llvm.org/D102026