A recent refactoring introduced a symbol index argument for the
`getFullSymbolName` method, which is only used for reporting error
messages about invalid extended symbol indexes.
There are a few issues in the implementation, and we currently don't
report correct symbol indices when dumping MIPS GOT/PLT entries.
This patch adds test cases and fixes these issues.
Differential revision: https://reviews.llvm.org/D88089
Currently `--relocations` ignores section symbol names and always prints
section names for them. This is inconsistent with GNU readelf and with `--symbols`.
We have code in `getFullSymbolName` (which is used for `--symbols`) that can be
reused for `getRelocationTarget` (used for `--relocations`).
With that, the described issue is fixed and the code becomes a bit shorter.
With this change we also start to print more relocations (in situations
where we previously only showed warnings) and to report more diagnostic
warnings (see reloc-zero-name-or-value.test).
Differential revision: https://reviews.llvm.org/D87613
When memory operations are outstanding on function calls, either the
caller or the callee can insert a waitcnt to ensure that all reads are
finished.
Calls need some time to be executed, so if the callee inserts the
waitcnt, filling the instruction buffer and waiting for memory will be
interleaved, hiding some latency. This comes at the cost of having a
waitcnt inside functions that may not be needed as no memory operations
are outstanding.
For function calls, this is already implemented. The same principle
applies to returns: if the caller inserts a waitcnt after the call, the
callee does not have to wait, and the return and memory operation can
run in parallel.
This commit implements waiting in the caller after returning from a
function call.
Differential Revision: https://reviews.llvm.org/D87674
This:
1) Replaces pointers with references in many places.
2) Adds a few TODOs about fixing possible unhandled errors (in ARMEHABIPrinter.h).
3) Replaces `auto`s with actual types.
4) Removes excessive arguments.
5) Adds `const ELFFile<ELFT> &Obj;` member to `ELFDumper` to simplify the code.
Differential revision: https://reviews.llvm.org/D88097
The signature should not be part of the summaries, as many FIXME comments
suggest. By separating the signature, we open the way to a generic
matching implementation which could later be used under the hood of
CallDescriptionMap.
Differential Revision: https://reviews.llvm.org/D88100
It is no longer needed to add summaries of 'getline' for different
possible underlying types of ssize_t. We can simply look up the type.
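A minimal sketch of what such a lookup can look like (the helper name
and its details are illustrative, not the checker's exact code):
resolve a typedef such as ssize_t from the AST instead of enumerating
its candidate underlying integer types.

  #include "clang/AST/ASTContext.h"
  #include "clang/AST/Decl.h"
  #include "llvm/ADT/Optional.h"
  #include "llvm/ADT/StringRef.h"

  using namespace clang;

  // Hypothetical helper: find the type of a top-level typedef by name.
  static llvm::Optional<QualType> lookupTy(StringRef Name,
                                           const ASTContext &ACtx) {
    IdentifierInfo &II = ACtx.Idents.get(Name);
    for (const Decl *D : ACtx.getTranslationUnitDecl()->lookup(&II))
      if (const auto *TD = dyn_cast<TypedefNameDecl>(D))
        return ACtx.getTypedefType(TD);
    return llvm::None; // the type is not declared in this TU
  }

A single summary for 'getline' can then use the looked-up type for its
ssize_t parts instead of needing one summary per candidate integer type.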
Differential Revision: https://reviews.llvm.org/D88092
An existing function Type::getScalarSizeInBits returns a uint64_t
instead of a TypeSize class because the caller is requesting a
scalar size, which cannot be scalable. This patch makes other
similar functions requesting a scalar size consistent with that,
thereby eliminating more than 1000 implicit TypeSize -> uint64_t
casts.
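To illustrate the distinction (a standalone sketch; the function and
variable names are made up):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/DerivedTypes.h"

  using namespace llvm;

  void sizes(const DataLayout &DL, VectorType *VecTy) {
    // The element size of a vector is never scalable, so a plain
    // integer return type is correct here.
    uint64_t ElemBits = VecTy->getScalarSizeInBits();

    // The size of the whole vector may be scalable (e.g. for
    // <vscale x 4 x i32>), so it must remain a TypeSize.
    TypeSize VecBits = DL.getTypeSizeInBits(VecTy);

    // Writing `uint64_t Bits = DL.getTypeSizeInBits(VecTy);` instead
    // would be exactly the kind of implicit TypeSize -> uint64_t cast
    // this patch eliminates; it is invalid for scalable vectors.
    (void)ElemBits;
    (void)VecBits;
  }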
Differential revision: https://reviews.llvm.org/D87889
This reverts commit fdc41e11f9. It causes the
libcxx/modules/stds_include.sh.cpp test to fail with:
libcxx/include/ostream:1039:45: error: no template named 'enable_if_t'; did you mean 'enable_if'?
template <class _Stream, class _Tp, class = enable_if_t<
Still investigating what's causing this and reverting in the meantime to get
the bots green again.
In this patch I've fixed some warnings that arose from the implicit
cast of TypeSize -> uint64_t. I tried writing a variety of different
cases to show how this optimisation might work for scalable vectors
and found:
1. The optimisation does not work for cases where the cast type
is scalable and the allocated type is not. This is because we need to
know how many times the cast type fits into the allocated type.
2. If we pass all the various checks for the case when the allocated
type is scalable and the cast type is not, then when creating the
new alloca we have to take vscale into account. This leads to
sub-optimal IR that is worse than the original IR.
3. For the remaining case when both the alloca and cast types are
scalable it is hard to find examples where the optimisation would
kick in, except for simple bitcasts, because we typically fail the
ABI alignment checks.
For now I've changed the code to bail out if only one of the alloca
and cast types is scalable. This means we continue to support the
existing cases where both types are fixed, and also the specific case
when both types are scalable with the same size and alignment, for
example a simple bitcast of an alloca to another type.
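A minimal sketch of that bail-out (the standalone function is
illustrative; the actual patch places the check inside the existing
InstCombine logic):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Type.h"

  using namespace llvm;

  // Bail out when exactly one of the two types is scalable: the number
  // of times the cast type fits into the allocated type is then
  // unknown at compile time.
  static bool mixedScalability(const DataLayout &DL, Type *AllocElTy,
                               Type *CastElTy) {
    TypeSize AllocSize = DL.getTypeAllocSize(AllocElTy);
    TypeSize CastSize = DL.getTypeAllocSize(CastElTy);
    return AllocSize.isScalable() != CastSize.isScalable();
  }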
I've added tests that show we don't attempt to promote the alloca,
except for simple bitcasts:
Transforms/InstCombine/AArch64/sve-cast-of-alloc.ll
Differential revision: https://reviews.llvm.org/D87378
Fix incorrect merges of m0 inits in loops.
It was assumed that if a clobbering instruction appears in
the same block as an init and the clobbering instruction
does not dominate the init, then it does not interfere with
the init.
This does not work in the presence of loops, where in this
scenario, the clobbering instruction does interfere with
the init in another iteration.
To fix this, do not check for block equality and defer the
decision to the predecessor check.
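A hedged sketch of the corrected logic (the function and helper names
are hypothetical; the real change lives in the AMDGPU m0-init merging
code):

  #include "llvm/CodeGen/MachineDominators.h"
  #include "llvm/CodeGen/MachineInstr.h"

  using namespace llvm;

  // Hypothetical helper: walk predecessors to decide reachability.
  bool clobberReachesInitViaPreds(const MachineInstr &Clobber,
                                  const MachineInstr &Init);

  // Old logic (wrong): if Clobber and Init share a block and Clobber
  // does not dominate Init, assume no interference. In a loop:
  //
  //   bb.loop:
  //     <init m0>
  //     ...
  //     <clobber m0>  ; after the init, so it does not dominate it,
  //     ; back edge   ; yet it clobbers m0 for the next iteration
  //
  // New logic: skip the same-block shortcut and always defer to the
  // predecessor-based check.
  bool clobberMayInterfere(const MachineInstr &Clobber,
                           const MachineInstr &Init,
                           const MachineDominatorTree &MDT) {
    if (MDT.dominates(&Clobber, &Init))
      return true; // the clobber always executes before the init
    return clobberReachesInitViaPreds(Clobber, Init);
  }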
Differential Revision: https://reviews.llvm.org/D87882
A sequence of two reshapes such that one of them is just adding unit
extent dims can be folded to a single reshape.
Differential Revision: https://reviews.llvm.org/D88057
In practice, this only gives modest savings (for a 6.5 MB DLL with
230 KB xdata, the xdata section shrinks by around 2.5 KB); to
gain more, the frame lowering would need to be tweaked to more often
generate frame layouts that match the canonical layouts that can
be written in packed form.
Differential Revision: https://reviews.llvm.org/D87371
Implements glibc-like wrappers over Linux syscalls.
[3/11] patch series to port ASAN for riscv64
Depends On D87998
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D87572
Some of the predicates can't always be decided - for example when a type
definition isn't available. At the same time it's necessary to let
client code decide what to do about such cases - specifically we can't
just use true or false values, as there are callees with
conflicting strategies for how to handle this.
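A hedged sketch of the pattern (the predicate and field names are
invented for illustration): returning llvm::Optional<bool> lets a
predicate signal "undecidable" and leaves the policy to each caller.

  #include "clang/AST/Decl.h"
  #include "clang/AST/Type.h"
  #include "llvm/ADT/Optional.h"

  using namespace clang;

  // Invented example predicate: is QT a record declaring a given
  // field? Without a visible definition the question cannot be
  // answered, so the result is None rather than a misleading bool.
  llvm::Optional<bool> recordHasField(QualType QT, StringRef Field) {
    const RecordDecl *RD = QT->getAsRecordDecl();
    if (!RD || !RD->getDefinition())
      return llvm::None; // no definition available: undecidable
    for (const FieldDecl *FD : RD->getDefinition()->fields())
      if (FD->getName() == Field)
        return true;
    return false;
  }

  // Each caller then picks its own fallback, e.g.:
  //   recordHasField(QT, "x").getValueOr(false)  // treat unknown as no
  //   recordHasField(QT, "x").getValueOr(true)   // treat unknown as yes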
This is a speculative fix for PR47276.
Differential Revision: https://reviews.llvm.org/D88133
Commit https://reviews.llvm.org/rG144e57fc9535 added this test
case that creates message queues but does not remove them. The
message queues subsequently build up on the machine until the
system wide limit is reached. This has caused failures for a
number of bots running on a couple of big PPC machines.
This patch just adds the missing cleanup.
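Assuming the test uses System V message queues (the system-wide limit
behaviour suggests so), the missing cleanup is of this form (a hedged
sketch, not the test's actual code):

  #include <sys/ipc.h>
  #include <sys/msg.h>

  int main() {
    // Create a queue, as the test does.
    int QueueId = msgget(IPC_PRIVATE, 0600 | IPC_CREAT);
    if (QueueId == -1)
      return 1;

    // ... exercise the queue ...

    // The missing cleanup: remove the queue so it does not persist
    // after the process exits and accumulate toward the system limit.
    msgctl(QueueId, IPC_RMID, nullptr);
    return 0;
  }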
When performing ThinLTO importing, the metadata loader attempts to lazy
load by building an index. However, module-level global decl attachment
metadata was being parsed early while building the index, since the
associated (module level) global values aren't materialized on demand.
This results in the creation of forward reference temporary metadatas,
which are expensive.
Normally, these module level global values don't have much attached
metadata. However, in the case of -fwhole-program-vtables (e.g. for
whole program devirtualization), the vtables may have many attached type
metadata nodes. This was resulting in very slow performance when performing
ThinLTO importing with the default lazy loading.
This patch restructures the handling of these global decl attachment
records, delaying their parsing until after the lazy loading index has
been built. Then the parser can use the interface that loads from the
index, which resolves forward references immediately instead of creating
expensive temporaries.
For one ThinLTO backend that imports from modules containing huge
numbers of vtables and associated types, I measured the following
compile times for the metadata materialization during function
importing, rounded to nearest second:
No -fwhole-program-vtables:
  Lazy loading on (head):  1s
  Lazy loading off (head): 3s
  Lazy loading on (patch): 1s

With -fwhole-program-vtables:
  Lazy loading on (head):  440s
  Lazy loading off (head): 4s
  Lazy loading on (patch): 2s
Differential Revision: https://reviews.llvm.org/D87970
The word "target" is overloaded, so lighten its load by using another word to denote the symbol or section to which a reloc points. While more stilted than "target", "referent" is rather less pompous than "designatum" or "denotatum". :P
Along the way, make a few neighboring variable names more descriptive.
Reviewed By: #lld-macho, int3
Differential Revision: https://reviews.llvm.org/D87584
-debug-pass is a legacy PM only option.
Some tests check that the pass returned that it made a change,
which is not relevant to the NPM, since passes return PreservedAnalyses.
Some tests check that passes are freed at the proper time, which is also
not relevant to the NPM.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D87945
The IRSimilarityCandidate is a container to hold a region of
IRInstructions and offer interfaces for the starting instruction, ending
instruction, parent function, and length. It also assigns a global value
number for each unique instance of a value in the region.
It also contains an interface to compare two IRSimilarityCandidates as
to whether they have the same sequence of similar instructions.
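A hedged sketch of how such a candidate might be used (the accessor and
comparison names are assumptions based on the description above, not
verified API):

  #include "llvm/Analysis/IRSimilarityIdentifier.h"

  using namespace llvm;
  using namespace llvm::IRSimilarity;

  // Assumed usage: each candidate wraps a region of instruction data
  // and exposes its extent; two candidates can be compared for
  // structural similarity.
  void compareRegions(const IRSimilarityCandidate &A,
                      const IRSimilarityCandidate &B) {
    unsigned Len = A.getLength(); // instructions in the region
    bool Similar = IRSimilarityCandidate::isSimilar(A, B);
    (void)Len;
    (void)Similar;
  }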
Tests for whether the instructions are similar are found in
unittests/Analysis/IRSimilarityIdentifierTest.cpp.
Differential Revision: https://reviews.llvm.org/D86970
The rendered HTML was (no hyperlink was generated):
  (see Getting Started <GettingStarted.html#git-pre-push-hook>)
Now, it is (with proper hyperlink):
  (see Git pre-push hook)
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D88116
A conversion from `pow` to `sqrt` shall not call an `errno`-setting
`sqrt` with -infinity: the `sqrt` will set `EDOM` where the `pow`
call need not.
This patch avoids the erroneous (pun not intended) transformation by
applying the restrictions discussed in the thread for
https://lists.llvm.org/pipermail/llvm-dev/2020-September/145051.html.
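To see why the restriction is needed, this standalone snippet (not part
of the patch) contrasts the two calls' C-standard behaviour for
-infinity:

  #include <cerrno>
  #include <cmath>
  #include <cstdio>
  #include <limits>

  int main() {
    const double NegInf = -std::numeric_limits<double>::infinity();

    errno = 0;
    double P = std::pow(NegInf, 0.5); // +inf, no domain error (Annex F)
    std::printf("pow(-inf, 0.5) = %f, errno = %d\n", P, errno);

    errno = 0;
    double S = std::sqrt(NegInf);     // NaN, errno set to EDOM
    std::printf("sqrt(-inf)     = %f, errno = %d\n", S, errno);
    return 0;
  }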
The existing tests are updated (depending on emphasis in the checks for
library calls, avoidance of overlap, and overall coverage):
- to add `ninf`, retaining the intended library call,
- to use the intrinsic, retaining the use of `select`, or
- to expect the replacement to not occur.
The following is tested:
- The pow intrinsic folds to a `select` instruction to
handle -infinity.
- The pow library call folds, with `ninf`, to `sqrt` without the
`select` instruction associated with handling -infinity.
- The pow library call does not fold to `sqrt` without `ninf`.
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D87877
The motivation here is that MachineBlockPlacement relies on analyzeBranch to remove branches to fallthrough blocks when the branch is not fully analyzable. With the introduction of the FAULTING_OP pseudo for implicit null checking (see D87861), this case becomes important. Note that it's hard to otherwise exercise this path, as BranchFolding handles any fully analyzable branch sequence without using this interface.
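For context, the interface in question is TargetInstrInfo::analyzeBranch;
a hedged sketch of the contract this relies on (the calling code here is
illustrative):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/TargetInstrInfo.h"

  using namespace llvm;

  void sketch(const TargetInstrInfo &TII, MachineBasicBlock &MBB) {
    MachineBasicBlock *TBB = nullptr; // taken destination, if analyzable
    MachineBasicBlock *FBB = nullptr; // fallthrough destination
    SmallVector<MachineOperand, 4> Cond;

    // With AllowModify=true the target may simplify the block's
    // terminators even when it ultimately returns true ("cannot fully
    // analyze"); per this patch, that includes removing an
    // unconditional branch to the layout successor.
    bool CannotAnalyze = TII.analyzeBranch(MBB, TBB, FBB, Cond,
                                           /*AllowModify=*/true);
    (void)CannotAnalyze;
  }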
p.s. For anyone who saw my comment in the original review, what I thought was an issue in BranchFolding originally turned out to simply be a bug in my patch. (Now fixed.)
Differential Revision: https://reviews.llvm.org/D88035