This commit adds a new plugin, GDBJITDebugInfoRegistrationPlugin, that checks
for objects containing debug info and registers any debug info found via the
GDB JIT registration API.
To enable this registration without redundantly representing non-debug sections,
this plugin synthesizes a new embedded object within a section of the LinkGraph.
An allocation action is used to make the registration call.
Currently MachO only. ELF users can still use the DebugObjectManagerPlugin. The
two are likely to be merged in the near future.
The if-check above the deleted code guarantees that StoreOffset <= LoadOffset
and that StoreOffset + StoreSize >= LoadOffset + LoadSize. Given that
LoadOffset + LoadSize > LoadOffset whenever LoadSize > 0, it follows that
StoreOffset + StoreSize > LoadOffset is guaranteed for any LoadSize > 0.
Since a type with a nonpositive size would be meaningless, the check can be
removed.
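Spelled out, the reasoning above is just the following inequality chain:
```
StoreOffset <= LoadOffset                           (from the if-check)
StoreOffset + StoreSize >= LoadOffset + LoadSize    (from the if-check)
LoadOffset + LoadSize > LoadOffset                  (whenever LoadSize > 0)
=> StoreOffset + StoreSize >= LoadOffset + LoadSize > LoadOffset
```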
Part of revision D100179
Reviewed By: nikic
This reverts commit f0cf544d6f.
Just a small change to fix:
```
/home/buildbot/as-builder-4/llvm-clang-x86_64-expensive-checks-ubuntu/llvm-project/llvm/lib/Support/VirtualFileSystem.cpp: In static member function ‘static llvm::ErrorOr<std::unique_ptr<llvm::vfs::File> > llvm::vfs::File::getWithPath(llvm::ErrorOr<std::unique_ptr<llvm::vfs::File> >, const llvm::Twine&)’:
/home/buildbot/as-builder-4/llvm-clang-x86_64-expensive-checks-ubuntu/llvm-project/llvm/lib/Support/VirtualFileSystem.cpp:2084:10: error: could not convert ‘F’ from ‘std::unique_ptr<llvm::vfs::File>’ to ‘llvm::ErrorOr<std::unique_ptr<llvm::vfs::File> >’
return F;
^
```
Differential Revision: https://reviews.llvm.org/D113832
c17d9b4b12 added REQUIRES lines to a lot of Arm and AArch64
tests, but added them at the very beginning, before the existing
update_cc_test_checks lines. This just moves them later so as to not
mess up the existing ordering when the checks are regenerated.
```
/work/omp-vega20-0/openmp-offload-amdgpu-runtime/llvm.src/llvm/lib/Support/VirtualFileSystem.cpp: In static member function 'static llvm::ErrorOr<std::unique_ptr<llvm::vfs::File> > llvm::vfs::File::getWithPath(llvm::ErrorOr<std::unique_ptr<llvm::vfs::File> >, const llvm::Twine&)':
/work/omp-vega20-0/openmp-offload-amdgpu-runtime/llvm.src/llvm/lib/Support/VirtualFileSystem.cpp:2084:10: error: could not convert 'F' from 'std::unique_ptr<llvm::vfs::File>' to 'llvm::ErrorOr<std::unique_ptr<llvm::vfs::File> >'
return F;
^
```
This reverts commit c972175649.
This is a follow-up to 0be9ca7c0f: when falling back to the external file
system, paths now keep their original format (preserving relative paths), and
the external filesystem is allowed to canonicalize them if needed.
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D109128
This patch rewrites checks in a few debug info tests to avoid using
'CHECK-NOT: {{DW_TAG|NULL}}'. It proposes `--implicit-check-not=DW_TAG`
instead, as it makes the checks clearer, and easier to analyze and update.
Differential Revision: https://reviews.llvm.org/D113652
For AVX512 targets without VLX, we have to widen 128/256-bit vectors to 512 bits to use some specific AVX512 instructions (or some other instructions with predicates etc.).
I've pulled out the widening code from LowerFunnelShift into a helper function, so we can convert some other widening patterns in the future.
We used to mmap C++ shadow stack as part of the trace region
before ed7f3f5bc9 ("tsan: move shadow stack into ThreadState"),
which moved the shadow stack into TLS. This started causing
timeouts and OOMs on some of our internal tests that repeatedly
create and destroy thousands of threads.
Allocate C++ shadow stack with mmap and small pages again.
This prevents the observed timeouts and OOMs.
But we now need to be more careful with interceptors that
run after thread finalization because FuncEntry/Exit and
TraceAddEvent all need the shadow stack.
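A minimal sketch of what "allocate with mmap and small pages" means here, assuming a plain anonymous mapping (the helper name and flags are illustrative, not tsan's actual code):
```
#include <sys/mman.h>
#include <cstddef>

// Illustrative only: map a dedicated, small-page region for the shadow stack.
void *AllocShadowStack(size_t Size) {
  void *P = mmap(nullptr, Size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  return P == MAP_FAILED ? nullptr : P;
}
```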
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D113786
Only search within the requested section, and allow one-past-the-end addresses.
This is needed to support section-end-address references to sections with no
symbols in them.
Change FileError to pass through the error code from the Error it wraps.
This allows APIs that return ECError to transition to FileError without
changing returned std::error_code.
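A minimal sketch of the intended behaviour, assuming a made-up file name and error code purely for illustration:
```
#include "llvm/Support/Error.h"
#include <system_error>
using namespace llvm;

std::error_code roundTrip() {
  // Wrap an errc-based error in a FileError...
  Error E = errorCodeToError(
      std::make_error_code(std::errc::no_such_file_or_directory));
  Error Wrapped = createFileError("foo.txt", std::move(E));
  // ...and convert back: with this change the original error code is
  // preserved instead of FileError's generic inconvertible code.
  return errorToErrorCode(std::move(Wrapped));
}
```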
This was extracted from https://reviews.llvm.org/D109345.
Differential Revision: https://reviews.llvm.org/D113225
Fix the const-ness of `iterator_facade_base::operator->` and
`iterator_facade_base::operator[]`. This is a follow-up to
1b651be046, which fixed const-ness of
various iterator adaptors.
Iterators, like the pointers that they generalize, have two types of
`const`.
- The `const` qualifier on members indicates whether the iterator
itself can be changed. This is analogous to `int *const`.
- The `const` qualifier on return values of `operator*()`,
`operator[]()`, and `operator->()` controls whether the
pointed-to value can be changed. This is analogous to `const int*`.
If an iterator facade returns a handle to its own state, then T (and
PointerT and ReferenceT) should usually be const-qualified. Otherwise,
if clients are expected to modify the state itself, the field can be
declared mutable or a const_cast can be used.
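A minimal standalone sketch of the distinction (plain pointers, no LLVM types):
```
void constKinds() {
  int Value = 0;
  int *const P1 = &Value; // P1 itself cannot be reseated...
  *P1 = 1;                // ...but the pointee can still be modified.
  const int *P2 = &Value; // the pointee cannot be modified through P2...
  P2 = nullptr;           // ...but P2 itself can change.
}
```
Instantiating iterator_facade_base with `T = const ValueType` gives the second
kind of `const`: `operator*()` and `operator->()` then hand out const
references/pointers.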
VarStreamArrayIterator returns a reference to a just-computed internal
value. Change it to iterate over `const ValueType` to avoid allowing
clients to mutate the internal state, and to drop the
non-`const`-qualified operator*().
The removed operator*() was from 175d70ee5c to get
iterator_facade_base::operator->() working, and this fixes the root
cause instead: setting `T` to `const ValueType` causes
iterator_facade_base to infer `PointerT` as `const ValueType*`.
Ironically, this is the last blocker for removing the const-incorrect
overload of `iterator_facade_base::operator->()`, whose presence
triggered adding the workaround in the first place :).
Differential Revision: https://reviews.llvm.org/D113797
Switch to using config.root.native_target to determine if tests are
supported. This is a canonical form of the arch from the target
triple.
Reviewed By: lhames, DavidSpickett
Differential Revision: https://reviews.llvm.org/D110788
```
/Users/ksmiley/dev/llvm-project/lld/MachO/Symbols.cpp:43:27: warning: field 'external' will be initialized after field 'weakDefCanBeHidden' [-Wreorder-ctor]
weakDef(isWeakDef), external(isExternal),
^
1 warning generated.
```
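A hypothetical reduced example of the warning and the fix (not the actual lld class, just the shape of the problem):
```
struct S {
  bool weakDefCanBeHidden;
  bool external;
  // Members are initialized in declaration order, so listing 'external'
  // first triggers -Wreorder-ctor:
  //   S(bool W, bool E) : external(E), weakDefCanBeHidden(W) {}
  // Matching declaration order silences the warning:
  S(bool W, bool E) : weakDefCanBeHidden(W), external(E) {}
};
```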
Differential Revision: https://reviews.llvm.org/D113823
Autohide symbols behave similarly to private_extern symbols.
However, LD64 allows exporting autohide symbols. LLD currently does not.
This patch allows LLD to export them.
Differential Revision: https://reviews.llvm.org/D113167
This implements the following changes:
* AutoType retains sugared deduced-as-type.
* Template argument deduction machinery analyses the sugared type all the way
down. It would previously lose the sugar on first recursion.
* Undeduced AutoType will be properly canonicalized, including the constraint
template arguments.
* Remove the decltype node created from the decltype(auto) deduction.
As a result, we start seeing sugared types in a lot more test cases,
including some which showed very unfriendly `type-parameter-*-*` types.
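A hypothetical illustration of the kind of sugar that is now preserved (not a test case from the patch):
```
using Fruit = int;
Fruit pick() { return 42; }

auto A = pick();            // the deduced type can now be displayed as
decltype(auto) B = pick();  // 'Fruit' rather than only the canonical 'int'
                            // in diagnostics and AST dumps.
```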
Signed-off-by: Matheus Izvekov <mizvekov@gmail.com>
Reviewed By: rsmith
Differential Revision: https://reviews.llvm.org/D110216
(Split from D113167)
Benchmarking on one of our large apps, which exports a few thousand symbols,
this showed an improvement of ~17%.
```
x ./LLD_no_parallel.txt
+ ./LLD_with_parallel.txt
    N        Min        Max     Median        Avg     Stddev
x  10      84.01      89.41      88.64     87.693  1.7424061
+  10       71.9      74.29      72.63     72.753 0.77734663
Difference at 95.0% confidence
        -14.94 +/- 1.26763
        -17.0367% +/- 1.44553%
        (Student's t, pooled s = 1.34912)
(wallclock)
```
Differential Revision: https://reviews.llvm.org/D113820
Similar to how the other swift sections are registered by the ORC
runtime's macho platform, add the __swift5_types section, which contains
type metadata. Add a simple test demonstrating that the swift runtime
recognizes the registered types.
rdar://85358530
Differential Revision: https://reviews.llvm.org/D113811
Skip emitting ObjC category runtime metadata when the category is empty.
Currently, if we create a category in ObjC that is empty, we still emit
runtime metadata for that category. This is a scenario that could
commonly be run into when using __attribute__((objc_direct_members)),
which elides the need for much of the category metadata. This is
slightly wasteful and can be easily skipped by checking the category
metadata contents during CodeGen.
rdar://66177182
Differential Revision: https://reviews.llvm.org/D113455
Since glibc 2.34, dlsym does
1. malloc 1
2. malloc 2
3. free pointer from malloc 1
4. free pointer from malloc 2
This sequence was not handled by the trivial dlsym hack.
This fixes https://bugs.llvm.org/show_bug.cgi?id=52278
Reviewed By: eugenis, morehouse
Differential Revision: https://reviews.llvm.org/D112588
This is one of those wonderful "in theory X doesn't matter, but in practice it does" changes. In this particular case, we shift the IVs inserted by the runtime unroller to clamp the iteration count of the loops* from decrementing to incrementing.
Why does this matter? A couple of reasons:
* SCEV doesn't have a native subtract node. Instead, all subtracts (A - B) are represented as A + -1 * B, dropping any flags invalidated by that rewrite. As a result, SCEV is slightly less good at reasoning about edge cases involving decrementing addrecs than incrementing ones. (You can see this in the inferred flags in some of the test cases.)
* Other parts of the optimizer produce incrementing IVs, and they're common in idiomatic source language. We do have support for reversing IVs, but in general if we produce one of each, the pair will persist surprisingly far through the optimizer before being coalesced. (You can see this looking at nearby phis in the test cases.)
Note that if the hardware prefers decrementing (i.e. zero tested) loops, LSR should convert back immediately before codegen.
* Mostly irrelevant detail: The main loop of the prolog case is handled independently and will simply use the original IV with a changed start value. We could in theory use this scheme for all iteration clamping, but that's a larger and more invasive change.
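For reference, a sketch of the two IV shapes in source form (illustrative only, not the actual unroller output):
```
#include <cstdint>
void body(uint64_t);

// Decrementing clamp IV: SCEV models it as {Count,+,-1}, i.e. Count + (-1)*i.
void clampDown(uint64_t Count) {
  for (uint64_t N = Count; N != 0; --N)
    body(N);
}

// Incrementing clamp IV: SCEV models it as {0,+,1}, which matches the IVs
// most of the rest of the optimizer already produces.
void clampUp(uint64_t Count) {
  for (uint64_t I = 0; I != Count; ++I)
    body(I);
}
```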
The division by constant optimization often produces constants that
are uimm32, but not simm32. These constants require 3 or 4 instructions
to materialize without Zba.
Since these instructions are often used by a multiply with a LHS
that needs to be zero extended with an AND, we can switch the MUL
to a MULHU by shifting both inputs left by 32. Once we shift the
constant left, the upper 32 bits no longer need to be 0 so constant
materialization is free to use LUI+ADDIW. This reduces the constant
materialization from 4 instructions to 3 in some cases while also
reducing the zero extend of the LHS from 2 shifts to 1.
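The arithmetic behind the rewrite, as a small self-contained check (uses the __uint128_t extension; the values are arbitrary 32-bit examples, not taken from the patch):
```
#include <cassert>
#include <cstdint>

// High 64 bits of the full 128-bit product, i.e. what MULHU computes on RV64.
static uint64_t mulhu(uint64_t A, uint64_t B) {
  return (uint64_t)(((__uint128_t)A * B) >> 64);
}

int main() {
  uint64_t X = 0x12345678;  // 32-bit LHS that would otherwise need an AND
  uint64_t C = 0xdeadbeef;  // uimm32 but not simm32 constant
  // (X << 32) * (C << 32) == X*C * 2^64, so the high half is exactly X*C.
  assert(mulhu(X << 32, C << 32) == X * C);
}
```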
Differential Revision: https://reviews.llvm.org/D113805
Fix some confusion between the two types of `const` a pointer/iterator
can have. Users of a SwitchInst::CaseIterator should not (and do not!)
manually mutate the SwitchInst::CaseHandle that tracks its internal
state. Change operator*() to return `const CaseHandle&`, remove the
non-const-qualified operator*(), and const-qualify
CaseHandle::setValue() and CaseHandle::setSuccessor().
Differential Revision: https://reviews.llvm.org/D113788
Take advantage of class name injection to avoid redundantly specifying
template parameters of iterator adaptor/facade base classes.
No functionality change, although the private typedefs changed in a
couple of cases.
- Added a private typedef HashTableIterator::BaseT, following the
pattern from r207084 / 3478d4b164, to pre-emptively appease MSVC
(maybe it's not necessary anymore, but it looks like we do this
pretty consistently).
- Removed the private typedefs filter_iterator_impl::BaseT and
FilterIteratorTest::InputIterator::BaseT, since there was only one
use of each and the definition was no longer interesting.
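For reference, a tiny standalone sketch of the injected-class-name trick this relies on (made-up type names, not the actual ADT classes):
```
template <typename DerivedT, typename ValueT> struct facade_base {};

// For a non-template derived class, the base's injected class name is
// visible in the derived class, so the base can be named without repeating
// its template arguments:
struct my_iterator : facade_base<my_iterator, const int> {
  using BaseT = facade_base; // means facade_base<my_iterator, const int>
};
```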