Profiling revealed that the majority of lld's execution time on Windows was
spent opening and mapping input files. We can reduce this cost significantly
by performing these operations asynchronously.
This change introduces a queue for all operations on input file data. When
we discover that we need to load a file (for example, when we find a lazy
archive for an undefined symbol, or when we read a linker directive to
load a file from disk), the file operation is launched using a future and
the symbol resolution operation is enqueued. This implies another change
to symbol resolution semantics, but it seems to be harmless ("ninja All"
in Chromium still succeeds).
To measure the perf impact of this change I linked Chromium's chrome_child.dll
with both thin and fat archives.
Thin archives:
Before (median of 5 runs): 19.50s
After: 10.93s
Fat archives:
Before: 12.00s
After: 9.90s
On Linux I found that doing this asynchronously had a negative effect on
performance, probably because the cost of mapping a file is small enough that
it becomes outweighed by the cost of managing the futures. So on non-Windows
platforms I use the deferred execution strategy.
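A minimal sketch of the launch-policy split described above, assuming a hypothetical readWholeFile() helper in place of lld's actual open-and-map step:
```cpp
#include <fstream>
#include <future>
#include <iterator>
#include <string>

// Hypothetical stand-in for lld's open-and-map step.
static std::string readWholeFile(const std::string &Path) {
  std::ifstream In(Path, std::ios::binary);
  return std::string(std::istreambuf_iterator<char>(In),
                     std::istreambuf_iterator<char>());
}

std::future<std::string> loadFile(const std::string &Path) {
#ifdef _WIN32
  // Eager: overlap the expensive open/map with symbol resolution.
  auto Policy = std::launch::async;
#else
  // Deferred: runs on the first get(); avoids the future-management
  // overhead where mapping is already cheap.
  auto Policy = std::launch::deferred;
#endif
  return std::async(Policy, readWholeFile, Path);
}
```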
Differential Revision: https://reviews.llvm.org/D27768
llvm-svn: 289760
Inserting a new key into a DenseMap potentially invalidates iterators into that
map. This tries to fix an issue from r289755 that triggered this assertion:
Assertion `isHandleInSync() && "invalid iterator access!"' failed.
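For illustration, a minimal sketch of the hazard (not the code from r289755):
```cpp
#include "llvm/ADT/DenseMap.h"
using namespace llvm;

void example(DenseMap<int, int> &M) {
  auto It = M.find(0);
  M.insert({42, 1}); // may grow the table and invalidate It
  // Using It here trips the "invalid iterator access!" assertion in
  // checked builds; re-acquire the iterator after any insertion instead:
  It = M.find(0);
}
```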
llvm-svn: 289757
After r289755, the AssumptionCache is no longer needed. Variables affected by
assumptions are now found by using the new operand-bundle-based scheme. This
new scheme is more computationally efficient, and it also requires much less
code.
llvm-svn: 289756
There was an efficiency problem with how we processed @llvm.assume in
ValueTracking (and other places). The AssumptionCache tracked all of the
assumptions in a given function. In order to find assumptions relevant to
computing known bits, etc. we searched every assumption in the function. For
ValueTracking, that means that we did O(#assumes * #values) work in InstCombine
and other passes (with a constant factor that can be quite large because we'd
repeat this search at every level of recursion of the analysis).
Several of us discussed this situation at the last developers' meeting, and
this implements the discussed solution: Make the values that an assume might
affect operands of the assume itself. To avoid exposing this detail to
frontends and passes that need not worry about it, I've used the new
operand-bundle feature to add these extra call "operands" in a way that does
not affect the intrinsic's signature. I think this solution is relatively
clean. InstCombine adds these extra operands based on what ValueTracking, LVI,
etc. will need and then those passes need only search the users of the values
under consideration. This should fix the computational-complexity problem.
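A rough sketch of the resulting query pattern (hypothetical helper, not the exact ValueTracking code): because the affected values are now operands of the assume, the assume call shows up among the value's users, so no function-wide scan is needed.
```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Value.h"
using namespace llvm;

// Visit every assume that names V as one of its extra operands by
// walking only V's users.
static void forEachAssumeAffecting(const Value *V,
                                   function_ref<void(const CallInst &)> F) {
  for (const User *U : V->users())
    if (const auto *II = dyn_cast<IntrinsicInst>(U))
      if (II->getIntrinsicID() == Intrinsic::assume)
        F(*II);
}
```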
At this point, no passes depend on the AssumptionCache, and so I'll remove
that as a follow-up change.
Differential Revision: https://reviews.llvm.org/D27259
llvm-svn: 289755
constructs that can do so into the initialization code. This fixes a number
of different cases in which we used to fail to check for abstract types.
Thanks to Tim Shen for inspiring the weird code that uncovered this!
llvm-svn: 289753
BackendUtil.cpp uses llvm::SmallSet but did not include the header. It was
included indirectly, but this will change once the AssumptionCache is removed.
NFC.
llvm-svn: 289752
`SC` didn't make much sense. We don't seem to have a clear convention,
but `IS` sounds good here because it emphasizes that it is an input
section (this is one place in the code where we are dealing with both
input sections and output sections at the same time, so the extra
emphasis makes it a bit clearer).
llvm-svn: 289748
Most of the PowerPC64 code generation already creates PIC accesses. This
changes the default to full PIC, similar to what GCC does.
Overall, a monolithic clang binary shrinks by 600KB (about 1%). This can
be a slight regression for TLS access and will use the TOC more
aggressively instead of synthesizing immediates. It is expected to be
performance neutral.
Differential Revision: https://reviews.llvm.org/D26564
llvm-svn: 289744
Most of the PowerPC64 code generation for the ELF ABI is already PIC.
There are four main exceptions:
(1) Constant pointer arrays etc. should be in writable sections.
(2) The TOC restoration NOP after a call is needed for all global
symbols. While GNU ld has a workaround for questionable GCC self-calls,
we trigger the checks for calls from COMDAT sections as they cross input
sections and are therefore not considered self-calls. The current
decision is questionable and suboptimal, but outside the scope of the
change.
(3) TLS access cannot use the initial-exec model.
(4) Jump tables should use relative addresses. Note that the current
encoding doesn't work for the large code model, but it is more compact
than the default for any non-trivial jump table. Improving this is again
beyond the scope of this change.
At least (1) and (3) are assumptions made in target-independent code and
introducing additional hooks is a bit messy. Testing with clang shows
that a -fPIC binary is 600KB smaller than the corresponding -fno-pic
build. Separate testing shows that the improved jump table encodings
account for only about 100KB of that; the rest is expected to be a result
of more aggressive immediate forming for -fno-pic, where the -fPIC binary
just uses TOC entries.
This change brings the LLVM output in line with the GCC output; other
PPC64 compilers, like XLC on AIX, are known to produce PIC by default
as well. The relocation model can still be provided explicitly, e.g.
when using MCJIT.
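For example (a sketch, using the usual EngineBuilder API), an MCJIT client that wants the old behavior can still ask for it:
```cpp
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/CodeGen.h"
#include <memory>
using namespace llvm;

ExecutionEngine *createStaticEngine(std::unique_ptr<Module> M) {
  // Explicitly request the static relocation model instead of the
  // new PIC default.
  return EngineBuilder(std::move(M))
      .setRelocationModel(Reloc::Static)
      .create();
}
```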
One test case for case (1) is included, other test cases with relocation
mode sensitive behavior are wired to static for now. They will be
reviewed and adjusted separately.
Differential Revision: https://reviews.llvm.org/D26566
llvm-svn: 289743
I've chosen to remove NVPTXInstrInfo::CanTailMerge but not
NVPTXInstrInfo::isLoadInstr and isStoreInstr (which are also dead)
because while the latter two are reasonably useful utilities, the former
cannot be used safely: It relies on successful address space inference
to identify writes to shared memory, but addrspace inference is a
best-effort thing.
llvm-svn: 289740
The original motivation for this patch comes from wanting to canonicalize
more IR to selects and to canonicalize min/max.
If we're going to do that, we need more backend fixups to undo select codegen
when simpler ops will do. I chose AArch64 for the tests because that shows the
difference in the simplest way. This should fix:
https://llvm.org/bugs/show_bug.cgi?id=31175
Differential Revision: https://reviews.llvm.org/D27489
llvm-svn: 289738
In list::remove we collect the nodes we're removing in a separate
list instance. However, we construct this list using the default
constructor, which default-constructs the allocator, and allocators
are not required to be default constructible. This patch fixes the
construction of the second list.
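A minimal illustration of the requirement, with a hypothetical allocator that has no default constructor:
```cpp
#include <cstddef>
#include <list>
#include <memory>

template <class T>
struct TagAlloc {
  using value_type = T;
  explicit TagAlloc(int Tag) : Tag(Tag) {}
  template <class U> TagAlloc(const TagAlloc<U> &O) : Tag(O.Tag) {}
  T *allocate(std::size_t N) { return std::allocator<T>().allocate(N); }
  void deallocate(T *P, std::size_t N) { std::allocator<T>().deallocate(P, N); }
  int Tag;
};
template <class T, class U>
bool operator==(const TagAlloc<T> &A, const TagAlloc<U> &B) { return A.Tag == B.Tag; }
template <class T, class U>
bool operator!=(const TagAlloc<T> &A, const TagAlloc<U> &B) { return !(A == B); }

int main() {
  std::list<int, TagAlloc<int>> L(TagAlloc<int>(1));
  L.push_back(1);
  L.push_back(2);
  // Before the fix, this could try to default-construct TagAlloc for the
  // scratch list of removed nodes; now it must copy L's allocator.
  L.remove(1);
}
```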
llvm-svn: 289735
test/std/containers/container.adaptors/queue/queue.cons.alloc/ctor_container_alloc.pass.cpp
test/std/containers/container.adaptors/stack/stack.cons.alloc/ctor_container_alloc.pass.cpp
Iterate with C::size_type because that's what operator[] takes.
test/std/containers/sequences/vector/contiguous.pass.cpp
test/std/strings/basic.string/string.require/contiguous.pass.cpp
Add static_cast<typename C::difference_type> because that's what the iterator's operator+ takes.
Fixes D27777.
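The resulting pattern in the contiguity tests looks roughly like this (a sketch, not the verbatim test code):
```cpp
#include <cassert>

template <class C>
void test_contiguous(const C &c) {
  for (typename C::size_type i = 0; i < c.size(); ++i)
    // size_type matches operator[]; the cast matches the iterator's operator+.
    assert(*(c.begin() + static_cast<typename C::difference_type>(i)) == c[i]);
}
```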
llvm-svn: 289734
When getting attributes it is sometimes nicer to use Optional<T> instead of magic values. I tried to cut over to using only the Optional-returning variants, but it made many of the call sites very messy, so it makes sense to leave in the calls that can return a default value. Otherwise code that looks like this:
uint64_t CallColumn = Die.getAttributeValueAsAddress(DW_AT_call_line, 0);
Has to be turned into:
uint64_t CallColumn = 0;
if (auto CallColumnValue = Die.getAttributeValueAsAddress(DW_AT_call_line))
  CallColumn = *CallColumnValue;
The first snippet of code looks much better. But in cases where you want an offset that may or may not be there, the following code looks better:
if (auto StmtOffset = Die.getAttributeValueAsSectionOffset(DW_AT_stmt_list)) {
  // Use StmtOffset
}
Differential Revision: https://reviews.llvm.org/D27772
llvm-svn: 289731
Summary:
Previously they were defined as a 2D char array in a header file. This
is kind of overkill -- we can let the linker lay out these strings
however it pleases. While we're at it, we might as well just inline
these constants where they're used, as each of them is used only once.
Also move NVPTXUtilities.{h,cpp} into namespace llvm.
Reviewers: tra
Subscribers: jholewinski, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D27636
llvm-svn: 289728
Summary:
The standard requires tuple have the following constructors:
```
tuple(tuple<OtherTypes...> const&);
tuple(tuple<OtherTypes...> &&);
tuple(pair<T1, T2> const&);
tuple(pair<T1, T2> &&);
tuple(array<T, N> const&);
tuple(array<T, N> &&);
```
However libc++ implements these as a single constructor with the signature:
```
template <class TupleLike, enable_if_t<__is_tuple_like<TupleLike>::value>>
tuple(TupleLike&&);
```
This causes the constructor to reject types derived from tuple-like types, unlike the set of concrete overloads, which would have allowed the derived-to-base conversion in the signature.
This patch fixes this issue by detecting derived types and the tuple-like base they are derived from. It does this by creating an overloaded function with signatures for each of tuple/pair/array and checking if the possibly derived type can convert to any of them.
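Roughly, the detection works like the following sketch (names hypothetical, simplified from the actual libc++ machinery):
```cpp
#include <array>
#include <cstddef>
#include <tuple>
#include <type_traits>
#include <utility>

// Overload set covering each concrete tuple-like family; overload
// resolution performs the derived-to-base conversion for us.
template <class... Ts> std::tuple<Ts...> tuple_like_base(const std::tuple<Ts...> &);
template <class T, class U> std::pair<T, U> tuple_like_base(const std::pair<T, U> &);
template <class T, std::size_t N> std::array<T, N> tuple_like_base(const std::array<T, N> &);

template <class D>
using tuple_like_base_t = decltype(tuple_like_base(std::declval<const D &>()));

struct Derived : std::tuple<int, long> {};
static_assert(std::is_same<tuple_like_base_t<Derived>, std::tuple<int, long>>::value,
              "recovers the tuple-like base of a derived type");
```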
This patch fixes [PR17550](https://llvm.org/bugs/show_bug.cgi?id=17550).
Reviewers: mclow.lists, K-ballo, mpark, EricWF
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D27606
llvm-svn: 289727
Summary: The SampleProfileLoader pass may be invoked twice by LTO. The 2nd pass should not append more summary info, as it was already set by the 1st pass.
Reviewers: eraman, davidxl
Subscribers: mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D27733
llvm-svn: 289725
Also, update the ~60 failing tests in the tree which did
not contain a valid datalayout.
This fixes PR31123. lld will be updated in a following patch,
immediately after this is committed.
Differential Revision: https://reviews.llvm.org/D27082
llvm-svn: 289719
Summary:
We used to create the SampleProfileLoader pass in clang, which made LTO/ThinLTO unable to add this pass in the linker plugin. This patch moves the SampleProfileLoader pass creation from
clang to the LLVM pass manager builder.
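With this change, a linker plugin can request the pass roughly like so (a sketch; PGOSampleUse is assumed to be the PassManagerBuilder field carrying the profile path):
```cpp
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"
#include <string>
using namespace llvm;

void buildPipeline(legacy::PassManager &MPM, const std::string &ProfilePath) {
  PassManagerBuilder PMB;
  PMB.OptLevel = 2;
  PMB.PGOSampleUse = ProfilePath; // assumed field name for the sample profile
  PMB.populateModulePassManager(MPM); // adds SampleProfileLoader when set
}
```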
Reviewers: tejohnson, davidxl, dnovillo
Subscribers: mehdi_amini, cfe-commits
Differential Revision: https://reviews.llvm.org/D27744
llvm-svn: 289715
Summary: We used to create the SampleProfileLoader pass in clang, which made LTO/ThinLTO unable to add this pass in the linker plugin. This patch moves the SampleProfileLoader pass creation from clang to the LLVM pass manager builder.
Reviewers: tejohnson, davidxl, dnovillo
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D27743
llvm-svn: 289714
LLDB needs some minor changes to adopt PrettyStackTrace after https://reviews.llvm.org/D27683.
We remove our own SetCrashDescription() function and use LLVM-provided RAII objects instead.
We also make sure LLDB doesn't define __crashtracer_info__ which would collide with LLVM's definition.
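Usage now looks roughly like this (a sketch; PrettyStackTraceFormat is the LLVM RAII entry this adoption relies on):
```cpp
#include "llvm/Support/PrettyStackTrace.h"

void ProcessCommand(const char *command) {
  // RAII replacement for the old SetCrashDescription() calls: the entry
  // is pushed here and popped automatically on scope exit.
  llvm::PrettyStackTraceFormat Trace("HandleCommand(command = \"%s\")", command);
  // ... work that may crash ...
}
```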
Differential Revision: https://reviews.llvm.org/D27735
llvm-svn: 289711