Conceptually, the implementation is pretty straightforward: we put each
literal value into a hashtable, and then write out the keys of that
hashtable at the end.
In contrast with ELF, the Mach-O format does not support variable-length
literals that aren't strings. Its literals are either 4, 8, or 16 bytes
in length. LLD-ELF dedups its literals via sorting + uniq'ing, but since
we don't need to worry about overly-long values, we should be able to do
a faster job by just hashing.
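As a rough illustration, here is a minimal sketch of the (serial) dedup idea for one size class; the names are hypothetical, not the actual lld-macho code:
```
#include <cstdint>
#include <unordered_set>
#include <vector>

// Mach-O non-string literals are exactly 4, 8, or 16 bytes, so each size
// class can be deduped with a plain hash set, and the surviving keys are
// written out at the end.
using Literal8 = uint64_t; // one size class; 4- and 16-byte classes are analogous

std::vector<Literal8> dedupLiterals(const std::vector<Literal8> &Input) {
  std::unordered_set<Literal8> Seen;
  std::vector<Literal8> Unique;
  for (Literal8 L : Input)
    if (Seen.insert(L).second) // first occurrence of this value
      Unique.push_back(L);
  return Unique; // these become the contents of the output literal section
}
```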
That said, the implementation right now is far from optimal, because we
add to those hashtables serially. To parallelize this, we'll need a
basic concurrent hashtable (it only needs to support concurrent writes
without interleaved reads), which shouldn't be too hard to implement,
but I'd like to punt on it for now.
Numbers for linking chromium_framework on my 3.2 GHz 16-Core Intel Xeon W:
       N           Min           Max        Median           Avg        Stddev
x     20          4.27          4.39         4.315        4.3225   0.033225703
+     20          4.36          4.82          4.44        4.4845    0.13152846
Difference at 95.0% confidence
        0.162 +/- 0.0613971
        3.74783% +/- 1.42041%
        (Student's t, pooled s = 0.0959262)
This corresponds to binary size savings of 2MB out of 335MB, or 0.6%.
It's not a great tradeoff as-is, but as mentioned above, our implementation
can be significantly optimized, and literal dedup will unlock more
opportunities for ICF to identify identical structures that reference
the same literals.
Reviewed By: #lld-macho, gkm
Differential Revision: https://reviews.llvm.org/D103113
Both doInitialize and runOnModule were running the entire analysis
because the actual work was being done in the constructor. Strip that
out here and only compute the similarity during runOnModule.
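A hedged sketch of the pattern being fixed; the type and function names below are stand-ins, not the real pass:
```
#include <vector>

// Hypothetical stand-ins for the real types.
struct Module {};
struct SimilarityGroup {};

std::vector<SimilarityGroup> computeSimilarity(const Module &M) {
  return {}; // placeholder for the expensive identification work
}

// Before: the analysis ran eagerly in a constructor, so both doInitialize
// and runOnModule ended up triggering it. After: it only runs here.
struct SimilarityPassSketch {
  SimilarityPassSketch() = default; // no analysis in the constructor
  bool runOnModule(Module &M) {
    Groups = computeSimilarity(M); // expensive work happens here, once
    return false;                  // the module is not modified
  }
  std::vector<SimilarityGroup> Groups;
};
```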
Author: lanza
Reviewers: AndrewLitteken, paquette, plofti
Differential Revision: https://reviews.llvm.org/D92524
Unfortunately the Darwin signpost header also pulls in the system
MachO header and so we need to make sure to use the LLVM versions of
those definitions.
One nice feature of the os_signpost API is that format string
substitutions happen in the consumer, not the logging
application. LLVM's current Signpost class doesn't take advantage of
this though and instead always uses a static "Begin/End %s" format
string.
This patch uses variadic macros to allow the API to be used as
intended. Unfortunately, the primary use-case I had in mind (the
LLDB_SCOPED_TIMER() macro) does not get much better from this, because
__PRETTY_FUNCTION__ is *not* a macro, but a static string, so
signposts created by LLDB_SCOPED_TIMER() still use a static "%s"
format string. At least LLDB_SCOPED_TIMERF() works as intended.
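A minimal sketch of the variadic-macro idea; the macro and function names are made up for illustration and are not the real LLVM/LLDB APIs:
```
#include <cstdarg>
#include <cstdio>

// Stand-in for the real signpost emission; a genuine os_signpost call would
// defer the format substitution to the consumer instead of doing it here.
void emitIntervalBegin(const char *Fmt, ...) {
  va_list Args;
  va_start(Args, Fmt);
  std::vfprintf(stderr, Fmt, Args);
  std::fputc('\n', stderr);
  va_end(Args);
}

// The variadic macro forwards the caller's format string and arguments, so
// intervals can carry meaningful names instead of a fixed "Begin/End %s".
#define SCOPED_TIMERF(...) emitIntervalBegin(__VA_ARGS__)

void loadModules(int NumModules) {
  SCOPED_TIMERF("loading %d modules", NumModules);
}
```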
Differential Revision: https://reviews.llvm.org/D103575
This was copying the Mapper on every invocation for no reason. Take a const
ref instead.
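An illustration of the change with hypothetical types, not the actual code:
```
#include <map>
#include <string>

// Hypothetical mapper with non-trivial state; copying it is not free.
struct Mapper {
  std::map<std::string, unsigned> CanonicalNumbers;
};

// Before: pass-by-value copies the Mapper on every invocation.
unsigned lookupByValue(Mapper M, const std::string &Key) {
  auto It = M.CanonicalNumbers.find(Key);
  return It == M.CanonicalNumbers.end() ? 0 : It->second;
}

// After: pass-by-const-reference avoids the copy entirely.
unsigned lookupByRef(const Mapper &M, const std::string &Key) {
  auto It = M.CanonicalNumbers.find(Key);
  return It == M.CanonicalNumbers.end() ? 0 : It->second;
}
```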
Author: lanza
Reviewers: AndrewLitteken, plofti, paquette
Differential Revision: https://reviews.llvm.org/D92532
Both tests are passing for GCC > 8 on Linux, so let's mark them as passing.
TestCPPAuto was originally disabled due to "a problem with debug info generation"
in ea35dbeff2.
TestClassTemplateParameterPack was disabled without explanation in
0f01fb39e3.
The motivation is that the update script has at least two deviations
(`<...>@GOT`/`<...>@PLT` and not hiding pointer arithmetic) from
what pretty much all the check lines were generated with,
and most of the tests are still not updated, so each time one of the
non-up-to-date tests is updated to see the effect of a code change,
there is a lot of noise. Instead of having to deal with that each
time, let's just deal with everything at once.
This has been done via:
```
cd llvm-project/llvm/test/CodeGen/X86
grep -rl "; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py" | xargs -L1 <...>/llvm-project/llvm/utils/update_llc_test_checks.py --llc-binary <...>/llvm-project/build/bin/llc
```
Not all tests were regenerated, however.
I don't like landing this change, but it's an acknowledgement of a practical reality. Despite not having well-specified semantics for inttoptr and ptrtoint involving non-integral pointer types, they are used in practice. Here's a quick summary of the current pragmatic reality:
* I happen to know that the main external user of non-integral pointers has effectively disabled the verifier rules.
* RS4GC (the lowering pass for the abstract GC machine model, which is the key motivation for non-integral pointers) even supports them. We just have all the tests using an integral pointer space to let the verifier run.
* Certain idioms (such as alignment checks for alignment N, where any relocation is guaranteed to be N byte aligned) are fine in practice.
* As implemented, inttoptr/ptrtoint are CSEd and are not control dependent. This means that any code intending to check a particular bit pattern at the site of use must be wrapped in an intrinsic or external function call.
This change allows them in the Verifier, and updates the LangRef to specify them as implementation dependent. This allows us to acknowledge current reality while still leaving ourselves room to figure out "good" semantics in the future.
This is a similarity visualization tool that accepts a Module and
passes it to the IRSimilarityIdentifier. The resulting SimilarityGroups
are output in a JSON file.
Tests are found in test/tools/llvm-sim and check handling of a missing file,
a bad module, and that the JSON is created correctly.
Reviewers: paquette, jroelofs, MaskRay
Recommit of: 15645d044b to fix linking
errors.
Differential Revision: https://reviews.llvm.org/D86974
It's possible to have several USE statements for the same module that
have different mixes of rename clauses and ONLY clauses. The presence
of a rename clause has the effect of hiding a previously associated name,
and the presence of an ONLY clause forces the name to be visible even in
the presence of a rename.
I fixed this by keeping track of the names that appear on rename and ONLY
clauses. Then, when processing the USE association of a name, I check to see
if it previously appeared in a rename clause and not in a USE clause. If so, I
remove its USE associated symbol. Also, when USE associating all of the names
in a module, I do not USE associate names that have appeared in rename clauses.
I also added a test.
Differential Revision: https://reviews.llvm.org/D104130
Also:
- add driver test (fsanitize-use-after-return.c)
- add basic IR test (asan-use-after-return.cpp)
- (NFC) cleaned up logic for generating table of __asan_stack_malloc
depending on flag.
for issue: https://github.com/google/sanitizers/issues/1394
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D104076
Currently, the compiler-rt build system checks only whether __x86_64__
is defined to determine whether the default compiler-rt target arch
is x86_64. Since x32 defines __x86_64__ as well, we must also check that
the default pointer size is eight bytes and not four bytes to properly
detect a 64-bit x86_64 compiler-rt default target arch.
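The detection boils down to something like the following predicate; this is only a sketch and the macro name is made up, not the actual build-system code:
```
// __x86_64__ alone is not sufficient: the x32 ABI defines it too, so the
// pointer size must also be 8 bytes for a true 64-bit x86_64 target.
#if defined(__x86_64__) && __SIZEOF_POINTER__ == 8
#define COMPILER_RT_DEFAULT_ARCH_IS_X86_64 1
#else
#define COMPILER_RT_DEFAULT_ARCH_IS_X86_64 0 // i386, or x32 (ILP32 on x86_64)
#endif
```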
Reviewed By: hvdijk, vitalybuka
Differential Revision: https://reviews.llvm.org/D99988
Check applied to unbounded (incomplete) arrays and pointers to spot
cases where the computed address is beyond the largest possible
addressable extent of the array, based on the address space in which the
array is declared, or which the pointer refers to.
Check helps to avoid cases of nonsense pointer math and array indexing
which could lead to linker failures or runtime exceptions. Of
particular interest when building for embedded systems with small
address spaces.
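For illustration, here is the kind of code the check is meant to flag; the 16-bit address space is a hypothetical embedded configuration:
```
#include <cstdint>

extern uint8_t Table[]; // unbounded (incomplete) array

uint8_t readFarElement() {
  // If Table lives in an address space that is only 16 bits wide, this index
  // computes an address beyond anything that address space can represent, so
  // it can only be a mistake, and may otherwise surface only as a linker
  // failure or a runtime exception.
  return Table[1000000];
}
```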
This is version 2 of this patch -- version 1 had some testing issues
due to a sign error in existing code. That error is corrected and the
lit test for this change is extended to verify the fix.
Originally reviewed/accepted by: aaron.ballman
Original revision: https://reviews.llvm.org/D86796
Reviewed By: aaron.ballman, ebevhan
Differential Revision: https://reviews.llvm.org/D88174
## Introduction
This proposal describes a new op, called `alloca_scope`, to be added to the `std`
dialect (and later moved to the `memref` dialect).
## Motivation
Alloca operations are easy to misuse, especially if one relies on them while doing
rewriting/conversion passes. For example, consider two independent dialects:
one defines an op that wants to allocate on-stack, and
another defines a construct that corresponds to some form of looping:
```
dialect1.looping_op {
  %x = dialect2.stack_allocating_op
}
```
Since the dialects might not know about each other, they are going to define
lowerings to std/scf/etc. independently:
```
scf.for … {
  %x_temp = std.alloca …
  … // do some domain-specific work using %x_temp buffer
  … // and store the result into %result
  %x = %result
}
```
Later on, scf and `std.alloca` are going to be lowered to llvm using a
combination of `llvm.alloca` and unstructured control flow.
At this point the use of `%x_temp` is bound either to be optimized by
llvm (for example using mem2reg) or, in the worst case, to perform an independent
stack allocation on each iteration of the loop. While the llvm optimizations are
likely to succeed, they are not guaranteed to do so, and this leaves room for
surprising issues such as unexpected stack usage.
## Proposal
We propose a new operation, `alloca_scope`, that defines a finer-grained allocation
scope for alloca-allocated memory:
```
alloca_scope {
  %x_temp = alloca …
  ...
}
```
Here the lifetime of `%x_temp` is going to be bound to the narrow annotated
region within `alloca_scope`. Moreover, one can also return values out of the
alloca_scope with an accompanying `alloca_scope.return` op (that behaves
similarly to `scf.yield`):
```
%result = alloca_scope {
  %x_temp = alloca …
  …
  alloca_scope.return %myvalue
}
```
Under the hood, `alloca_scope` is going to be lowered to a combination of
`llvm.intr.stacksave` and `llvm.intr.stackrestore` that are going to be invoked
automatically as control flow enters and leaves the body of the `alloca_scope`.
The key value of the new op is to allow deterministic, guaranteed stack use
through an explicit annotation in the code which is finer-grained than the
function-level scope of the `AutomaticAllocationScope` interface. `alloca_scope`
can be inserted at arbitrary locations and doesn’t require non-trivial
transformations such as outlining.
## Which dialect
Before the `memref` dialect split happens, `alloca_scope` can temporarily reside
in the `std` dialect, and later on be moved to `memref` together with the rest of
the memory-related operations.
## Implementation
An implementation of the op is available [here](https://reviews.llvm.org/D97768).
Original commits:
* Add initial scaffolding for alloca_scope op
* Add alloca_scope.return op
* Add no region arguments and variadic results
* Add op descriptions
* Add failing test case
* Add another failing test
* Initial implementation of lowering for std.alloca_scope
* Fix backticks
* Fix getSuccessorRegions implementation
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D97768
This was found by chance, revealing a discrepancy between the comment (a few lines
above), the condition, and how the re-ordering of instructions is done inside the
if statement it guards. The condition always evaluated to true.
Differential Revision: https://reviews.llvm.org/D104064
The synchronization library was marked as disabled on Apple platforms
up to now because we were not 100% sure that it was going to be ABI
stable. However, it's been some time since we shipped it in upstream
libc++ now and there's been no changes so far. This patch enables the
synchronization library on Apple platforms, and hence commits the ABI
stability as far as that vendor is concerned.
Differential Revision: https://reviews.llvm.org/D96790
Allow the usage of minor version 0 for HIP versions
such as 4.0. Change the default values used when performing
version checks.
Reviewed By: yaxunl
Differential Revision: https://reviews.llvm.org/D104062
If an inferior exits prior to the processing of a disconnect request,
then the threads executing EventThreadFunction and request_disconnect
respectively may call SendTerminatedEvent simultaneously, in turn,
testing and/or setting g_vsc.sent_terminated_event without any
synchronization. In case the thread executing EventThreadFunction sets
it before the thread executing request_disconnect has had a chance to
test it, the latter would move ahead to issue a response to the
disconnect request. Said response may be dispatched ahead of the
terminated event compelling the client to terminate the debug session
without consuming any console output that might've been generated by
the execution of terminateCommands.
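One way to serialize the two callers is to funnel both through a once-flag; this is only a sketch of the idea and not necessarily the exact fix that landed:
```
#include <cstdio>
#include <mutex>

static std::once_flag TerminatedEventOnce;

static void sendTerminatedEventImpl() {
  std::puts("terminated"); // stand-in for emitting the real DAP event
}

void SendTerminatedEvent() {
  // Both the event thread and the disconnect-request handler may call this;
  // only the first caller emits the event, and the "already sent" state is
  // never read or written without synchronization.
  std::call_once(TerminatedEventOnce, sendTerminatedEventImpl);
}
```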
Reviewed By: clayborg, wallace
Differential Revision: https://reviews.llvm.org/D103609
This adds implementation information for N2607,
clarifies that C17 only resolved defect reports,
and adds -std= information for the different versions.
Register allocation may spill virtual registers to the stack, which can
increase alignment requirements of the stack frame. If the function
did not require stack realignment before register allocation, the
registers required to do so may not be reserved/available. This results
in a stack frame that requires realignment but cannot be realigned.
Instead, only increase the alignment of the stack if we are still able
to realign.
The register SpillAlignment will be ignored if we can't realign, and the
backend will be responsible for emitting the correct unaligned loads and
stores. This seems to be the assumed behaviour already, e.g.
ARMBaseInstrInfo::storeRegToStackSlot and X86InstrInfo::storeRegToStackSlot
are both `canRealignStack` aware.
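A hypothetical sketch of the policy described above, not the actual in-tree code:
```
#include <algorithm>

// Only raise the frame alignment for a spilled register when the stack can
// still be realigned; otherwise keep the current alignment and rely on the
// target to emit unaligned loads/stores for the slot.
unsigned chooseFrameAlignment(unsigned CurrentAlign, unsigned SpillAlign,
                              bool CanRealignStack) {
  if (CanRealignStack)
    return std::max(CurrentAlign, SpillAlign);
  return CurrentAlign; // SpillAlign is ignored when realignment is impossible
}
```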
Differential Revision: https://reviews.llvm.org/D103602