Committing on behalf of thejh (Jann Horn).
Given an attribute((noderef)) pointer "p" to the struct
struct s { int a[2]; };
ensure that the following expressions are treated the same way by the
noderef logic:
p->a
(*p).a
Until now, the first expression would be treated correctly (nothing is
added to PossibleDerefs because CheckMemberAccessOfNoDeref() bails out
on array members), but the second expression would incorrectly warn
because "*p" creates a PossibleDerefs entry.
Handle this case the same way as for the AddrOf operator.
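For illustration, a minimal reproducer in the spirit of the fix (the function name is made up for this sketch):
```lang=c++
// Minimal illustration of the two now-equivalent spellings:
struct s { int a[2]; };

void f(struct s __attribute__((noderef)) *p) {
  (void)p->a;   // was already accepted: an array member access does not
                // dereference p, so nothing enters PossibleDerefs
  (void)(*p).a; // used to warn spuriously, because "*p" created a
                // PossibleDerefs entry; now treated exactly like p->a
}
```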
Differential Revision: https://reviews.llvm.org/D92140
This attribute specifies how (or whether) a given function or method will be
imported into a Swift async method. rdar://70111252
Differential revision: https://reviews.llvm.org/D92742
_Nullable_result generally behaves like _Nullable, except when being imported
into a Swift async method. rdar://70106409
Differential revision: https://reviews.llvm.org/D92495
This stack of changes introduces the `llvm-profgen` utility, which generates a profile data file from given perf script data files for sample-based PGO. It is part of (though not limited to) the CSSPGO work. Specifically, to support context-sensitive profiles with and without pseudo probes, it implements a series of functionalities including perf trace parsing, instruction symbolization, LBR stack/call frame stack unwinding, pseudo probe decoding, etc. High throughput is achieved through multiple levels of sample aggregation, and a compatible one-stop output format is generated at the end. Please refer to https://groups.google.com/g/llvm-dev/c/1p1rdYbL93s for the CSSPGO RFC.
This change adds support for context-sensitive profile data generation to llvm-profgen. With simultaneous sampling of the LBR and the call stack, we can identify the leaf of an LBR sample with the calling context from the stack sample. While deriving fall-through paths from LBR entries, we unwind the LBR by replaying all the calls and returns (including implicit calls/returns due to inlining) backwards on top of the sampled call stack. The state of the call stack as we unwind through the LBR then always represents the calling context of the current fall-through path.
We have two types of virtual unwinding: 1) LBR unwinding and 2) linear range unwinding.
Specifically, each LBR entry can be classified as a call, a return, or a regular branch. LBR unwinding replays the operation by pushing, popping, or switching the leaf frame of the call stack. Since the initial call stack is the most recently sampled one, the replay runs in anti-execution order, i.e., for the regular case, pop the call stack when the LBR entry is a call and push a frame onto the call stack when it is a return. After each LBR entry is processed, we also need to align with the next entry by walking through the instructions from the previous entry's target to the current entry's source, which we call linear unwinding. As the instructions in a linear range can come from different functions due to inlining, linear unwinding splits the range and records counters for each subrange with the same inline context.
With each fall-through path from LBR unwinding, we aggregate each sample into counters by calling context and eventually generate a full context-sensitive profile (without relying on inlining) to drive the compiler's PGO/FDO.
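For intuition, here is a rough scalar sketch of the anti-execution-order replay (the types and helper are hypothetical, not llvm-profgen's actual classes):
```lang=c++
#include <cstdint>
#include <vector>

// Hypothetical types for illustration; llvm-profgen's real VirtualUnwinder
// works on HybridSamples and maintains richer frame state.
enum class BranchKind { Call, Return, Regular };
struct LBREntry {
  uint64_t Source, Target;
  BranchKind Kind;
};

// Replay one LBR entry backwards (anti-execution order) on top of the
// sampled call stack of addresses.
void unwindOne(const LBREntry &E, std::vector<uint64_t> &CallStack) {
  switch (E.Kind) {
  case BranchKind::Call:
    // Undo a call: pop the frame the call had pushed.
    if (!CallStack.empty())
      CallStack.pop_back();
    break;
  case BranchKind::Return:
    // Undo a return: re-push the callee frame we are unwinding back into.
    CallStack.push_back(E.Source);
    break;
  case BranchKind::Regular:
    // A regular branch stays within the leaf frame; just move its address
    // back to the branch source.
    if (!CallStack.empty())
      CallStack.back() = E.Source;
    break;
  }
}
```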
A breakdown of noteworthy changes:
- Added a `HybridSample` class as the abstraction of a perf sample that includes both the LBR stack and the call stack.
- Extended `PerfReader` to auto-detect whether the input perf script output contains a CS profile and then do the parsing; multiple `HybridSample`s are extracted.
- Sped this up by aggregating `HybridSample`s into `AggregatedSamples`.
- Added a `VirtualUnwinder` that consumes aggregated `HybridSample`s and implements unwinding of calls, returns, and linear paths that contain implicit calls/returns from inlining. Range and branch counters are aggregated by calling context. Here the calling context is a string; each frame is a pair of function name and callsite location info, so a whole context looks like `main:1 @ foo:2 @ bar`.
- Added a `ProfileGenerator` that accumulates counters by range unfolding or branch target mapping, then generates context-sensitive function profiles including function body samples, inferred callee head samples, and callsite target samples, eventually recording them into the ProfileMap.
- Leveraged LLVM's built-in writer (`SampleProfWriter`) to support the different serialization formats in one stop.
- Used `getCanonicalFnName` for callee names and names from the ELF section.
- Added regression test for both unwinding and profile generation
Test Plan:
ninja & ninja check-llvm
Reviewed By: hoy, wenlei, wmi
Differential Revision: https://reviews.llvm.org/D89723
Such fields will likely have offset zero, making __sanitizer_dtor_callback
poison the wrong regions.
E.g., it can poison a base class member from a derived class constructor.
Differential Revision: https://reviews.llvm.org/D92727
Introduce a function that creates a statically-scheduled workshare loop
out of a canonical loop created earlier by the OpenMPIRBuilder. This
basically amounts to injecting runtime calls into the preheader and the
after block and updating the trip count. The static scheduling kind is
currently hardcoded and needs to be extracted from the runtime library
into common TableGen definitions.
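As a rough C++ sketch of the resulting structure (hand-written, not the builder's literal output; only the two libomp entry points are real):
```lang=c++
#include <cstdint>

// Hand-written sketch of the injected calls; the declarations paraphrase the
// libomp entry points (ident_t is left opaque here).
struct ident_t;
extern "C" void __kmpc_for_static_init_4u(ident_t *Loc, int32_t GTid,
                                          int32_t SchedType, int32_t *LastIter,
                                          uint32_t *Lower, uint32_t *Upper,
                                          int32_t *Stride, int32_t Incr,
                                          int32_t Chunk);
extern "C" void __kmpc_for_static_fini(ident_t *Loc, int32_t GTid);

void staticWorkshareLoop(ident_t *Loc, int32_t Tid, uint32_t TripCount) {
  if (TripCount == 0)
    return;
  int32_t LastIter = 0, Stride = 1;
  uint32_t Lower = 0, Upper = TripCount - 1;
  // Preheader: ask the runtime for this thread's chunk (static schedule;
  // kmp_sch_static == 34 in the runtime's sched_type enum).
  __kmpc_for_static_init_4u(Loc, Tid, /*SchedType=*/34, &LastIter, &Lower,
                            &Upper, &Stride, /*Incr=*/1, /*Chunk=*/1);
  for (uint32_t I = Lower; I <= Upper; ++I) {
    // ...canonical loop body, now covering only this thread's iterations
    // (the loop's trip count has effectively been updated)...
  }
  // After block: signal that the static worksharing region is done.
  __kmpc_for_static_fini(Loc, Tid);
}
```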
Differential Revision: https://reviews.llvm.org/D92476
ScalarEvolution::getSCEV cannot be used during codegen. ScalarEvolution
assumes a stable IR and control flow which is under construction during
Polly's CodeGen. In particular, it uses the DominatorTree to compute the
backedge-taken count. However, the DominatorTree is not updated during
codegen.
In this case, SCEV was used to determine the base pointer of an array
access. Replace it with our own function. Polly generates only GEPs and
BitCasts for array accesses, i.e., it is sufficient to handle these to
find the base pointer.
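A minimal sketch of such a base-pointer walk (the actual helper added to Polly may differ in detail):
```lang=c++
#include "llvm/IR/Operator.h"
using namespace llvm;

// Strip GEPs and bitcasts until the underlying base pointer is reached.
static Value *findBasePtr(Value *Addr) {
  while (true) {
    if (auto *GEP = dyn_cast<GEPOperator>(Addr))
      Addr = GEP->getPointerOperand();
    else if (auto *BC = dyn_cast<BitCastOperator>(Addr))
      Addr = BC->getOperand(0);
    else
      return Addr;
  }
}
```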
Fixes llvm.org/PR48422
This attribute permits a typedef to be associated with a class template
specialization as a preferred way of naming that class template
specialization. This permits us to specify that (for example) the
preferred way to express 'std::basic_string<char>' is as 'std::string'.
The attribute is applied to the various class templates in libc++ that have
corresponding well-known typedef names.
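For illustration, the documented usage pattern looks like this (a toy template standing in for the libc++ ones):
```lang=c++
// Forward-declare the template, declare the typedef, then attach the
// attribute to the definition.
template <class CharT> struct basic_string;
using string = basic_string<char>;

template <class CharT>
struct [[clang::preferred_name(string)]] basic_string { /* ... */ };

// Diagnostics now print 'basic_string<char>' as 'string'.
```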
Differential Revision: https://reviews.llvm.org/D91311
Added a trivial destructor in release mode, and in debug mode a destructor that asserts RefCount is indeed zero.
This ensures people aren't manually (or accidentally) destroying these objects, as in this contrived example:
```lang=c++
{
  std::unique_ptr<SomethingRefCounted> Object =
      std::make_unique<SomethingRefCounted>();
  holdIntrusiveOwnership(Object.get());
  // Object's destructor, called here at end of scope, will assert because
  // intrusive ownership is still held.
}
```
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D92480
Allow sections to be placed into COMDAT groups, in addition to functions and data
segments.
Also make section symbols unnamed, which allows sections with identical names
(section names are independent of their section symbols, but previously we
gave the symbols the same name as their sections, which resulted in collisions
between identically-named sections).
Differential Revision: https://reviews.llvm.org/D92691
After bufferization, the backend has much more trouble hoisting loop-invariant
loads from the loops generated by the sparse compiler. Therefore, this is done
during sparse code generation. Note that we don't bother hoisting derived
invariant expressions on SSA values, since the backend does that very well.
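As a plain C++ analogy of the hoisting (illustrative only; the real transform rewrites the MLIR emitted by the sparse compiler):
```lang=c++
// Before: the invariant load of *x executes on every iteration.
double dotBefore(const double *a, const double *x, int n) {
  double s = 0.0;
  for (int i = 0; i < n; ++i)
    s += a[i] * (*x); // loop-invariant load repeated each iteration
  return s;
}

// After: the sparse code generator emits the load once, outside the loop.
double dotAfter(const double *a, const double *x, int n) {
  double xv = *x; // hoisted invariant load
  double s = 0.0;
  for (int i = 0; i < n; ++i)
    s += a[i] * xv;
  return s;
}
```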
Still TBD: scalarize reductions to avoid load-add-store cycles
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D92534
As reported in PR48177, the type-deduction extraction ends up going into
an infinite loop when the type referred to has a recursive definition.
This patch stops the recursion and just substitutes the type-source-info
that the TypeLocBuilder identified when transforming the base.
When we annotate a function header so that it can be used by other
TUs, we also need to make sure the function is parsed correctly within
the same TU. So if we can find the function's implementation,
ignore the annotations; otherwise, false positives would occur.
Move the escape-by-value case to post-call and do not escape the handle
if the function is inlined and we have analyzed the handle.
Differential Revision: https://reviews.llvm.org/D91902
The issue didn't change the behaviour, which is tested in libcxx/test/std/input.output/filesystems/class.path/path.member/path.concat.pass.cpp.
The change to use string_view instead of string is not strictly necessary.
<filesystem> was added in commit 998a5c8831 (Implement <filesystem>).
Reviewed By: #libc, ldionne
Differential Revision: https://reviews.llvm.org/D92731
This patch adds the ConstraintElimination pass to the LTO pipeline and
also runs it after SCCP in the function simplification pipeline.
This increases the number of cases we can eliminate. Further tuning is
pending.
Emitting an error for the use of 128-bit integers inside device code was
already implemented in https://reviews.llvm.org/D74387. However,
the error is not emitted for SPIR64, because hasInt128Type returns true
for SPIR64.
hasInt128Type is also used to control the generation of certain predefined
128-bit macros, the initialization of predefined 128-bit integer types, and
the building of 128-bit ArithmeticTypes. Except for the predefined macros,
only the device target was considered. Since the error is only emitted when
a 128-bit integer is used inside device code, the host target (auxtarget)
also needs to be considered.
The change addresses:
1. (SPIR.h) Correct hasInt128Type() for SPIR targets.
2. Sema.cpp and SemaOverload.cpp: Add an additional check to consider the
   host target (auxtarget) when calling hasInt128Type, so that __int128_t
   and __int128() are allowed and do not produce an error when used outside
   device code (see the sketch below).
3. SemaType.cpp: Add a check for SYCLIsDevice to delay the error message.
   The error will be emitted if a 128-bit integer is used in device code.
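As a sketch of the intended behavior (hedged; the SYCL device-code markers are omitted for brevity):
```lang=c++
// Host-side uses like these no longer error when compiling for a SPIR64
// device, since the host target (auxtarget) supports 128-bit integers:
__int128_t HostValue = __int128(1) << 100;

// A 128-bit integer used inside device code is still diagnosed; the check
// is merely delayed to the point of use in device code.
```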
Reviewed By: Johannes Doerfert and Aaron Ballman
Differential Revision: https://reviews.llvm.org/D92439
The xxeval instruction was introduced in PowerPC in Power10.
The instruction accepts three vector registers and an immediate.
Depending on the value of the immediate, the instruction can be used
to perform certain bitwise boolean operations (and, or, xor, ...) on
the given vector registers.
This patch implements the AND and NAND patterns that can be used by
the instruction.
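As a rough scalar model (the helper below is hypothetical, and the exact immediate bit ordering is defined by the ISA), the instruction evaluates a 3-input truth table selected by the immediate at every bit position:
```lang=c++
#include <cstdint>

// Hypothetical scalar model of xxeval's per-bit ternary truth-table
// evaluation; the real instruction operates on 128-bit vector registers.
uint64_t ternaryEval(uint64_t A, uint64_t B, uint64_t C, uint8_t Imm) {
  uint64_t R = 0;
  for (int Bit = 0; Bit < 64; ++Bit) {
    unsigned Idx = (((A >> Bit) & 1) << 2) | (((B >> Bit) & 1) << 1) |
                   ((C >> Bit) & 1);
    R |= static_cast<uint64_t>((Imm >> Idx) & 1) << Bit;
  }
  return R;
}
// With a suitable immediate this yields AND (and with its complement, NAND),
// the patterns implemented by this patch.
```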
Reviewed By: nemanjai, #powerpc, bsaleil, NeHuang, jsji
Differential Revision: https://reviews.llvm.org/D92420
I have a patch that adds another group of candidate types to
BuiltinCandidateTypeSet. Currently two styles are in use: the older
begin/end pairs and the newer iterator_range approach. I think the
group of candidates that I want to add should use iterator ranges,
but I'd also like to consolidate the handling of the new candidates
with some existing code that uses begin/end pairs. This patch therefore
converts the begin/end pairs to iterator ranges as a first step.
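The conversion follows the usual LLVM pattern; a hypothetical miniature (the real class is BuiltinCandidateTypeSet in SemaOverload.cpp):
```lang=c++
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/iterator_range.h"

class CandidateTypes {
  llvm::SmallPtrSet<void *, 8> PointerTypes;

public:
  using iterator = llvm::SmallPtrSet<void *, 8>::iterator;

  // Old style: expose a begin/end pair.
  // iterator pointer_begin() { return PointerTypes.begin(); }
  // iterator pointer_end() { return PointerTypes.end(); }

  // New style: expose a single iterator range.
  llvm::iterator_range<iterator> pointer_types() {
    return llvm::make_range(PointerTypes.begin(), PointerTypes.end());
  }
};
```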
No functional change intended.
Differential Revision: https://reviews.llvm.org/D92222
A rotate by half the bitwidth swaps the bottom and top halves, which is the same as the most significant GREVI stage.
We have to do this as a special combine because we prefer to keep (rotl/rotr X, BitWidth/2) as a rotate rather than as a single-stage GREVI.
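A quick illustration at 32 bits (helper names are made up):
```lang=c++
#include <cstdint>

// A rotate by half the bitwidth...
uint32_t rotl16(uint32_t X) { return (X << 16) | (X >> 16); }

// ...equals the most significant GREVI stage (shamt 16): swap the halves.
uint32_t grevStage16(uint32_t X) {
  return ((X & 0x0000FFFFu) << 16) | ((X & 0xFFFF0000u) >> 16);
}
// rotl16(X) == grevStage16(X) for every X.
```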
Differential Revision: https://reviews.llvm.org/D92286
Add support to normalize affine.for ops, i.e., convert the lower bound to zero
and the loop step to one. The upper bound is set to the trip count of the loop.
The exact value of the loop IV is calculated just inside the body of the
affine.for. Currently, only loops whose lower bounds have a single result are
supported. No such restriction exists on upper bounds.
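As a C++ analogy of the normalization (the actual transform rewrites affine.for ops):
```lang=c++
// Before: arbitrary lower bound and step.
void beforeNormalization(int LB, int UB, int Step) {
  for (int IV = LB; IV < UB; IV += Step) {
    // body(IV)
  }
}

// After: lower bound 0, step 1, upper bound = trip count.
void afterNormalization(int LB, int UB, int Step) {
  int TripCount = (UB - LB + Step - 1) / Step;
  for (int K = 0; K < TripCount; K += 1) {
    int IV = LB + K * Step; // exact IV recomputed just inside the body
    (void)IV;
    // body(IV)
  }
}
```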
Differential Revision: https://reviews.llvm.org/D92233
Check the pointer returned by strchr, as it can be NULL when the input
string's format is broken. Introduced a new function __kmp_str_loc_numbers
for fast parsing of only the numbers in the location string.
Also cleaned up the __kmp_str_loc_init declaration and usage:
- changed the type of the init_fname parameter to bool;
- changed the input from true to false in places where fname is not used.
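For illustration, the defensive pattern looks like this (hypothetical helper; the real parsing lives in __kmp_str_loc_init and __kmp_str_loc_numbers):
```lang=c++
#include <cstring>

// Locate the numbers part of a location string of the form "file;func;1;2".
bool findNumbersPart(const char *Loc, const char **NumbersOut) {
  const char *Semi = std::strchr(Loc, ';');
  if (!Semi) // broken location string: bail out instead of dereferencing NULL
    return false;
  *NumbersOut = Semi + 1;
  return true;
}
```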
Differential Revision: https://reviews.llvm.org/D90962
It is possible to merge reuse and reorder shuffles and reduce the total
cost of the vectorization tree/number of final instructions.
Differential Revision: https://reviews.llvm.org/D92668
This adds code to revert low overhead loops with calls in them before
register allocation. Ideally we would not create low overhead loops with
calls in them to begin with, but that can be difficult to always get
correct. If we want to try and glue together t2LoopDec and t2LoopEnd
into a single instruction, we need to ensure that no instructions use LR
in the loop. (Technically the final code could be better too, as it
doesn't need to use the same registers, but that has not been optimized
for here, as reverting loops with calls is expected to be very rare.)
It also adds a MVETailPredUtils.h header to share the revert code
between different passes, and provides a place to expand upon, with
RevertLoopWithCall becoming a place to perform other low overhead loop
alterations like removing copies or combining LoopDec and End into a
single instruction.
Differential Revision: https://reviews.llvm.org/D91273
This patch changes the archive handling to enable the semantics needed
for legacy FORTRAN common blocks and block data. When we have a COMMON
definition of a symbol and are including an archive, LLD will now
search the members for global/weak definitions to override the COMMON
symbol. The previous LLD behavior (where a member would only be included
if it satisfied some other needed symbol definition) can be re-enabled with the
option '-no-fortran-common'.
Differential Revision: https://reviews.llvm.org/D86142
Speeds up linking Chromium's base_unittests by almost 10%. According to ministat:
    N           Min           Max        Median           Avg        Stddev
x   5    0.72193289    0.73073196    0.72560811    0.72565799  0.0032265649
+   5    0.64069581    0.67173195    0.65876389    0.65796089   0.011349451
Difference at 95.0% confidence
        -0.0676971 +/- 0.0121682
        -9.32906% +/- 1.67685%
        (Student's t, pooled s = 0.00834328)
Differential Revision: https://reviews.llvm.org/D92734