This patch reports the number of counts dropped when a hash mismatch
happens. This information is helpful to users: if the dropped counts are
large, the user should redo the instrumentation build and recollect the
profile.
Differential Revision: https://reviews.llvm.org/D129001
Don't cross reference CSFDO profile and non-CSFDO profile when
checking the function hash. Only return hash_mismatch when
CS bits match, and return unknown_function otherwise.
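As a rough sketch (the helper and flag names below are illustrative, not the actual reader code), the intended decision looks like this:
```
#include "llvm/ProfileData/InstrProf.h"
using namespace llvm;

// Hypothetical helper showing the intent: only report hash_mismatch when the
// CS bits agree; a CS/non-CS cross-reference is treated as an unknown function.
instrprof_error classifyLookup(bool ProfileIsCS, bool FuncIsCS, bool HashMatches) {
  if (ProfileIsCS != FuncIsCS)
    return instrprof_error::unknown_function;
  if (!HashMatches)
    return instrprof_error::hash_mismatch;
  return instrprof_error::success;
}
```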
Differential Revision: https://reviews.llvm.org/D129000
It's more natural to use uint8_t * (std::byte needs C++17, and LLVM already
uses uint8_t * heavily), and most callers use uint8_t * instead of char *.
The functions were recently moved into `llvm::compression::zlib::`, so
downstream projects need to adapt anyway.
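As a hedged illustration, a downstream caller adapting to the new API might end up looking roughly like this (exact signatures can vary between LLVM versions):
```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Support/Compression.h"
#include "llvm/Support/Error.h"

// Sketch of a caller after the uint8_t migration; previously these buffers
// were char/StringRef based.
void roundTrip(llvm::ArrayRef<uint8_t> Input) {
  llvm::SmallVector<uint8_t, 0> Compressed;
  llvm::compression::zlib::compress(Input, Compressed);

  llvm::SmallVector<uint8_t, 0> Decompressed;
  if (llvm::Error E = llvm::compression::zlib::decompress(
          Compressed, Decompressed, Input.size()))
    llvm::consumeError(std::move(E));
}
```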
(Reapply after revert in e9ce1a5880 due to
Fuchsia test failures. Removed changes in lib/ExecutionEngine/ other
than error categories, to be checked in more detail and reapplied
separately.)
Bulk remove many of the more trivial uses of ManagedStatic in the llvm
directory, either by defining a new getter function or, in many cases,
moving the static variable directly into the only function that uses it.
Differential Revision: https://reviews.llvm.org/D129120
* Refactor compression namespaces across the project, making way for a possible
introduction of alternatives to zlib compression.
Changes are as follows:
* Relocate the `llvm::zlib` namespace to `llvm::compression::zlib`.
Reviewed By: MaskRay, leonardchan, phosek
Differential Revision: https://reviews.llvm.org/D128953
We already remove dots from collected paths and path mappings. This
makes it difficult to match paths inside the profile which contain
dots. For example, we would never match /path/to/../file.c because
the collected path is always normalized to /path/file.c. This
change enables dot removal for paths inside the profile to address
the issue.
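For illustration, the normalization applied on both sides boils down to llvm::sys::path::remove_dots; a sketch (not the exact call site):
```
#include "llvm/ADT/SmallString.h"
#include "llvm/Support/Path.h"

// Sketch: normalize a path taken from the profile the same way collected
// paths are normalized, so /path/to/../file.c and /path/file.c compare equal.
llvm::SmallString<128> normalizeProfilePath(llvm::StringRef PathInProfile) {
  llvm::SmallString<128> P(PathInProfile);
  llvm::sys::path::remove_dots(P, /*remove_dot_dot=*/true);
  return P;
}
```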
Differential Revision: https://reviews.llvm.org/D123164
We already remove dots from collected paths and path mappings. This
makes it difficult to match paths inside the profile which contain
dots. For example, we would never match /path/to/../file.c because
the collected path is always normalized to /path/file.c. This
change enables dot removal for paths inside the profile to address
the issue.
Differential Revision: https://reviews.llvm.org/D122750
Add a check to detect whether the profiled binary was built with
position-independent code. Add a test with a PIE binary which can be reused
later when support is added. Also clean up error messages that had trailing
colons.
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D128564
Some cl::ZeroOrMore uses were added to avoid the `may only occur zero or one
times!` error. More were added out of cargo culting. Since the error has been
removed, cl::ZeroOrMore is no longer needed.
Also remove cl::init(false) while touching the lines.
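A typical cleanup looks roughly like this (the option below is made up for illustration):
```
#include "llvm/Support/CommandLine.h"
using namespace llvm;

// Before: attributes kept around only to silence an error that no longer exists.
//   static cl::opt<bool> EnableFoo("enable-foo", cl::init(false),
//                                  cl::ZeroOrMore, cl::desc("..."));

// After: cl::opt<bool> already defaults to false, and repeated occurrences no
// longer trigger an error, so both attributes can simply be dropped.
static cl::opt<bool> EnableFoo("enable-foo", cl::desc("..."));
```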
This change prints out the segment information in the raw profile in
YAML format for testing. Since we don't capture build ids yet, we print
out <None> for now.
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D126840
Update the YAML-format printout of the profile to include a summary
instead of displaying the headers in the raw file buffer. This allows us
to release the raw buffer early, saving memory.
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D126834
Generated C++ code with a huge number of switch cases chokes badly while
emitting coverage mapping; in our specific testcase (~72k cases), it would not
finish even after hours. After this change, the frontend job now finishes in
4.5s and shrinks `@__covrec_` by 288k compared to disabling simplification
altogether.
There's probably no good way to create a testcase for this, but it's easy to
reproduce, just add thousands of cases in the below switch, and build with
`-fprofile-instr-generate -fcoverage-mapping`.
```
enum type : int {
  FEATURE_INVALID = 0,
  FEATURE_A = 1,
  ...
};

const char *to_string(type e) {
  switch (e) {
  case type::FEATURE_INVALID: return "FEATURE_INVALID";
  case type::FEATURE_A: return "FEATURE_A";
  ...
  }
}
```
Differential Revision: https://reviews.llvm.org/D126345
Extend the Frame struct to hold the symbol name if requested
when a RawMemProfReader object is constructed. This change updates the
tests and removes the need to pass --debug to obtain the mapping from
GUID to symbol names.
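Conceptually the Frame now carries an optional symbol name alongside the GUID; a simplified sketch (not the exact definition):
```
#include <cstdint>
#include <optional>
#include <string>

// Simplified: keep the GUID for indexing, and optionally the human-readable
// name when the reader was constructed with symbol names requested.
struct Frame {
  uint64_t Function;                     // GUID of the function
  std::optional<std::string> SymbolName; // populated only on request
  uint32_t LineOffset;
  uint32_t Column;
  bool IsInlineFrame;
};
```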
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D126344
When a flat CS profile is converted to a nested profile, the call target samples for inlined callee contexts are left over in the callsite target map. This could cause indirect call promotion to function improperly. One issue is that the inlined callsites are treated as having double the amount of samples. The other is that the inlined callsites are reconsidered for subsequent PGO ICP.
I'm fixing this by excluding call targets from the callsite for inlined targets, as sketched below. While fixing this I found that the callsite target sum and the number of body samples for that callsite could be mismatched. {D122609} has an explanation and a fix for that on the llvm-profgen side. For now I'm tolerating it in this change.
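In rough pseudocode (all names below are made up for illustration), the fix amounts to dropping inlined callees from the callsite's target map:
```
#include <cstdint>
#include <map>
#include <set>
#include <string>

// Illustrative only: once a callee context has been nested (inlined) under the
// callsite, it should no longer be counted as an outgoing call target there,
// otherwise ICP sees its samples twice and may re-promote it.
using TargetCountMap = std::map<std::string, uint64_t>;

void pruneInlinedTargets(TargetCountMap &CallsiteTargets,
                         const std::set<std::string> &InlinedCallees) {
  for (const std::string &Callee : InlinedCallees)
    CallsiteTargets.erase(Callee);
}
```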
Reviewed By: wenlei
Differential Revision: https://reviews.llvm.org/D125266
Recent experiments with our two large internal services showed that duplicating context profiles into the base profile caused code size inflation and didn't deliver good performance compared to no such duplication. It was a trick we used to catch up with the CS flat profile, and I'm now turning it off by default.
The code size inflation mainly comes from the enriched base profiles. A base profile for a function represents the uninlined (or outlined) portion of the whole function's running time. Such a portion can be very small if a function is inlined into most of its hot callsites. Duplicating context profiles of the function into its base profile can make the outlined body hot enough to in turn get many of its callees inlined, thus increasing the code size. The size inflation can further cause perf regression.
Reviewed By: wenlei
Differential Revision: https://reviews.llvm.org/D124796
Switch the error type when a function is not found in the memprof
profile to unknown_function. This gives compatibility with normal PGO
function matching, and also prevents issuing large numbers of additional
matching errors since pgo-warn-missing-function is off by default.
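Schematically (a sketch, not the actual reader code):
```
#include "llvm/ProfileData/InstrProf.h"
#include "llvm/Support/Error.h"
using namespace llvm;

// Sketch: a memprof lookup miss is reported the same way a missing function is
// reported for regular PGO, so warnings stay gated by -pgo-warn-missing-function.
Error missingMemProfRecord() {
  return make_error<InstrProfError>(instrprof_error::unknown_function);
}
```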
Differential Revision: https://reviews.llvm.org/D124953
To be more clear and definitive, I'm renaming `ProfileIsCSFlat` back to `ProfileIsCS` which stands for full context-sensitive flat profiles. `ProfileIsCSNested` is now renamed to `ProfileIsPreInlined` and is extended to be applicable for CS flat profiles too. More specifically, `ProfileIsPreInlined` is for any kind of profiles (flat or nested) that contain 'ShouldBeInlined' contexts. The flag is encoded in the profile summary section for extbinary profiles and is computed on-the-fly for text profiles.
Reviewed By: wenlei
Differential Revision: https://reviews.llvm.org/D122602
The current implementation of memprof information in the indexed profile
format stores the representation of each calling context frame inline.
This patch uses an interned representation where the frame contents are
stored in a separate on-disk hash table. The table is indexed via a hash
of the contents of the frame. With this patch, the compressed size of a
large memprof profile is reduced by ~22%.
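The interning scheme is roughly the following; a standalone sketch with made-up names (the real code uses an on-disk hash table rather than an in-memory map, and a stable hash of the frame fields):
```
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Frame {
  uint64_t Function;
  uint32_t LineOffset;
  uint32_t Column;
  bool IsInlineFrame;
};

using FrameId = uint64_t;

// Placeholder hash for illustration only.
FrameId hashFrame(const Frame &F) {
  return F.Function ^ (uint64_t(F.LineOffset) << 32) ^
         (uint64_t(F.Column) << 16) ^ uint64_t(F.IsInlineFrame);
}

// Each unique frame is stored once, keyed by a hash of its contents; records
// refer to frames by id instead of embedding the whole frame inline.
struct InternedProfile {
  std::unordered_map<FrameId, Frame> FrameTable;
  std::vector<std::vector<FrameId>> CallStacks;

  FrameId intern(const Frame &F) {
    FrameId Id = hashFrame(F);
    FrameTable.emplace(Id, F);
    return Id;
  }
};
```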
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D123094
To ease profile annotation, each of the callsites in a function can be
annotated with profile data - "IR metadata format for MemProf" [1]. This
patch extends the on-disk serialized record format to store the debug
information for allocation callsites, including inline frames. This change
is incompatible with the existing format, i.e. indexed profiles must be
regenerated; raw profiles are unaffected.
[1] https://groups.google.com/g/llvm-dev/c/aWHsdMxKAfE/m/WtEmRqyhAgAJ
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D121179
Since DI frames are enumerated with the leaf function at index 0, this
patch fixes the logic that sets IsInlineFrame. Also update the unittests to
check that, in a set of DI frames for a PC address, only the last frame is
marked as non-inline.
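A sketch of the corrected logic, assuming llvm::DIInliningInfo as returned by the symbolizer (variable names are illustrative):
```
#include "llvm/DebugInfo/DIContext.h"

// Frame 0 is the leaf, so every frame except the last (outermost caller) is
// there because of inlining.
void markInlineFrames(const llvm::DIInliningInfo &DI) {
  uint32_t NumFrames = DI.getNumberOfFrames();
  for (uint32_t I = 0; I < NumFrames; ++I) {
    const llvm::DILineInfo &LineInfo = DI.getFrame(I);
    bool IsInlineFrame = I != NumFrames - 1;
    (void)LineInfo;
    (void)IsInlineFrame; // ...record the frame with this flag...
  }
}
```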
Differential Revision: https://reviews.llvm.org/D121830
We add a test to ensure that we observe the correct callstack order in
memprof during symbolization. There was some confusion about whether the
order of DIFrame objects was reversed, but in reality the leaf function is
at index 0, so no code changes are required.
Differential Revision: https://reviews.llvm.org/D121759
Template functions share the same lines in source files, so the common
container of lines' properties cannot be used to calculate the coverage
statistics of individual functions.
> cat tmpl.cpp
template <int N> int test() { return N; }
int main() { return test<1>() + test<2>(); }
> clang++ --coverage tmpl.cpp -o tmpl
> ./tmpl
> llvm-cov gcov tmpl.cpp -f
...
Function '_Z4testILi1EEiv'
Lines executed:100.00% of 1
Function '_Z4testILi2EEiv'
Lines executed:-nan% of 0
...
> llvm-cov-patched gcov tmpl.cpp -f
...
Function '_Z4testILi1EEiv'
Lines executed:100.00% of 1
Function '_Z4testILi2EEiv'
Lines executed:100.00% of 1
...
Differential Revision: https://reviews.llvm.org/D121390
With a sufficiently large output buffer, the only failure is Z_MEM_ERROR.
Check it and call the noreturn report_bad_alloc_error if applicable.
resize_for_overwrite may call report_bad_alloc_error as well.
Now that there is no other error type, we can replace the return type with void
and simplify call sites.
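The simplified shape is roughly as follows (a sketch using raw zlib calls for clarity, not the exact LLVM wrapper):
```
#include <zlib.h>
#include "llvm/ADT/SmallVector.h"
#include "llvm/Support/ErrorHandling.h"

// Sketch: with a destination buffer of at least compressBound(InSize) bytes,
// compress2 can only fail with Z_MEM_ERROR, which is treated as a fatal
// allocation failure, so the wrapper no longer needs an Error return value.
void compressNoError(const uint8_t *In, size_t InSize,
                     llvm::SmallVectorImpl<uint8_t> &Out, int Level) {
  unsigned long OutSize = ::compressBound(InSize);
  Out.resize_for_overwrite(OutSize);
  if (::compress2(Out.data(), &OutSize, In, InSize, Level) == Z_MEM_ERROR)
    llvm::report_bad_alloc_error("Allocation failed");
  Out.resize(OutSize);
}
```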
Reviewed By: ikudrin
Differential Revision: https://reviews.llvm.org/D121512
This patch filters out callstack frames which can't be symbolized or which
belong to the runtime. Symbolization may not be possible if
debug information is unavailable or if the addresses are from a shared
library. For now we only support optimization of the main binary which
is statically linked to the compiler runtime.
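Roughly (a sketch; the empty-name check and the "__memprof_" runtime prefix are assumptions for illustration, not the exact predicate):
```
#include <string>
#include <vector>

struct SymbolizedFrame {
  std::string FunctionName; // empty if symbolization failed
};

// Sketch: drop call stacks we cannot attribute to user code, either because
// symbolization failed (no debug info, or the PC is in a shared library) or
// because the frame belongs to the profiling runtime itself.
bool shouldKeepCallStack(const std::vector<SymbolizedFrame> &Stack) {
  for (const SymbolizedFrame &F : Stack) {
    if (F.FunctionName.empty())
      return false;
    if (F.FunctionName.rfind("__memprof_", 0) == 0) // assumed runtime prefix
      return false;
  }
  return true;
}
```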
Differential Revision: https://reviews.llvm.org/D120860
Currently, symbolization of stack frames occurs on demand when the instrprof writer
iterates over all the records in the raw memprof reader. With this
change we symbolize and cache the frames immediately after reading the
raw profiles. For a large internal binary this results in a runtime
reduction of ~50% (2m -> 48s) when merging a memprof raw profile with a
raw instr profile to generate an indexed profile. This change also makes
it simpler in the future to generate additional calling context
metadata to attach to each memprof record.
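The cache itself is essentially an address-to-frames map built once up front; a sketch with illustrative types:
```
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

struct Frame {
  uint64_t Function;
  uint32_t LineOffset;
  uint32_t Column;
  bool IsInlineFrame;
};

// Sketch: symbolize every distinct PC once, right after the raw profile is
// read; record construction then consults the cache instead of invoking the
// symbolizer on demand for each record.
using SymbolCache = std::unordered_map<uint64_t, std::vector<Frame>>;

SymbolCache buildCache(const std::vector<uint64_t> &PCs,
                       const std::function<std::vector<Frame>(uint64_t)> &Symbolize) {
  SymbolCache Cache;
  for (uint64_t PC : PCs)
    Cache.try_emplace(PC, Symbolize(PC));
  return Cache;
}
```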
Differential Revision: https://reviews.llvm.org/D120430
Based on the discussion in D115393, I've updated the names to be more
descriptive.
Reviewed By: ellis, MaskRay
Differential Revision: https://reviews.llvm.org/D120092
This patch adds support for optional memory profile information to be
included with an indexed profile. The indexed profile header adds a new
field which points to the offset of the memory profile section (if
present) in the indexed profile. For users who do not utilize this
feature the only overhead is a 64-bit offset in the header.
The memory profile section contains (1) profile metadata describing the
information recorded for each entry, and (2) an on-disk hashtable containing
the profile records indexed via llvm::md5(function_name). We chose to
introduce a separate hash table instead of the existing one since the
indexing for the instrumented fdo hash table is based on a CFG hash
which itself is perturbed by memprof instrumentation.
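On the reader side, lookups hash the function name the same way; a sketch (assuming llvm::MD5Hash as the hashing primitive):
```
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/MD5.h"

// Sketch: the memprof on-disk hash table is keyed by the 64-bit MD5 of the
// function name, independent of the CFG-hash-based key used by the
// instrumentation FDO table.
uint64_t memProfKey(llvm::StringRef FunctionName) {
  return llvm::MD5Hash(FunctionName);
}
```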
This commit also includes the changes reviewed separately in D120093.
Differential Revision: https://reviews.llvm.org/D120103
We write the memprof internal call frame data in little endian format.
However when reading the frame information we were casting it directly
to a MemProfRecord::Frame pointer. In this change we add a separate
deserialization method which uses an endian reader to read the bytes as
little endian.
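A minimal sketch of the idea, assuming the endian helpers from llvm/Support/Endian.h as spelled at the time of this change (the field layout shown is illustrative, not the exact serialized format):
```
#include <cstdint>
#include "llvm/Support/Endian.h"

struct Frame {
  uint64_t Function;
  uint32_t LineOffset;
  uint32_t Column;
  bool IsInlineFrame;

  // Sketch: read each field as little endian instead of casting the buffer to
  // a Frame *, so big endian hosts decode the bytes correctly.
  static Frame deserialize(const unsigned char *Ptr) {
    using namespace llvm::support;
    Frame F;
    F.Function = endian::readNext<uint64_t, little, unaligned>(Ptr);
    F.LineOffset = endian::readNext<uint32_t, little, unaligned>(Ptr);
    F.Column = endian::readNext<uint32_t, little, unaligned>(Ptr);
    F.IsInlineFrame = endian::readNext<uint8_t, little, unaligned>(Ptr) != 0;
    return F;
  }
};
```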
This fixes https://lab.llvm.org/buildbot/#/builders/100/builds/12940
Differential Revision: https://reviews.llvm.org/D120093
This reverts commit 85355a560a.
This patch adds support for optional memory profile information to be
included with an indexed profile. The indexed profile header adds a new
field which points to the offset of the memory profile section (if
present) in the indexed profile. For users who do not utilize this
feature the only overhead is a 64-bit offset in the header.
The memory profile section contains (1) profile metadata describing the
information recorded for each entry, and (2) an on-disk hashtable containing
the profile records indexed via llvm::md5(function_name). We chose to
introduce a separate hash table instead of the existing one since the
indexing for the instrumented fdo hash table is based on a CFG hash
which itself is perturbed by memprof instrumentation.
Differential Revision: https://reviews.llvm.org/D118653
This reverts commit 9fd2cb21fb.
Fixes an issue on big endian systems where the format version
was not converted to little endian prior to passing to GET_VERSION.
Differential Revision: https://reviews.llvm.org/D118390
As usual with that header cleanup series, some implicit dependencies now need to
be explicit:
llvm/DebugInfo/DWARF/DWARFContext.h no longer includes:
- "llvm/DebugInfo/DWARF/DWARFAcceleratorTable.h"
- "llvm/DebugInfo/DWARF/DWARFCompileUnit.h"
- "llvm/DebugInfo/DWARF/DWARFDebugAbbrev.h"
- "llvm/DebugInfo/DWARF/DWARFDebugAranges.h"
- "llvm/DebugInfo/DWARF/DWARFDebugFrame.h"
- "llvm/DebugInfo/DWARF/DWARFDebugLoc.h"
- "llvm/DebugInfo/DWARF/DWARFDebugMacro.h"
- "llvm/DebugInfo/DWARF/DWARFGdbIndex.h"
- "llvm/DebugInfo/DWARF/DWARFSection.h"
- "llvm/DebugInfo/DWARF/DWARFTypeUnit.h"
- "llvm/DebugInfo/DWARF/DWARFUnitIndex.h"
Plus, llvm/Support/Errc.h is no longer included by a bunch of llvm/DebugInfo/DWARF/DWARF*.h files.
Preprocessed lines to build llvm on my setup:
after: 1065629059
before: 1066621848
Which is a great diff!
Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D119723
This reverts commit e6999040f5.
Update test to fix signed int comparison warning, fix whitespace in
compiler-rt MIBEntryDef.inc file.
Differential Revision: https://reviews.llvm.org/D117256
Do not merge a context that is already duplicated into the base profile.
Also fixing a typo caused by previous refactoring.
Reviewed By: wenlei
Differential Revision: https://reviews.llvm.org/D119735