Adds a fallback to use the debuginfod client library (386655) in `findDebugBinary`.
Fixed a cast of Error::success() to Expected<> in the debuginfod library.
Added Debuginfod to Symbolize deps in gn.
Adds new symbolizer symbols to `global_symbols.txt`.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D113717
This change adds options to llvm-ifs to allow it to generate multiple
types of stub files in a single invocation.
Differential Revision: https://reviews.llvm.org/D115024
Since total samples and body samples are used to compute the hotness threshold in the compiler, we found that in some services changing the total samples computation caused a noticeable regression. Hence, we revert the changes here and keep all total sample numbers identical to those produced by the old tool.
Three changes in this diff:
1. Revert the previous diff (https://reviews.llvm.org/D112672: [llvm-profgen] Update total samples by accumulating all its body samples) and put it under a switch.
2. Keep negative line numbers. Although the compiler doesn't consume these counts, they are used to compute the hot threshold.
3. Change to accumulate total samples per byte instead of per instruction.
Reviewed By: hoy, wenlei
Differential Revision: https://reviews.llvm.org/D115013
This change allows trimming the profile if it's considered cold for baseline AutoFDO. We reuse the cold threshold from `ProfileSummaryBuilder::getColdCountThreshold(..)`, which can be set by percent (--profile-summary-cutoff-cold) or by value (--profile-summary-cold-count).
Reviewed By: hoy, wenlei
Differential Revision: https://reviews.llvm.org/D113785
This reverts commit 02cc8d698c because it
caused buildbot failures. The issue appears to be simply that we need to
enable debuginfod only when the HTTPClient has been initialized by the
running tool, since InitLLVM no longer does the initialization step.
Adds a fallback to use the debuginfod client library (386655) in `findDebugBinary`.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D113717
This patch implements PAC return address signing for armv8-m. This patch roughly
accomplishes the following things:
- PAC and AUT instructions are generated.
- They're part of the stack frame setup, so that shrink-wrapping can move them
inwards to cover only part of a function.
- The auth code generated by PAC is saved across subroutine calls so that AUT
can find it again to check.
- PAC is emitted before stacking registers (so that the SP it signs is the one
on function entry).
- The new pseudo-register ra_auth_code is mentioned in the DWARF frame data.
- With CMSE also in use: PAC is emitted before stacking FPCXTNS, and AUT
validates the corresponding value of SP.
- Emit correct unwind information when PAC is replaced by PACBTI.
- Handle tail calls correctly.
Some notes:
We make the assembler accept the `.save {ra_auth_code}` directive that is
emitted by the compiler when it saves a register that contains a
return address authentication code.
For EHABI we need to have the `FrameSetup` flag on the instruction and
handle the `t2PACBTI` opcode (identically to `t2PAC`), so we can emit
`.save {ra_auth_code}`, instead of `.save {r12}`.
For PACBTI-M, the instruction which computes the return address PAC should use
the SP value before the adjustment for the argument register save area (used for
variadic functions and when a parameter is split between stack and registers),
but at the same time it should come after the instruction that saves FPCXT when
compiling a CMSE entry function.
This patch moves the varargs SP adjustment after the FPCXT save (they are never
enabled at the same time), so in a following patch handling of the `PAC`
instruction can be placed between them.
The epilogue emission code is adjusted in a similar manner.
PACBTI-M code generation should not emit any instructions for the architectures
v6-m and v8-m.base, or for A- and R-class cores. The diagnostic message for such
cases will be handled separately by a future ticket.
Note on tail calls:
If the called function has four arguments that occupy registers `r0`-`r3`, the
only option for holding the function pointer itself is `r12`, but this register
is used to keep the PAC during the function prologue/epilogue, which clobbers the
function pointer.
When we do the tail call we need the five registers (`r0`-`r3` and `r12`) to
keep six values - the four function arguments, the function pointer and the PAC,
which is obviously impossible.
One option would be to authenticate the return address before all callee-saved
registers are restored, so we have a scratch register to temporarily keep the
value of `r12`. The issue with this approach is that it violates a fundamental
invariant that PAC is computed using CFA as a modifier. It would also mean using
separate instructions to pop `lr` and the rest of the callee-saved registers,
which would offset the advantages of doing a tail call.
Instead, this patch disables indirect tail calls when the called function takes
four or more arguments and return address signing and authentication is enabled
for the caller function, conservatively assuming the caller function would spill
LR.
This patch is part of a series that adds support for the PACBTI-M extension of
the Armv8.1-M architecture, as detailed here:
https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/armv8-1-m-pointer-authentication-and-branch-target-identification-extension
The PACBTI-M specification can be found in the Armv8-M Architecture Reference
Manual:
https://developer.arm.com/documentation/ddi0553/latest
The following people contributed to this patch:
- Momchil Velikov
- Ties Stuij
Reviewed By: danielkiss
Differential Revision: https://reviews.llvm.org/D112429
This reverts commit 8cd61aac00.
I had missed one place in a test that needed updating; it passed on my
dirty build tree but not on a clean one.
Original commit message:
D114478 identified testing gaps; this patch fills them.
Differential Revision: https://reviews.llvm.org/D114913
Ignore undefined symbols; other minor code cleanup.
Replace test objects and their asm source with a yaml equivalent.
Differential Revision: https://reviews.llvm.org/D114478
The first 8 bytes of each raw profile section need to be 8-byte aligned, since
the first item in each section is a u64 count of the number of items in
the section.
Summary of changes:
* Assert alignment when reading counts.
* Update test to check alignment, relax some size checks to allow padding.
* Update raw binary inputs for llvm-profdata tests.
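For illustration, a minimal sketch of why the alignment matters (the helper name below is made up, not the actual reader code): an 8-byte-aligned section start lets the reader pull the leading u64 count straight out of the mapped buffer.
```
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical helper: each raw profile section must start on an 8-byte
// boundary so the leading u64 item count can be read directly.
static uint64_t readSectionItemCount(const uint8_t *SectionStart) {
  assert(reinterpret_cast<uintptr_t>(SectionStart) % 8 == 0 &&
         "raw profile section is not 8-byte aligned");
  uint64_t Count;
  std::memcpy(&Count, SectionStart, sizeof(Count)); // first item in section
  return Count;
}
```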
Differential Revision: https://reviews.llvm.org/D114826
In order to support generating profiles with the FS discriminator, three kinds of changes are made in llvm-profgen:
1) Disassemble the .rodata section to check whether the FS discriminator variable (`__llvm_fs_discriminator__`) exists and set the corresponding flag in the binary.
2) Change the discriminator decoding in `getBaseDiscriminator` and `getDuplicationFactor`.
3) Set `FunctionSamples::ProfileIsFS` to true to enable FS functionality in ProfileData.
Reviewed By: xur, hoy, wenlei
Differential Revision: https://reviews.llvm.org/D113296
The memprof profile reader tests rely on binary data which is generated
from and meant to be interpreted on little endian architectures. Add a
REQUIRES: x86_64-linux clause to both tests to ensure they don't fail on big
endian targets such as ppc.
This commit adds initial support to llvm-profdata to read and print
summaries of raw memprof profiles.
Summary of changes:
* Refactor shared defs to MemProfData.inc
* Extend show_main to display memprof profile summaries.
* Add a simple raw memprof profile reader.
* Add a couple of tests to tools/llvm-profdata.
Differential Revision: https://reviews.llvm.org/D114286
AutoFDO performance is sensitive to profile density, i.e., the amount of samples in the profile relative to the program size, because profiles with insufficient samples could be inaccurate due to statistical noise and thus hurt AutoFDO performance. A previous investigation showed that AutoFDO performed better on MySQL with increased amount of samples. Therefore, we implement a profile-density computation feature to give hints about profile density to users and the compiler.
We define the density of a profile Prof as follows:
- For each function A in the profile, density(A) = total_samples(A) / sizeof(A).
- density(Prof) = min(density(A)) for all functions A that are warm (defined below).
A function is considered warm if its total samples are within the top N percent of the profile. For the implementation, we reuse `ProfileSummaryBuilder::getHotCountThreshold(..)` as the threshold, which can be set by percent (`--profile-summary-cutoff-hot`) or by value (`--profile-summary-hot-count`).
We also introduce `--hot-function-density-threshold` to set the hot-function density threshold; a suggestion is given if the profile density is below it, which implies the sample count should be increased.
This also applies for CS profile with all profiles merged into base.
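For illustration only, a minimal sketch of the density definition above (the struct and function names are made up, not the llvm-profgen internals):
```
#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

struct FuncProfile {
  uint64_t TotalSamples;
  uint64_t SizeInBytes; // sizeof(A): the function's size in the binary
};

// density(Prof) = min over all warm functions A of
//                 total_samples(A) / sizeof(A)
double computeProfileDensity(const std::vector<FuncProfile> &Funcs,
                             uint64_t HotCountThreshold) {
  double MinDensity = std::numeric_limits<double>::max();
  for (const FuncProfile &F : Funcs) {
    if (F.TotalSamples < HotCountThreshold || F.SizeInBytes == 0)
      continue; // only warm functions participate
    MinDensity = std::min(MinDensity,
                          double(F.TotalSamples) / double(F.SizeInBytes));
  }
  return MinDensity;
}
```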
Reviewed By: hoy, wenlei
Differential Revision: https://reviews.llvm.org/D113781
These are some final test changes for using instruction referencing on X86:
* Most of these tests just have the flag switched so that they run with
instr-ref, and just work: these tests were fixed by earlier patches.
* There are some spurious differences in textual outputs.
* A few have different temporary labels in the output because more
MCSymbols are printed to the output.
Differential Revision: https://reviews.llvm.org/D114588
The UpdateTestChecks tool itself does not care about which pass
manager is used in the opt invocation, so the lit tests that verify
the behavior of the UpdateTestChecks tool are updated to use the
new-PM syntax (-passes=) when specifying the pass pipeline.
Differential Revision: https://reviews.llvm.org/D114517
Renamed the option for llvm-cov and changed variable names to use more
inclusive terms. Also changed the binary for the test.
Reviewed By: alanphipps
Differential Revision: https://reviews.llvm.org/D112816
Introduced/discussed in https://reviews.llvm.org/D38719
The header validation logic was also explicitly building the DWARFUnits
to validate. But then other calls, like "Units.getUnitForOffset", create
the DWARFUnits again in the DWARFContext proper - so, let's avoid
creating the DWARFUnits twice by walking the DWARFContext's units rather
than building a new list explicitly.
This does reduce some verifier power - it means that any unit with a
header parsing failure won't get further validation, whereas the
verifier-created units were getting some further validation despite
invalid headers. I don't think this is a great loss; it seems "right" in
some ways to me that if the header is invalid we should stop there.
Exposing the raw DWARFUnitVectors from DWARFContext feels a bit
sub-optimal, but gave simple access to the getUnitForOffset to keep the
rest of the code fairly similar.
With link-time optimizations enabled, the resulting DWARF may end up containing
cross-CU references (through the DW_AT_abstract_origin attribute).
Consider the following example:
// sum.c
__attribute__((always_inline)) int sum(int a, int b)
{
return a + b;
}
// main.c
extern int sum(int, int);
int main()
{
int a = 5, b = 10, c = sum(a, b);
return 0;
}
Compiled as follows:
$ clang -g -flto -fuse-ld=lld main.c sum.c -o main
Results in the following DWARF:
-- sum.c CU: abstract instance tree
...
0x000000b0: DW_TAG_subprogram
DW_AT_name ("sum")
DW_AT_decl_file ("sum.c")
DW_AT_decl_line (1)
DW_AT_prototyped (true)
DW_AT_type (0x000000d3 "int")
DW_AT_external (true)
DW_AT_inline (DW_INL_inlined)
0x000000bc: DW_TAG_formal_parameter
DW_AT_name ("a")
DW_AT_decl_file ("sum.c")
DW_AT_decl_line (1)
DW_AT_type (0x000000d3 "int")
0x000000c7: DW_TAG_formal_parameter
DW_AT_name ("b")
DW_AT_decl_file ("sum.c")
DW_AT_decl_line (1)
DW_AT_type (0x000000d3 "int")
...
-- main.c CU: concrete inlined instance tree
...
0x0000006d: DW_TAG_inlined_subroutine
DW_AT_abstract_origin (0x00000000000000b0 "sum")
DW_AT_low_pc (0x00000000002016ef)
DW_AT_high_pc (0x00000000002016f1)
DW_AT_call_file ("main.c")
DW_AT_call_line (5)
DW_AT_call_column (0x19)
0x00000081: DW_TAG_formal_parameter
DW_AT_location (DW_OP_reg0 RAX)
DW_AT_abstract_origin (0x00000000000000bc "a")
0x00000088: DW_TAG_formal_parameter
DW_AT_location (DW_OP_reg2 RCX)
DW_AT_abstract_origin (0x00000000000000c7 "b")
...
Note that each entry within the concrete inlined instance tree in
the main.c CU has a DW_AT_abstract_origin attribute which
refers to a corresponding entry within the abstract instance
tree in the sum.c CU.
llvm-dwarfdump --statistics did not properly report
DW_TAG_formal_parameters/DW_TAG_variables from concrete inlined
instance trees which had 0% location coverage and which
referred to a different CU, mainly because information about abstract
instance trees and their parameters/variables was stored
locally - just for the currently processed CU,
rather than globally - for all CUs.
In particular, if the concrete inlined instance tree from
the example above was to look like this
(i.e. parameter b has 0% location coverage, hence why it's missing):
0x0000006d: DW_TAG_inlined_subroutine
DW_AT_abstract_origin (0x00000000000000b0 "sum")
DW_AT_low_pc (0x00000000002016ef)
DW_AT_high_pc (0x00000000002016f1)
DW_AT_call_file ("main.c")
DW_AT_call_line (5)
DW_AT_call_column (0x19)
0x00000081: DW_TAG_formal_parameter
DW_AT_location (DW_OP_reg0 RAX)
DW_AT_abstract_origin (0x00000000000000bc "a")
llvm-dwarfdump --statistics would not have reported b as such.
Patch by Dimitrije Milosevic.
Differential revision: https://reviews.llvm.org/D113465
This patch adds parallel processing of chunks. When reducing very large
inputs, e.g. functions with 500k basic blocks, processing chunks in
parallel can significantly speed up the reduction.
To allow modifying clones of the original module in parallel, each clone
needs its own LLVMContext object. To achieve this, each job parses the
input module with its own LLVMContext. In case a job successfully
reduced the input, it serializes the result module as bitcode into a
result array.
To ensure parallel reduction produces the same results as serial
reduction, only the first successfully reduced result is used, and
results of other successful jobs are dropped. Processing resumes after
the chunk that was successfully reduced.
The number of threads to use can be configured using the -j option.
It defaults to 1, which means serial processing.
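A simplified sketch of this scheduling strategy (illustrative only; the callable and names below are stand-ins, not the actual llvm-reduce code):
```
#include <algorithm>
#include <functional>
#include <future>
#include <optional>
#include <string>
#include <vector>

// Each job works on its own copy of the input (in the real tool, each job
// re-parses the module into its own LLVMContext). The first chunk, in
// order, that reduces successfully wins; later successes are dropped.
std::optional<std::string> reduceChunksInParallel(
    const std::string &Input, size_t NumChunks, unsigned NumJobs,
    const std::function<std::optional<std::string>(const std::string &,
                                                   size_t)> &TryReduceChunk) {
  for (size_t Base = 0; Base < NumChunks; Base += NumJobs) {
    std::vector<std::future<std::optional<std::string>>> Jobs;
    size_t End = std::min<size_t>(Base + NumJobs, NumChunks);
    for (size_t I = Base; I < End; ++I)
      Jobs.push_back(std::async(std::launch::async, TryReduceChunk, Input, I));
    for (auto &J : Jobs)
      if (auto Result = J.get()) // first success in chunk order
        return Result;
  }
  return std::nullopt;
}
```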
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D113857
Add an alias of `addi [x], zero, imm` to generate the pseudo
instruction li, which makes assembly much more readable.
For existing tests, users can update them by running the script
`llvm/utils/update_llc_test_checks.py`.
Reviewed By: asb
Differential Revision: https://reviews.llvm.org/D112692
There are various tests that need to be adjusted to test the right
thing with instruction referencing -- usually because the internal
representation of variables is different, and sometimes because location
lists change. This patch makes a bunch of tests explicitly not use
instruction referencing, so that a check-llvm test with instruction
referencing on for x86_64 doesn't fail. I'll then convert the tests
to have instr-ref CHECK lines, and similar.
Differential Revision: https://reviews.llvm.org/D113194
This is another attempt at D59351 which attempted to add --update-section, but
with some heuristics for adjusting segment/section offsets/sizes in the event
the data copied into the section is larger than the original size of the section.
We are opting to not support this case. GNU's objcopy was able to do this because
the linker and objcopy are tightly coupled enough that segment reformatting was
simpler. This is not the case with llvm-objcopy and lld, which are deliberately kept separate.
This will attempt to copy data into the section without changing any other
properties of the parent segment (if the section is part of one).
Differential Revision: https://reviews.llvm.org/D112116
Textual LLVM IR files are much bigger and take longer to write to disk.
To avoid the extra cost incurred by serializing to text, this patch adds
an option to save temporary files as bitcode instead.
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D113858
Adding `-use-loadable-segment-as-base` to allow using the first loadable segment for calculating the offset. By default, the first executable segment is used for calculating the offset. The switch helps compatibility with unsymbolized profiles generated by older tools.
Differential Revision: https://reviews.llvm.org/D113727
The file seems to have been put in the wrong place in its original
commit. This had the effect of marking all llvm-nm tests as unsupported,
unless X86 was enabled, even for tests that weren't X86 specific.
Fixes https://bugs.llvm.org/show_bug.cgi?id=52506.
Reviewed by: mstorsjo
Differential Revision: https://reviews.llvm.org/D113882
Add support for demangling Rust v0 symbols to llvm-nm by reusing
nonMicrosoftDemangle which supports both Itanium and Rust mangling.
Reviewed By: dblaikie, jhenderson
Differential Revision: https://reviews.llvm.org/D111937
Metadata operands tend to require special conditions, especially on dbg
intrinsics. We also don't have a zero value for metadata.
Replacing callee operands is a little weird, since calling undef/null
doesn't make sense. It also causes tons of invalid reductions when
reducing calls to intrinsics since only arguments to intrinsics can be
of the metadata type.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D113532
Add a new "operands-skip" pass whose goal is to remove instructions in the middle of dependency chains. For instance:
```
%baseptr = alloca i32
%arrayidx = getelementptr i32, i32* %baseptr, i32 %idxprom
store i32 42, i32* %arrayidx
```
might be reducible to
```
%baseptr = alloca i32
%arrayidx = getelementptr ... ; now dead, together with the computation of %idxprom
store i32 42, i32* %baseptr
```
Other passes would either replace `%baseptr` with undef (operands, instructions) or move it to become a function argument (operands-to-args), both of which might fail the interestingness check.
In principle the implementation allows operand replacement with any value or instruction in the function that passes the filter constraints (same type, dominance, "more reduced"), but is limited in this patch to values that are directly or indirectly used to compute the current operand value, motivated by the example above. Additionally, function arguments are added to the candidate set, which helps reduce the number of relevant arguments, mitigating a concern about too many arguments mentioned in https://reviews.llvm.org/D110274#3025013.
Possible future extensions:
* Instead of requiring the same type, bitcast/trunc/zext could be automatically inserted for some more flexibility.
* If undef is added to the candidate set, "operands-skip" is able to produce any reduction that "operands" can do. Additional candidates might be zero and one, where the "reductive power" classification can prefer one over the other. If undefined behaviour should not be introduced, undef can be removed from the candidate set.
Recommit after resolving conflict with D112651 and reusing
shouldReduceOperand from D113532.
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D111818
Add a new "operands-skip" pass whose goal is to remove instructions in the middle of dependency chains. For instance:
```
%baseptr = alloca i32
%arrayidx = getelementptr i32, i32* %baseptr, i32 %idxprom
store i32 42, i32* %arrayidx
```
might be reducible to
```
%baseptr = alloca i32
%arrayidx = getelementptr ... ; now dead, together with the computation of %idxprom
store i32 42, i32* %baseptr
```
Other passes would either replace `%baseptr` with undef (operands, instructions) or move it to become a function argument (operands-to-args), both of which might fail the interestingness check.
In principle the implementation allows operand replacement with any value or instruction in the function that passes the filter constraints (same type, dominance, "more reduced"), but is limited in this patch to values that are directly or indirectly used to compute the current operand value, motivated by the example above. Additionally, function arguments are added to the candidate set, which helps reduce the number of relevant arguments, mitigating a concern about too many arguments mentioned in https://reviews.llvm.org/D110274#3025013.
Possible future extensions:
* Instead of requiring the same type, bitcast/trunc/zext could be automatically inserted for some more flexibility.
* If undef is added to the candidate set, "operands-skip" is able to produce any reduction that "operands" can do. Additional candidates might be zero and one, where the "reductive power" classification can prefer one over the other. If undefined behaviour should not be introduced, undef can be removed from the candidate set.
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D111818
`llvm-otool -tV foo.o` and `llvm-objdump --macho -d foo.o` would
previously fail on object files containing @TLVPPAGE or @TLVPPAGEOFF relocs.
Move the llvm-objdump-specific test from
llvm/test/MC/AArch64/arm64-tls-modifiers-darwin.s to the new
llvm/test/tools/llvm-objdump/MachO/disassemble-arm64-tlv-modifers.test
and put the test for this fix into that new file.
Fixes PR52356.
Differential Revision: https://reviews.llvm.org/D112843
Summary: Fix the build failure on MSVC by making the `T` and `U` of the function
'T llvm::Optional<T>::getValueOr<llvm::yaml::Hex32>(U &&) const &' the same.
Differential Revision: https://reviews.llvm.org/D111487
This patch fixes some issues introduced in
https://reviews.llvm.org/D108942:
1) Remove the default label to fix the bots that use
-Werror,-Wcovered-switch-default
2) Modify the malformed test to fix the bots that are
built without zlib support
3) Modify some error messages in malformed profiles
Previously, if the basic-blocks delta pass tried to remove a basic block
that was the last basic block in a function that did not have external
or weak linkage, the resulting IR would become invalid. Since removing
the last basic block in a function is effectively identical to removing
the function body itself, we check explicitly for this case and if we
detect it, we run the same logic as in ReduceFunctionBodies.cpp
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D113486
Sometimes if llvm-reduce is interrupted in the middle of a delta pass on
a large file, it can take quite some time for the tool to start actually
doing new work if it is restarted again on the partially-reduced file. A
lot of time ends up being spent testing large chunks when these large
chunks are very unlikely to actually pass the interestingness test. In
cases like this, the tool will complete faster if the starting
granularity is reduced to a finer amount. Thus, we introduce a command
line flag that automatically divides the chunks into smaller subsets a
fixed, user-specified number of times prior to beginning the core loop.
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D112651
If profile data is malformed for any reason, we generate
an error that only reports "malformed instrumentation profile data"
without any further information. This patch extends the InstrProfError
class to receive an optional error message argument, so that we can
do better error reporting.
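Roughly, the idea is the following (an illustrative stand-in, not the exact LLVM class):
```
#include <string>
#include <utility>

enum class instrprof_error { success, malformed /* ... */ };

// Carry an optional detail string alongside the error code so the report
// can say more than just "malformed instrumentation profile data".
class InstrProfError {
public:
  explicit InstrProfError(instrprof_error Err, std::string ErrMsg = "")
      : Err(Err), Msg(std::move(ErrMsg)) {}

  std::string message() const {
    std::string Base = "malformed instrumentation profile data";
    return Msg.empty() ? Base : Base + ": " + Msg;
  }

private:
  instrprof_error Err;
  std::string Msg;
};
```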
Differential Revision: https://reviews.llvm.org/D108942
It is often useful to know which die is the parent of the current die.
This patch adds information about parent offset into the dump:
0x0000000b: DW_TAG_compile_unit
DW_AT_producer ("by_hand")
0x00000014: DW_TAG_base_type (0x0000000b) <<<<<<<<<<<<<<
DW_AT_name ("int")
Now it is easy to see which die is the parent of the current die.
This patch makes that behaviour the default.
We can make it opt-in if necessary.
This functionality differs from the existing "--show-parents" option
in the sense that parent information is shown for all dies, and
only a link to the immediate parent is shown.
Differential Revision: https://reviews.llvm.org/D113406
Summary:
This patch adds yaml2obj supporting for the auxiliary
file header of XCOFF.
Reviewed By: DiggerLin, jhenderson
Differential Revision: https://reviews.llvm.org/D111487
A new tool that compares TargetLibraryInfo's opinion of the availability
of library function calls against the functions actually exported by a
specified set of libraries. Can be helpful in verifying the correctness
of TLI for a given target, and avoiding mishaps such as those that had to be addressed
in D107509 and 94b4598d.
The tool currently supports ELF object files only, although it's unlikely
to be hard to add support for other formats.
Re-commits 62dd488 with changes to use pre-generated objects, as not all
bots have ld.lld available.
Differential Revision: https://reviews.llvm.org/D111358
A new tool that compares TargetLibraryInfo's opinion of the availability
of library function calls against the functions actually exported by a
specified set of libraries. Can be helpful in verifying the correctness
of TLI for a given target, and avoiding mishaps such as those that had to be addressed
in D107509 and 94b4598d.
The tool currently supports ELF object files only, although it's unlikely
to be hard to add support for other formats.
Differential Revision: https://reviews.llvm.org/D111358
I am planning to upstream MachOObjectFile code to support Darwin
chained fixups. In order to test the new parser features we need a way
to produce correct (and incorrect) chained fixups. Right now the only
tool that can produce them is the Darwin linker. To avoid having to
check in binary files, this patch allows obj2yaml to print a hexdump
of the raw LINKEDIT and DATA segments, which both allows bootstrapping
the parser and enables us to easily create malformed inputs
to test error handling in the parser.
This patch adds two new options to obj2yaml:
-raw-data-segment
-raw-linkedit-segment
Differential Revision: https://reviews.llvm.org/D113234
Replace the description and file names for this argument. As far as I understand,
this is a positional argument and I don't believe this change breaks any existing
interfaces.
Reviewed By: hctim, MaskRay
Differential Revision: https://reviews.llvm.org/D113316
Matching a recent clang change I've made, now 'int[3]' is formatted
without the space between the type and array bound. This commit updates
libDebugInfoDWARF/llvm-dwarfdump to match that formatting.
Previously we assumed there are some non-executable sections at the bottom of the text section, so we wouldn't hit the array's bound. But on a BOLTed binary, it turned out the .bolt section is at the bottom of the text section and can be profiled, which crashed llvm-profgen. This change tries to fix it.
Reviewed By: hoy, wenlei
Differential Revision: https://reviews.llvm.org/D113238
Actually we can, for now, remove the explicit "operator" handling
entirely - since clang currently won't try to flag any of these as
rebuildable. That seems like a reasonable state for now, but it could be
narrowed down to only apply to conversion operators, most likely - but
would need more nuance for op> and op>> since they would be incorrectly
flagged as already having their template arguments (due to the trailing
'>').
As seen in https://bugs.llvm.org/show_bug.cgi?id=52213 llvm-objdump
asserts if either the --debug-vars or the --dwarf options are provided
with invalid values. As suggested, this fix adds a default value
for these options and reports an error when given bad input.
Differential Revision: https://reviews.llvm.org/D112183
Two things in this diff:
1) Warn on invalid ranges; there are currently three types of checking, see the detailed messages in the code.
2) In some situations, llvm-profgen gives lots of warnings on truncated stacks, which is noisy. This change provides a switch, `--show-detailed-warning`, to skip those warnings. Alternatively, we use a summary for those warnings and show the percentage of cases with those issues.
Example of warning summary.
```
warning: 0.05%(1120/2428958) cases with issue: Profile context truncated due to missing probe for call instruction.
warning: 0.00%(2/178637) cases with issue: Range does not belong to any functions, likely from external function.
```
Reviewed By: hoy
Differential Revision: https://reviews.llvm.org/D111902
Notes generated in OpenBSD core files provide additional information
about the kernel state and CPU registers. These notes are described
in core.5, which can be viewed here: https://man.openbsd.org/core.5
Differential Revision: https://reviews.llvm.org/D111966
(Second try. Need to link against CodeGen and MC libs.)
The llvm-reduce tool has been extended to operate on MIR (import, clone and
export). Current limitation is that only a single machine function is
supported. A single reducer pass that operates on machine instructions (while
on SSA-form) has been added. Additional MIR specific reducer passes can be
added later as needed.
Differential Revision: https://reviews.llvm.org/D110527
The llvm-reduce tool has been extended to operate on MIR (import, clone and
export). Current limitation is that only a single machine function is
supported. A single reducer pass that operates on machine instructions (while
on SSA-form) has been added. Additional MIR specific reducer passes can be
added later as needed.
Differential Revision: https://reviews.llvm.org/D110527
Allow filling zero counts for all the function ranges even if there are no samples hitting that function. Add a switch for this.
Reviewed By: hoy, wenlei
Differential Revision: https://reviews.llvm.org/D112858
with tests generated by yaml2obj.
Summary: Because yaml2obj supports basic transformations for XCOFF,
some of the binary inputs used in the tests of llvm-readobj
can be replaced with yaml files.
Reviewed By: shchenz
Differential Revision: https://reviews.llvm.org/D111699
Like the probe-based profile, the total samples should be the sum of all its body samples. This patch fixes it with a post-processing update for the line-number based profile. Tested on our internal services; results showed no performance change.
Reviewed By: hoy, wenlei
Differential Revision: https://reviews.llvm.org/D112672
The previous implementation of populating the profile symbol list was wrong: it only included the profiled symbols. Actually it should use all symbols, so this switches to using the symbols from debug info. Also turned the flag off by default.
Reviewed By: wenlei, hoy
Differential Revision: https://reviews.llvm.org/D111824
This was checked while counting but not actually when doing the reduction, resulting in crashes.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D112766
Functions in different sections (common in object files - inline
functions, -ffunction-sections, etc) can't overlap, so factor in the
section when diagnosing overlapping address ranges.
This removes a major false-positive when running llvm-dwarfdump on
unlinked code.
Adding support to the CS preinliner to trim cold base profiles. This makes trimming consistent with the inline decision made by the preinliner. Also disable the existing profile merger when the preinliner is on, unless explicitly specified.
Reviewed By: wenlei, wlei
Differential Revision: https://reviews.llvm.org/D112489
GNU sed offers `,+4d` to delete a line and the next four lines, but BSD
sed doesn't seem to support it (at least in macOS 10.15, though it seems to work
in my 11.6 version).
Replace the usage of the extension with the equivalent syntax that works
both in BSD and GNU sed. I don't have a macOS 10.15 to check, but this
works in both my macOS 11.6 and Linux machines.
Differential Revision: https://reviews.llvm.org/D112583
**Context:**
This is a second attempt at introducing signature regeneration to llvm-objcopy. In this diff: https://reviews.llvm.org/D109840, a script was introduced to test
the validity of a code signature. In this diff: https://reviews.llvm.org/D109803 (now reverted), an effort was made to extract the signature generation behavior out of LLD into a common location for use in llvm-objcopy. In this diff: https://reviews.llvm.org/D109972 it was decided that there was no appropriate common location and that a small amount of duplication to bring signature generation to llvm-objcopy would be better. This diff introduces this duplication.
**Summary**
Prior to this change, if an LC_CODE_SIGNATURE load command
was included in the binary passed to llvm-objcopy, the command and
associated section were simply copied and included verbatim in the
new binary. If the rest of the binary was modified at all, this resulted
in an invalid Mach-O file. This change regenerates the signature
rather than copying it.
The code_signature_lc.test test was modified to include the yaml
representation of a small signed MachO executable in order to
effectively test the signature generation.
Reviewed By: alexander-shaposhnikov, #lld-macho
Differential Revision: https://reviews.llvm.org/D111164
This change allows the unsymbolized profile as input. The unsymbolized profile is created by `llvm-profgen` with `--skip-symbolization`; it is produced after sample aggregation but before symbolization, so it has a much smaller file size. It can be used for sample merging and trimming, and is also useful for debugging or adding test cases. A switch `--unsymbolized-profile=file-path` is added for this.
Format of unsymbolized profile:
```
[context stack1] # If it's a CS profile
number of entries in RangeCounter
from_1-to_1:count_1
from_2-to_2:count_2
......
from_n-to_n:count_n
number of entries in BranchCounter
src_1->dst_1:count_1
src_2->dst_2:count_2
......
src_n->dst_n:count_n
[context stack2]
......
```
Reviewed By: hoy, wenlei
Differential Revision: https://reviews.llvm.org/D111750
Some DWARF loaders in LLVM are hard-coded to only accept 4-byte and 8-byte address sizes. This patch generalizes acceptance into `DWARFContext::isAddressSizeSupported` and provides a common way to generate rejection errors.
The MSP430 target has been given new tests to cover DWARF loading cases that previously failed due to 2-byte addresses.
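A minimal sketch of the kind of shared check described here (the real `DWARFContext::isAddressSizeSupported` may differ in signature and in the exact set of accepted sizes):
```
#include <cstdint>

// Accept the address sizes DWARF sections are expected to use, instead of
// hard-coding 4 and 8 in each loader; 2-byte addresses cover targets such
// as MSP430.
static bool isAddressSizeSupported(uint8_t AddressSize) {
  switch (AddressSize) {
  case 2:
  case 4:
  case 8:
    return true;
  default:
    return false;
  }
}
```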
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D111953
We incorrectly use the duplication factor for total samples even though we already accumulate samples instead of taking MAX. It causes the profile to have bloated total samples for functions with loops unrolled or vectorized. The change fixes the issue for total samples, head samples and call target samples.
Differential Revision: https://reviews.llvm.org/D112042
Having non-undef constants in a final llvm-reduce output is nicer than
having undefs.
This splits the existing reduce-operands pass into three, one which does
the same as the current pass of reducing to undef, and two more to
reduce to the constant 1 and the constant 0. Do not reduce to undef if
the operand is a ConstantData, and do not reduce 0s to 1s.
Reducing GEP operands very frequently causes invalid IR (since types may
not match up if we index differently into a struct), so don't touch GEPs.
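Sketched roughly in LLVM-API terms, the rules above amount to something like the following (an illustration, not the pass's actual code):
```
#include "llvm/IR/Constants.h"
#include "llvm/IR/Operator.h"
#include "llvm/IR/Use.h"

using namespace llvm;

// Rough illustration of the rules described above: skip GEP operands, don't
// replace ConstantData with undef, and don't "reduce" a zero to a one.
static bool shouldReplaceOperand(const Use &Op, const Constant &NewC) {
  const Value *OldV = Op.get();
  if (isa<GEPOperator>(Op.getUser()))
    return false; // changing GEP indices very often produces invalid IR
  if (isa<UndefValue>(&NewC) && isa<ConstantData>(OldV))
    return false; // ConstantData is already "more reduced" than undef
  if (const auto *CI = dyn_cast<ConstantInt>(OldV))
    if (CI->isZero() && NewC.isOneValue())
      return false; // 0 is considered more reduced than 1
  return OldV != &NewC;
}
```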
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D111765
Both ports are required for BitTest ops. Update the uop counts + port usage based on the most recent llvm-exegesis captures and what Intel AoM / Agner reports as well.