The per-PSB packet decoding logic was wrong because it assumed that pt_insn_get_sync_offset was updated after every PSB. Silly me, that is not true. It returns the offset of the PSB packet found by pt_insn_sync_forward, regardless of how many PSBs are visited later. Instead, I'm now following the approach described in https://github.com/intel/libipt/blob/master/doc/howto_libipt.md#parallel-decode for parallel decoding, which is basically what we need.

A nasty error caused by this is that when we had two PSBs (A and B), the following happened:

1. PSB A was processed all the way up to the end of the trace, which includes PSB B.
2. PSB B was then processed until the end of the trace.

The instructions emitted by step 2 were also emitted as part of step 1, so our trace had duplicated chunks. This problem gets worse the more PSBs there are.

As part of making sure this diff is correct, I added some other features that are very useful:

- Added a "synchronization point" event to the TraceCursor, so we can inspect when PSBs are emitted.
- Removed the single-thread decoder. Now the per-cpu decoder and single-thread decoder use the same code paths.
- Use the query decoder to fetch PSBs and timestamps. It turns out that pt_insn_sync_forward of the instruction decoder can move past several PSBs (which means we could skip some TSCs). On the other hand, the pt_query_sync_forward method doesn't skip PSBs, so we can get more accurate sync events and timing information.
- Turned LibiptDecoder into PSBBlockDecoder, which decodes single PSB blocks. It is the fundamental processing unit for decoding.
- Added many comments and asserts, and improved error handling for clarity.
- Improved DecodeSystemWideTraceForThread so that a TSC is always emitted before a cpu change event. This was a bug that was annoying me before.
- SplitTraceInContinuousExecutions and FindLowestTSCInTrace now use the query decoder, which can identify precisely each PSB along with its TSC.
- Added an "only-events" option to the trace dumper to inspect only events.

I did extensive testing, and I think we should have an in-house testing CI. The LLVM buildbots are not capable of supporting tests on post-mortem traces of hundreds of megabytes. I'll leave that for later, but at least for now the current tests were able to catch most of the issues I encountered while doing this task.

A sample output of a program that I was single stepping is the following. You can see that only one PSB is emitted even though stepping happened!

```
thread #1: tid = 3578223
    0: (event) trace synchronization point [offset = 0x0xef0]
  a.out`main + 20 at main.cpp:29:20
    1: 0x0000000000402479    leaq   -0x1210(%rbp), %rax
    2: (event) software disabled tracing
    3: 0x0000000000402480    movq   %rax, %rdi
    4: (event) software disabled tracing
    5: (event) software disabled tracing
    6: 0x0000000000402483    callq  0x403bd4    ; std::vector<int, std::allocator<int>>::vector at stl_vector.h:391:7
    7: (event) software disabled tracing
  a.out`std::vector<int, std::allocator<int>>::vector() at stl_vector.h:391:7
    8: 0x0000000000403bd4    pushq  %rbp
    9: (event) software disabled tracing
   10: 0x0000000000403bd5    movq   %rsp, %rbp
   11: (event) software disabled tracing
```

This is another trace of a long program with a few PSBs.
```
(lldb) thread trace dump instructions -E -f
thread #1: tid = 3603082
     0: (event) trace synchronization point [offset = 0x0x80]
 47417: (event) software disabled tracing
129231: (event) trace synchronization point [offset = 0x0x800]
146747: (event) software disabled tracing
246076: (event) software disabled tracing
259068: (event) trace synchronization point [offset = 0x0xf78]
259276: (event) software disabled tracing
259278: (event) software disabled tracing
no more data
```

Differential Revision: https://reviews.llvm.org/D131630
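For reference, the parallel-decode pattern from the libipt howto that this change follows looks roughly like the sketch below. This is a minimal illustration, not the actual LLDB code from this patch: the helper names `CollectPSBOffsets` and `DecodePSBBlock` are made up, and error handling is reduced to early returns. The idea is to enumerate PSBs with the query decoder (which stops at every PSB), then hand each PSB block, bounded by the next PSB's offset, to its own instruction decoder.

```cpp
// Minimal sketch, assuming libipt's C API: split an Intel PT trace buffer into
// per-PSB blocks with the query decoder, then decode each block with its own
// instruction decoder so no block ever re-decodes a later PSB.
#include <intel-pt.h>

#include <cstdint>
#include <vector>

// Hypothetical helper: return the offset of every PSB in the trace described
// by `config`. pt_qry_sync_forward stops at each PSB instead of skipping some.
static std::vector<uint64_t> CollectPSBOffsets(const pt_config &config) {
  std::vector<uint64_t> offsets;
  pt_query_decoder *query = pt_qry_alloc_decoder(&config);
  if (!query)
    return offsets;

  for (;;) {
    uint64_t ip = 0;
    if (pt_qry_sync_forward(query, &ip) < 0)
      break; // -pte_eos means there are no more PSBs.

    uint64_t offset = 0;
    if (pt_qry_get_sync_offset(query, &offset) < 0)
      break;
    offsets.push_back(offset);
  }
  pt_qry_free_decoder(query);
  return offsets;
}

// Hypothetical helper: decode the single PSB block spanning
// [begin_offset, end_offset) of the trace buffer.
static int DecodePSBBlock(pt_config config, uint64_t begin_offset,
                          uint64_t end_offset, pt_image *image) {
  uint8_t *base = config.begin;
  config.begin = base + begin_offset; // Restrict the decoder to this block so
  config.end = base + end_offset;     // it cannot run into the next PSB.

  pt_insn_decoder *decoder = pt_insn_alloc_decoder(&config);
  if (!decoder)
    return -pte_nomem;
  pt_insn_set_image(decoder, image);

  int status = pt_insn_sync_forward(decoder); // Syncs to this block's PSB.
  while (status >= 0) {
    while (status & pts_event_pending) { // Drain pending events first.
      pt_event event;
      status = pt_insn_event(decoder, &event, sizeof(event));
      if (status < 0)
        break;
    }
    if (status < 0)
      break;

    pt_insn insn;
    status = pt_insn_next(decoder, &insn, sizeof(insn));
    if (status >= 0) {
      // Hand `insn` to the trace consumer here.
    }
  }
  pt_insn_free_decoder(decoder);
  return status == -pte_eos ? 0 : status;
}
```

Each block ends where the next PSB begins (or at the end of the buffer for the last one), which is what prevents the duplicated chunks described above.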
# The LLVM Compiler Infrastructure
This directory and its sub-directories contain the source code for LLVM, a toolkit for the construction of highly optimized compilers, optimizers, and run-time environments.
The README briefly describes how to get started with building LLVM. For more information on how to contribute to the LLVM project, please take a look at the Contributing to LLVM guide.
## Getting Started with the LLVM System
Taken from here.
### Overview
Welcome to the LLVM project!
The LLVM project has multiple components. The core of the project is itself called "LLVM". This contains all of the tools, libraries, and header files needed to process intermediate representations and convert them into object files. Tools include an assembler, disassembler, bitcode analyzer, and bitcode optimizer. It also contains basic regression tests.
C-like languages use the Clang frontend. This component compiles C, C++, Objective-C, and Objective-C++ code into LLVM bitcode -- and from there into object files, using LLVM.
Other components include: the libc++ C++ standard library, the LLD linker, and more.
### Getting the Source Code and Building LLVM
The LLVM Getting Started documentation may be out of date. The Clang Getting Started page might have more accurate information.
This is an example work-flow and configuration to get and build the LLVM source:
1. Checkout LLVM (including related sub-projects like Clang):

   * `git clone https://github.com/llvm/llvm-project.git`
   * Or, on windows, `git clone --config core.autocrlf=false https://github.com/llvm/llvm-project.git`

2. Configure and build LLVM and Clang:

   * `cd llvm-project`
   * `cmake -S llvm -B build -G <generator> [options]`

     Some common build system generators are:

     * `Ninja` --- for generating Ninja build files. Most llvm developers use Ninja.
     * `Unix Makefiles` --- for generating make-compatible parallel makefiles.
     * `Visual Studio` --- for generating Visual Studio projects and solutions.
     * `Xcode` --- for generating Xcode projects.

     Some common options:

     * `-DLLVM_ENABLE_PROJECTS='...'` and `-DLLVM_ENABLE_RUNTIMES='...'` --- semicolon-separated list of the LLVM sub-projects and runtimes you'd like to additionally build. `LLVM_ENABLE_PROJECTS` can include any of: clang, clang-tools-extra, cross-project-tests, flang, libc, libclc, lld, lldb, mlir, openmp, polly, or pstl. `LLVM_ENABLE_RUNTIMES` can include any of libcxx, libcxxabi, libunwind, compiler-rt, libc or openmp. Some runtime projects can be specified either in `LLVM_ENABLE_PROJECTS` or in `LLVM_ENABLE_RUNTIMES`.

       For example, to build LLVM, Clang, libcxx, and libcxxabi, use `-DLLVM_ENABLE_PROJECTS="clang" -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi"`.

     * `-DCMAKE_INSTALL_PREFIX=directory` --- Specify for directory the full path name of where you want the LLVM tools and libraries to be installed (default `/usr/local`). Be careful if you install runtime libraries: if your system uses those provided by LLVM (like libc++ or libc++abi), you must not overwrite your system's copy of those libraries, since that could render your system unusable. In general, using something like `/usr` is not advised, but `/usr/local` is fine.

     * `-DCMAKE_BUILD_TYPE=type` --- Valid options for type are Debug, Release, RelWithDebInfo, and MinSizeRel. Default is Debug.

     * `-DLLVM_ENABLE_ASSERTIONS=On` --- Compile with assertion checks enabled (default is Yes for Debug builds, No for all other build types).

   * `cmake --build build [-- [options] <target>]` or your build system specified above directly.

     * The default target (i.e. `ninja` or `make`) will build all of LLVM.
     * The `check-all` target (i.e. `ninja check-all`) will run the regression tests to ensure everything is in working order.
     * CMake will generate targets for each tool and library, and most LLVM sub-projects generate their own `check-<project>` target.
     * Running a serial build will be slow. To improve speed, try running a parallel build. That's done by default in Ninja; for `make`, use the option `-j NNN`, where `NNN` is the number of parallel jobs to run. In most cases, you get the best performance if you specify the number of CPU threads you have. On some Unix systems, you can specify this with `-j$(nproc)`.

   * For more information see CMake.
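Putting the pieces above together, a first Ninja-based configure and build might look like the following. The chosen project list, build type, and test target here are only an illustrative combination of the options described above, not a required configuration:

```
cmake -S llvm -B build -G Ninja \
    -DLLVM_ENABLE_PROJECTS="clang" \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_ENABLE_ASSERTIONS=On
cmake --build build                        # builds all of LLVM and Clang
cmake --build build --target check-llvm    # optionally run LLVM's regression tests
```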
Consult the Getting Started with LLVM page for detailed information on configuring and compiling LLVM. You can visit Directory Layout to learn about the layout of the source code tree.
### Getting in touch
Join LLVM Discourse forums, discord chat or #llvm IRC channel on OFTC.
The LLVM project has adopted a code of conduct for participants to all modes of communication within the project.