llvm-project/bolt/lib/Passes/CMakeLists.txt

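# Build rules for LLVMBOLTPasses, the static library that collects BOLT's
# optimization and analysis passes. Each .cpp below implements one pass or a
# shared analysis used by several passes. (Descriptive comment added here;
# not present in the upstream file.)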

add_llvm_library(LLVMBOLTPasses
  ADRRelaxationPass.cpp
  Aligner.cpp
  AllocCombiner.cpp
  AsmDump.cpp
  BinaryPasses.cpp
  BinaryFunctionCallGraph.cpp
  CacheMetrics.cpp
  CallGraph.cpp
  CallGraphWalker.cpp
  DataflowAnalysis.cpp
  DataflowInfoManager.cpp
  ExtTSPReorderAlgorithm.cpp
  FrameAnalysis.cpp
  FrameOptimizer.cpp
  HFSort.cpp
  HFSortPlus.cpp
  IdenticalCodeFolding.cpp
  IndirectCallPromotion.cpp
  Inliner.cpp
  Instrumentation.cpp
  JTFootprintReduction.cpp
  LongJmp.cpp
  LoopInversionPass.cpp
  LivenessAnalysis.cpp
  MCF.cpp
  PatchEntries.cpp
  PettisAndHansen.cpp
  PLTCall.cpp
  RegAnalysis.cpp
  RegReAssign.cpp
  ReorderAlgorithm.cpp
  ReorderFunctions.cpp
  ReorderData.cpp
  ShrinkWrapping.cpp
  SplitFunctions.cpp
  StackAllocationAnalysis.cpp
  StackAvailableExpressions.cpp
  StackPointerTracking.cpp
  StackReachingUses.cpp
  StokeInfo.cpp
  TailDuplication.cpp
  ThreeWayBranch.cpp
  ValidateInternalCalls.cpp
  VeneerElimination.cpp
  RetpolineInsertion.cpp

  LINK_LIBS
  ${LLVM_PTHREAD_LIB}

  LINK_COMPONENTS
  AsmPrinter
  BOLTCore
  BOLTUtils
  MC
  Support
  )
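
# A minimal usage sketch, kept as a comment since it is not part of the
# upstream file: how an out-of-tree tool might link against this library.
# The target and source names (bolt-pass-demo, Demo.cpp) are hypothetical;
# add_llvm_executable comes from LLVM's AddLLVM.cmake module.
#
#   add_llvm_executable(bolt-pass-demo Demo.cpp)
#   target_link_libraries(bolt-pass-demo PRIVATE LLVMBOLTPasses)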