Summary:
The `%diff_plist` lit substitution invokes `diff` with a non-portable
`-I` option. The intended effect can be achieved by normalizing the
inputs to `diff` beforehand. Such normalization can be done with
`grep -Ev`, which is also used by other tests.
This patch mechanically applies the change (adjusted for review comments)
described in http://lists.llvm.org/pipermail/cfe-dev/2019-April/061904.html
to the cases where the output file is piped to `%diff_plist` via `cat`.
The changes were applied via a script, except that
`clang/test/Analysis/NewDelete-path-notes.cpp` and
`clang/test/Analysis/plist-macros-with-expansion.cpp` were each adjusted
for the line-continuation on the relevant `RUN` step.
Reviewers: NoQ, sfertile, xingxue, jasonliu, daltenty
Subscribers: xazax.hun, baloghadamsoftware, szepet, a.sidorin, mikhail.ramalho, Szelethus, donat.nagy, dkrupp, Charusso, jsji, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D62950
llvm-svn: 362996
As shown in PR41279, some basic blocks (such as catchswitch) cannot be
instrumented. This patch filters out these BBs in PGO instrumentation.
It also sets a profile count on the edge that fails to be instrumented, so
that the counts can still be propagated through the CFG.
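As a hedged illustration of the filtering idea (the actual predicate in the
patch may differ), a helper along these lines would skip blocks that cannot
legally host instrumentation code:

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instructions.h"

  // Hypothetical helper: a catchswitch must be the only non-PHI instruction
  // in its block, so no counter update can be inserted there.
  static bool isInstrumentableBB(const llvm::BasicBlock &BB) {
    return !llvm::isa<llvm::CatchSwitchInst>(BB.getFirstNonPHI());
  }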
Differential Revision: https://reviews.llvm.org/D62700
llvm-svn: 362995
Summary:
The `%diff_plist` lit substitution invokes `diff` with a non-portable
`-I` option. The intended effect can be achieved by normalizing the
inputs to `diff` beforehand. Such normalization can be done with
`grep -Ev`, which is also used by other tests.
This patch applies the change (adjusted for review comments) described
in http://lists.llvm.org/pipermail/cfe-dev/2019-April/061904.html to the
specific case shown in the list message. Mechanical changes to the other
affected files will follow in later patches.
Reviewers: NoQ, sfertile, xingxue, jasonliu, daltenty
Reviewed By: NoQ
Subscribers: xazax.hun, baloghadamsoftware, szepet, a.sidorin, mikhail.ramalho, Szelethus, donat.nagy, dkrupp, Charusso, jsji, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D62949
llvm-svn: 362994
There are two interesting sub-cases here: 1) switching IVs is legal, but only in pre-increment form, and 2) switching IVs is legal, and so is the post-increment form.
llvm-svn: 362993
Summary:
As suggested in the review of D62949, this patch updates the plist
output to have a newline at the end of the file. With this, the plist
output qualifies as a POSIX text file, which makes the generated file
easier to consume with various tools.
Reviewers: NoQ, sfertile, xingxue, jasonliu, daltenty
Reviewed By: NoQ, xingxue
Subscribers: jsji, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D63041
llvm-svn: 362992
Summary:
vertical-line is not a BRE special character.
POSIX.1-2017 XBD Section 9.3.2 indicates that the interpretation of `\|`
is undefined. This patch uses EREs instead.
Additionally, the pattern is further fixed so that `SIZEOF` and `WIDTH`
macros are checked.
Reviewers: jlebar, daltenty, xingxue, jasonliu, tra
Reviewed By: tra
Subscribers: jfb, jsji, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D63029
llvm-svn: 362991
Summary:
For builds with LLVM_BUILD_LLVM_DYLIB=ON and BUILD_SHARED_LIBS=OFF
this change makes all symbols in the target specific libraries hidden
by default.
A new macro called LLVM_EXTERNAL_VISIBILITY has been added to mark symbols in these
libraries public, which is mainly needed for the definitions of the
LLVMInitialize* functions.
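A rough sketch of the shape of such a macro and its use (the exact guard
conditions and attribute logic in the patch may differ):

  #if defined(__GNUC__) && !defined(_WIN32)
  #define LLVM_EXTERNAL_VISIBILITY __attribute__((visibility("default")))
  #else
  #define LLVM_EXTERNAL_VISIBILITY
  #endif

  // The target entry point stays public even when the rest of the library is
  // built with hidden default visibility.
  extern "C" LLVM_EXTERNAL_VISIBILITY void LLVMInitializeX86Target();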
This patch reduces the number of public symbols in libLLVM.so by about
25%. This should improve load times for the dynamic library and also
make ABI checker tools, like abidiff, require less memory when analyzing
libLLVM.so.
One side-effect of this change is that for builds with
LLVM_BUILD_LLVM_DYLIB=ON and LLVM_LINK_LLVM_DYLIB=ON some unittests that
access symbols that are no longer public will need to be statically linked.
Before and after public symbol counts (using gcc 8.2.1, ld.bfd 2.31.1):
nm before/libLLVM-9svn.so | grep ' [A-Zuvw] ' | wc -l
36221
nm after/libLLVM-9svn.so | grep ' [A-Zuvw] ' | wc -l
26278
Reviewers: chandlerc, beanz, mgorny, rnk, hans
Reviewed By: rnk, hans
Subscribers: Jim, hiraditya, michaelplatings, chapuni, jholewinski, arsenm, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, javed.absar, sbc100, jgravelle-google, aheejin, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, zzheng, edward-jones, mgrang, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, kristina, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D54439
llvm-svn: 362990
If the source is undef, then just don't do anything.
This matches SelectionDAG's behaviour in SelectionDAG.cpp.
Also add a test showing that we do the right thing here.
(irtranslator-memfunc-undef.ll)
Differential Revision: https://reviews.llvm.org/D63095
llvm-svn: 362989
The previous name "%lib" doesn't trigger any actual replacement. It
creates the file "./tools/lld/test/ELF/%lib.o" in the test directory.
llvm-svn: 362988
This is a followup to rL362981, in which I moved GetObjCLanguageRuntime
from Process to ObjCLanguageRuntime, renaming it to Get along the way.
llvm-svn: 362984
Summary:
This is the first of a few patches I have to improve the performance of dynamic module loading on Android.
In this first diff I'll describe the context of my main motivation and will then link to it in the other diffs to avoid repeating myself.
## Motivation
I have a few scenarios where opening a specific feature in an Android app takes around 40s when lldb is attached. The reason is that 40 modules are dynamically loaded at that point in time, and each one of them takes ~1s.
## The problem
To learn about new modules we set a breakpoint on a linker function that is called twice whenever a module is loaded: once just before it's loaded (so lldb can check which modules are currently loaded) and once right after (so lldb can check again and compute the difference).
Figuring out which modules are loaded is what takes quite some time. This is currently done by traversing the linked list of loaded shared libraries that the linker maintains in memory. Each item in the linked list requires its own `x` packet sent to the gdb server (this is Android, so the network also plays a part). In my scenario there are 400+ loaded libraries, and even though we read 0x800 bytes at a time we still make ~180 requests that end up taking 150-200ms.
We also do this twice, once before the module is loaded (state = eAdd) and once right after (state = eConsistent), which easily adds up to ~400ms per module.
## A solution
**Implement `xfer:libraries-svr4` in lldb-server:**
I noticed that the code that loads the new modules already had support for the `xfer:libraries-svr4` packet (added ~4 years ago to support the ds2 debug server), but we didn't support it in lldb-server. This single packet returns an XML list of all the modules loaded by the process. The advantage is that there's no longer any need to make ~180 requests to read the linked list. Additionally, this new request takes around 10ms.
**More efficient usage of the `xfer:libraries-svr4` packet in lldb:**
When `xfer:libraries-svr4` is available the Process class has a `LoadModules` function that requests this packet and then loads or unloads modules based on the current list of loaded modules by the process.
This is the function that is used by the DYLDRendezvous class to get the list of loaded modules before and after the module is loaded. However, this is really not needed since the LoadModules function already loaded or unloaded the modules accordingly. I changed this strategy to call LoadModules only once (after the process has loaded the module).
**Bugs**
I found a few issues in lldb while implementing this and have submitted independent patches for them.
I tried to divide this into multiple logical patches to make it easier to review and discuss.
## Tests
I wanted to put this set of diffs up before having all the tests up and running, to start getting them reviewed from a technical point of view. I'm also having some trouble getting the tests running on Linux, so I need more time to make that happen.
# This diff
The `xfer` packets all follow the same protocol: they are requested with `xfer:<object>:<read|write>:<annex>:<offset,length>` and return a response that starts with `l` or `m`, depending on whether the offset and length cover the entire data or not. Before implementing `xfer:libraries-svr4` I refactored the `xfer:auxv` handling to deal with xfer packets generically, so we can easily add new ones.
The overall structure of the function ends up being (see the sketch after this list):
* Parse the packet into its components: object, offset etc.
* Depending on the object do its own logic to generate the data.
* Return the data based on its size, the requested offset and length.
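A minimal sketch of the last step, assuming the object data has already been
generated into a string (this is not the actual lldb-server code):

  #include <cstddef>
  #include <string>

  // Slice the generated data by the requested offset/length and prefix the
  // chunk with 'l' (complete) or 'm' (more data remains past this chunk).
  std::string BuildXferResponse(const std::string &Data, std::size_t Offset,
                                std::size_t Length) {
    if (Offset >= Data.size())
      return "l";
    std::string Chunk = Data.substr(Offset, Length);
    const char Prefix = (Offset + Chunk.size() >= Data.size()) ? 'l' : 'm';
    return Prefix + Chunk;
  }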
Reviewers: clayborg, xiaobai, labath
Reviewed By: labath
Subscribers: mgorny, krytarowski, lldb-commits
Tags: #lldb
Differential Revision: https://reviews.llvm.org/D62499
llvm-svn: 362982
Summary:
In an effort to make Process more language agnostic, I removed
GetCPPLanguageRuntime from Process. I'm following up now with an equivalent
change for ObjC.
Differential Revision: https://reviews.llvm.org/D63052
llvm-svn: 362981
Flesh out a collection of tests for switching to a dead IV within LFTR, both for the current miscompile, and for some cases which we should be able to handle via simple reasoning.
llvm-svn: 362976
This was discussed as part of D62880. The basic thought is that computing the BE taken count after widening should produce (on average) a backedge taken count as good as the one before widening. Since there's only one test in the suite which is impacted by this change, and its codegen is essentially equivalent, that seems to be a reasonable assertion. This change was separated from r362971 so that if it turns out to be problematic, the triggering piece is obvious and easily revertable.
For the nestedIV example from elim-extend.ll, we end up with the following BE counts:
BEFORE: (-2 + (-1 * %innercount) + %limit)
AFTER: (-1 + (sext i32 (-1 + %limit) to i64) + (-1 * (sext i32 %innercount to i64))<nsw>)
Note that the BEFORE count has i32 type and the AFTER count has i64; truncating the i64 produces the i32.
llvm-svn: 362975
This was found during HTM cleanup.
Adding a test for builtin_ttest would expose the following issue:
*** Bad machine code: Illegal physical register for instruction ***
- function: test10
- basic block: %bb.0 entry (0xf0e57497b58)
- instruction: %5:crrc0 = TABORTWCI 0, $zero, 0
- operand 2: $zero
$zero is not a GPRC register.
LLVM ERROR: Found 1 machine code errors.
Differential Revision: https://reviews.llvm.org/D63079
llvm-svn: 362974
Summary:
When llvm-objcopy sorts sections during finalization, it only sorts based on the offset, which can cause the group section to come after the sections it contains. This causes link failures when using gold to link objects created by llvm-objcopy.
Fix this for now by copying GNU objcopy's behavior of placing SHT_GROUP sections first. In the future, we may want to remove this sorting entirely to more closely preserve the input file layout.
This fixes https://bugs.llvm.org/show_bug.cgi?id=42052.
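A hedged sketch of the ordering rule, using simplified types rather than the
actual llvm-objcopy data structures:

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct Sec { uint32_t Type; uint64_t Offset; };
  constexpr uint32_t SHT_GROUP = 17;

  // Place SHT_GROUP sections first, then fall back to the layout offset, so
  // a group never ends up after the sections it contains.
  void sortSections(std::vector<Sec> &Sections) {
    std::stable_sort(Sections.begin(), Sections.end(),
                     [](const Sec &A, const Sec &B) {
                       if ((A.Type == SHT_GROUP) != (B.Type == SHT_GROUP))
                         return A.Type == SHT_GROUP;
                       return A.Offset < B.Offset;
                     });
  }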
Reviewers: jakehehrlich, jhenderson, MaskRay, espindola, alexshap
Reviewed By: MaskRay
Subscribers: phuongtrang148993, emaste, arichardson, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62620
llvm-svn: 362973
This change does the plumbing to wire an ExitingBB parameter through the LFTR implementation, and reorganizes the code to work in terms of a set of individual loop exits. Most of it is fairly obvious, but there's one key complexity which makes it worthy of consideration. The actual multi-exit LFTR patch is in D62625 for context.
Specifically, it turns out the existing code uses the backedge taken count from before an IV is widened. Oddly, we can end up with a different (more expensive, but semantically equivalent) BE count for the loop when re-querying after widening. For the nestedIV example from elim-extend, we end up with the following BE counts:
BEFORE: (-2 + (-1 * %innercount) + %limit)
AFTER: (-1 + (sext i32 (-1 + %limit) to i64) + (-1 * (sext i32 %innercount to i64))<nsw>)
This is the only test in tree which seems sensitive to this difference. The actual result of using the wider BETC on this example is that we actually produce slightly better code. :)
In review, we decided to accept that test change. This patch is structured to preserve the old behavior, but a separate change will immediately follow with the behavior change. (I wanted it separate for problem attribution purposes.)
Differential Revision: https://reviews.llvm.org/D62880
llvm-svn: 362971
These "dynamic_runtime_thunk" object files exist to create a weak alias
from 'foo' to 'foo_dll' for all weak sanitizer runtime symbols. The weak
aliases are implemented as /alternatename linker options in the
.drectve section, so they are not actually in the symbol table. In
order to force the Visual C++ linker to load the object, even with
-wholearchive:, we have to provide at least one external symbol. Once we
do that, it will read the .drectve sections and see the weak aliases.
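For illustration only (the symbol names below are placeholders, not the
runtime's actual symbols), a directive of this shape is what lands in the
.drectve section:

  extern "C" void __sanitizer_hook_dll();  // hypothetical strong definition
  #if defined(_MSC_VER)
  // link.exe resolves __sanitizer_hook to __sanitizer_hook_dll only if no
  // other definition of __sanitizer_hook is found.
  #pragma comment(linker, "/alternatename:__sanitizer_hook=__sanitizer_hook_dll")
  #endif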
Fixes PR42074
llvm-svn: 362970
The ELF gABI requires the tag values of DT_REL, DT_RELA and DT_JMPREL to be
treated as virtual addresses. They were treated as offsets. Fixes PR41832.
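A minimal sketch of the mapping involved, assuming the PT_LOAD segment
containing the table is already known (not the actual code in the patch):

  #include <cstdint>

  // Map a dynamic-table virtual address (e.g. the DT_RELA value) back to a
  // file offset through the containing PT_LOAD segment.
  uint64_t vaddrToFileOffset(uint64_t VA, uint64_t SegVAddr,
                             uint64_t SegOffset) {
    return VA - SegVAddr + SegOffset;
  }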
Differential Revision: https://reviews.llvm.org/D62972
llvm-svn: 362969
Summary:
Use a set in getReqFeatures() in RISCVCompressInstEmitter instead of a map
because the index we save is not needed.
This also fixes bug 41666.
Reviewers: llvm-commits, apazos, asb, nickdesaulniers
Reviewed By: asb
Subscribers: Jim, nickdesaulniers, rbar, johnrusso, simoncook, niosHD, kito-cheng, shiva0217, jrtc27, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61412
llvm-svn: 362968
The recently added cooperlake CPU has made our already ugly switch statement even worse. There's a CPU exclusion list around the bf16 feature in the cooperlake block, and I worry that we'll have to keep adding new CPUs to it until bf16 reaches a client-space CPU. We have several other exclusion lists in other parts of the switch because skylakeserver, cascadelake, and cooperlake don't have sgx, and another because cannonlake doesn't have clwb but has all the other features from skx.
This removes all these special ifs at the cost of some feature duplication and a goto. I've copied all of the skx features into either cannonlake or icelakeclient (for clwb), and pulled skylakeserver, cascadelake, and cooperlake out of the main inheritance chain into their own chain. At the end of skylakeserver we merge back into the main chain at skylakeclient, but below sgx. I think this is at least easier to follow.
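A rough sketch of the resulting control flow; the CPU set and feature names
here are illustrative only, not the exact contents of the patch:

  #include <string>
  #include <vector>

  enum class CPU { CooperLake, CascadeLake, SkylakeServer, SkylakeClient };

  void setFeatures(CPU C, std::vector<std::string> &Features) {
    switch (C) {
    case CPU::CooperLake:
      Features.push_back("+avx512bf16");
      [[fallthrough]];
    case CPU::CascadeLake:
      Features.push_back("+avx512vnni");
      [[fallthrough]];
    case CPU::SkylakeServer:
      Features.push_back("+avx512f");
      goto SkylakeCommon; // rejoin the client chain below sgx
    case CPU::SkylakeClient:
      Features.push_back("+sgx");
    SkylakeCommon:
      Features.push_back("+avx2");
      break;
    }
  }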
Differential Revision: https://reviews.llvm.org/D63018
llvm-svn: 362965
Bottleneck Analysis is one of the many views available in llvm-mca. Therefore,
it should be enabled when the -all-views flag is passed to the tool.
llvm-svn: 362964
This behavior was added in r130928 for both FastISel and SD, and then
disabled in r131156 for FastISel.
This re-enables it for FastISel with the corresponding fix.
This is triggered only when FastISel can't lower the arguments and falls
back to SelectionDAG for them.
FastISel contains a map of "register fixups" where at the end of the
selection phase it replaces all uses of a register with another
register that FastISel sometimes pre-assigned. Code at the end of
SelectionDAGISel::runOnMachineFunction is doing the replacement at the
very end of the function, while other pieces that come in before that
look through the MachineFunction and assume everything is done. In this
case, the real issue is that the code emitting COPY instructions for the
liveins (physreg to vreg) (EmitLiveInCopies) is checking if the vreg
assigned to the physreg is used, and if it's not, it will skip the COPY.
If a register wasn't replaced with its assigned fixup yet, the copy will
be skipped and we'll end up with uses of undefined registers.
This fix moves the replacement of registers before the emission of
copies for the live-ins.
The initial motivation for this fix is to enable tail calls for
swiftself functions, which were blocked because we couldn't prove that
the swiftself argument (which is callee-save) comes from a function
argument (live-in), because there was an extra copy (vreg to vreg).
A few tests are affected by this:
* llvm/test/CodeGen/AArch64/swifterror.ll: we used to spill x21
(callee-save) but never reload it because it's attached to the return.
We now don't even spill it anymore.
* llvm/test/CodeGen/*/swiftself.ll: we tail-call now.
* llvm/test/CodeGen/AMDGPU/mubuf-legalize-operands.ll: I believe this
test was not really testing the right thing, but it worked because the
same registers were re-used.
* llvm/test/CodeGen/ARM/cmpxchg-O0.ll: regalloc changes
* llvm/test/CodeGen/ARM/swifterror.ll: get rid of a copy
* llvm/test/CodeGen/Mips/*: get rid of spills and copies
* llvm/test/CodeGen/SystemZ/swift-return.ll: smaller stack
* llvm/test/CodeGen/X86/atomic-unordered.ll: smaller stack
* llvm/test/CodeGen/X86/swifterror.ll: same as AArch64
* llvm/test/DebugInfo/X86/dbg-declare-arg.ll: stack size changed
Differential Revision: https://reviews.llvm.org/D62361
llvm-svn: 362963
Summary:
This CL adds the structures dealing with thread specific data for the
allocator. This includes the thread specific data structure itself and
two registries for said structures: an exclusive one, where each thread
will have its own TSD struct, and a shared one, where a pool of TSD
structs will be shared by all threads, with dynamic reassignment at
runtime based on contention.
This departs from the current Scudo implementation: we intend to make
the Registry a template parameter of the allocator (as opposed to a
single global entity), allowing various allocators to coexist with
different TSD registry models. As a result, TSD registry and Allocator
are tightly coupled.
This also corrects a couple of things in other files that I noticed
while adding this.
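A hedged sketch of the "registry as a template parameter" idea; the names and
interfaces below are illustrative, not the actual Scudo ones:

  #include <cstddef>
  #include <cstdlib>
  #include <mutex>

  struct TSD {
    std::mutex M;
    // The per-thread allocation cache would live here.
  };

  // Exclusive model: each thread owns its TSD. A shared model with a pool and
  // contention-based reassignment would expose the same two functions.
  struct ExclusiveTSDRegistry {
    TSD *getTSDAndLock() {
      thread_local TSD Instance;
      Instance.M.lock();
      return &Instance;
    }
    void unlock(TSD *T) { T->M.unlock(); }
  };

  template <class TSDRegistryT> struct Allocator {
    TSDRegistryT Registry; // the registry model is chosen at compile time
    void *allocate(std::size_t Size) {
      TSD *T = Registry.getTSDAndLock();
      void *P = std::malloc(Size); // placeholder for the cache-backed path
      Registry.unlock(T);
      return P;
    }
  };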
Reviewers: eugenis, vitalybuka, morehouse, hctim
Reviewed By: morehouse
Subscribers: srhines, mgorny, delcypher, jfb, #sanitizers, llvm-commits
Tags: #llvm, #sanitizers
Differential Revision: https://reviews.llvm.org/D62258
llvm-svn: 362962
Reflow the text for the table to make the table legible. This is purely
cosmetic, but makes understanding the contents of the table easier. NFCI.
llvm-svn: 362961
These caused a build failure because I managed not to notice they
depended on a later unpushed commit in my current stack. Sorry about
that.
llvm-svn: 362956
This should have been part of r362953, but I had a finger-trouble
incident and committed the old rather than new version of the patch.
Sorry.
llvm-svn: 362955
This adds support for the new family of conditional selection /
increment / negation instructions; the low-overhead branch
instructions (e.g. BF, WLS, DLS); the CLRM instruction to zero a whole
list of registers at once; the new VMRS/VMSR and VLDR/VSTR
instructions to get data in and out of 8.1-M system registers,
particularly including the new VPR register used by MVE vector
predication.
To support this, we also add a register name 'zr' (used by the CSEL
family to force one of the inputs to the constant 0), and operand
types for lists of registers that are also allowed to include APSR or
VPR (used by CLRM). The VLDR/VSTR instructions also need some new
addressing modes.
The low-overhead branch instructions exist in their own separate
architecture extension, which we treat as enabled by default, but you
can say -mattr=-lob or equivalent to turn it off.
Reviewers: dmgreen, samparker, SjoerdMeijer, t.p.northover
Reviewed By: samparker
Subscribers: miyuki, javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62667
llvm-svn: 362953
Summary: Dependence Analysis performs static checks to confirm validity
of delinearization. These checks often fail for 64-bit targets due to
type conversions and integer wrapping that prevent simplification of the
SCEV expressions. These checks would also fail at compile time if the
lower bounds of the loops are unknown at compile time.
Author: bmahjour
Reviewer: Meinersbur, jdoerfert, kbarton, dmgreen, fhahn
Reviewed By: Meinersbur, jdoerfert, dmgreen
Subscribers: fhahn, hiraditya, javed.absar, llvm-commits, Whitney,
etiotto
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D62610
llvm-svn: 362952
This commit reapplies r359426 (which was reverted in r360301 due to
performance problems) and rolls in D61940 to address the performance problem.
I've combined the two to avoid creating a window of slow performance, and to
ease reverting if more problems crop up.
The summary of D61940: This patch removes the "ChangingRegs" facility in
DbgEntityHistoryCalculator, as its overapproximate nature can produce incorrect
variable locations. An unchanging register doesn't mean a variable doesn't
change its location.
The patch kills off everything that calculates the ChangingRegs vector.
Previously ChangingRegs spotted epilogues and marked registers as unchanging if
they weren't modified outside the epilogue, increasing the chance that we can
emit a single-location variable record. Without this feature,
debug-loc-offset.mir and pr19307.mir become temporarily XFAIL. They'll be
re-enabled by D62314, using the FrameDestroy flag to identify epilogues, I've
split this into two steps as FrameDestroy isn't necessarily supported by all
backends.
The logic for terminating variable locations at the end of a basic block now
becomes much more enjoyably simple: we just terminate them all.
Other test changes: inlined-argument.ll becomes XFAIL, but for a longer term.
The current algorithm for detecting that a variable has a single-location
doesn't work in this scenario (inlined function in multiple blocks), only other
bugs were making this test work. fission-ranges.ll gets slightly refreshed too,
as the location of "p" is now correctly determined to be a single location.
Differential Revision: https://reviews.llvm.org/D61940
llvm-svn: 362951