This reverts commit 65df29f9318ac13a633c0ce13b2b0bccf06e79ca.
As suggested by @rsmith here: https://reviews.llvm.org/rL345839
I'm reverting this and solving the initial problem in a different way.
llvm-svn: 348595
Summary:
With a new switch we can print to stderr whenever a new TU is being loaded
during CTU analysis. This is very important for higher-level scripts (like CodeChecker)
to be able to parse this output, so they can e.g. create a zip file containing all the
related TU files in case of a Clang crash.
Reviewers: xazax.hun, Szelethus, a_sidorin, george.karpenkov
Subscribers: whisperity, baloghadamsoftware, szepet, rnkovacs, a.sidorin, mikhail.ramalho, donat.nagy, dkrupp,
Differential Revision: https://reviews.llvm.org/D55135
llvm-svn: 348594
This patch introduces a new intrinsic `@llvm.experimental.widenable_condition`
that allows explicit representation for guards. It is an alternative to using
`@llvm.experimental.guard` intrinsic that does not contain implicit control flow.
We keep finding places where `@llvm.experimental.guard` is not supported or
treated too conservatively, and there are two reasons for that:
- `@llvm.experimental.guard` has a memory write side effect to model implicit control flow,
and this sometimes confuses passes and analyses that work with memory;
- Not all passes and analyses are aware of the semantics of guards. These passes treat them
as a regular throwing call and have no idea that the guard's condition may be used to prove
something. One well-known place that has caused us trouble in the past is the explicit loop
iteration count calculation in SCEV. Another example is the new loop unswitching pass, which
is not aware of guards. Whenever a new pass appears, we potentially have this problem there.
Rather than going and fixing all these places (and committing to keep track of them and add
support in the future), it seems more reasonable to leverage the existing optimizer's logic
as much as possible.
The only significant difference between guards and regular explicit branches is that a guard's condition
can be widened. It means that a guard contains (explicitly or implicitly) a `deopt` block successor,
and it is always legal to go there no matter what the guard condition is. The other successor is
a guarded block, and it is only legal to go there if the condition is true.
This patch introduces a new explicit form of guards alternative to `@llvm.experimental.guard`
intrinsic. Now a widenable guard can be represented in the CFG explicitly like this:
%widenable_condition = call i1 @llvm.experimental.widenable.condition()
%new_condition = and i1 %cond, %widenable_condition
br i1 %new_condition, label %guarded, label %deopt
guarded:
; Guarded instructions
deopt:
call type @llvm.experimental.deoptimize(<args...>) [ "deopt"(<deopt_args...>) ]
The new intrinsic `@llvm.experimental.widenable.condition` has the semantics of
`undef`, but the intrinsic prevents the optimizer from folding it early. This form
should exploit all optimization boons provided to the `br` instruction, and it can
still be widened by replacing the result of `@llvm.experimental.widenable.condition()`
with its `and` with an arbitrary boolean value (as long as the branch that is taken when
it is `false` has a deopt and no side effects).
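For illustration, here is a minimal C++ sketch (not part of this patch; the helper name
and setup are assumed) of how a transform holding the guard's branch could perform
widening; AND-ing an extra condition into the branch condition is equivalent to AND-ing
it into the widenable condition itself:
```
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Hypothetical helper: widen a guard expressed in the new form by AND-ing an
// extra condition into the branch that selects between %guarded and %deopt.
// 'GuardBr' is the 'br i1 %new_condition, ...' instruction from the example
// above; 'ExtraCond' is any i1 value available at that point.
static void widenGuard(BranchInst *GuardBr, Value *ExtraCond) {
  IRBuilder<> Builder(GuardBr);
  Value *Widened =
      Builder.CreateAnd(GuardBr->getCondition(), ExtraCond, "widened.cond");
  GuardBr->setCondition(Widened);
}
```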
For more motivation, please check llvm-dev discussion "[llvm-dev] Giving up using
implicit control flow in guards".
This patch introduces this new intrinsic with respective LangRef changes and a pass
that converts old-style guards (expressed as intrinsics) into the new form.
The naming discussion is still ongoing. Merging this to unblock further items; we can
later change the name of this intrinsic.
Reviewed By: reames, fedor.sergeev, sanjoy
Differential Revision: https://reviews.llvm.org/D51207
llvm-svn: 348593
Summary:
This patch adds the scaffolding necessary for lldb to recognise symbol
files generated by breakpad. These (textual) files contain just enough
information to be able to produce a backtrace from a crash
dump. This information includes:
- UUID, architecture and name of the module
- line tables
- list of symbols
- unwind information
A minimal breakpad file could look like this:
MODULE Linux x86_64 0000000024B5D199F0F766FFFFFF5DC30 a.out
INFO CODE_ID 00000000B52499D1F0F766FFFFFF5DC3
FILE 0 /tmp/a.c
FUNC 1010 10 0 _start
1010 4 4 0
1014 5 5 0
1019 5 6 0
101e 2 7 0
PUBLIC 1010 0 _start
STACK CFI INIT 1010 10 .cfa: $rsp 8 + .ra: .cfa -8 + ^
STACK CFI 1011 $rbp: .cfa -16 + ^ .cfa: $rsp 16 +
STACK CFI 1014 .cfa: $rbp 16 +
Even though this data would normally be considered "symbol" information,
in the current lldb infrastructure it is assumed that every SymbolFile object
is backed by an ObjectFile instance. So, in order to better interoperate
with the rest of the code (particularly symbol vendors), this patch represents
the breakpad file as an ObjectFile as well.
In this patch I just parse the breakpad header, which is enough to
populate the UUID and architecture fields of the ObjectFile interface.
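For illustration, a minimal standalone sketch (hypothetical, not the actual
ObjectFileBreakpad code) of pulling the fields out of the MODULE record shown above:
```
#include <sstream>
#include <string>

// Hypothetical parser for the MODULE record of a breakpad symbol file:
//   MODULE <os> <arch> <id> <name>
// The id encodes the UUID; the arch string maps to lldb's ArchSpec.
struct ModuleRecord {
  std::string OS, Arch, ID, Name;
};

static bool parseModuleRecord(const std::string &Line, ModuleRecord &Out) {
  std::istringstream Stream(Line);
  std::string Keyword;
  if (!(Stream >> Keyword) || Keyword != "MODULE")
    return false;
  return static_cast<bool>(Stream >> Out.OS >> Out.Arch >> Out.ID >> Out.Name);
}
```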
The rough plan for followup patches is to expose the individual parts of
the breakpad file as ObjectFile "sections", which can then be used by
other parts of the codebase (SymbolFileBreakpad ?) to vend the necessary
information.
Reviewers: clayborg, zturner, lemo, amccarth
Subscribers: mgorny, fedor.sergeev, markmentovai, lldb-commits
Differential Revision: https://reviews.llvm.org/D55214
llvm-svn: 348592
When we had dynamic call frames (i.e. sp adjustment around each call) we
were including that adjustment in offsets calculated relative to r6, even
though only sp changes. This led to incorrect stack slot
accesses.
llvm-svn: 348591
Summary:
...that fires when running completion inside an argument of
UnresolvedMemberExpr (see the added test).
The assertion that fires is from Sema::TryObjectArgumentInitialization:
assert(FromClassification.isLValue());
This happens because Sema::AddFunctionCandidates does not account for
object types which are pointers. It ends up classifying them incorrectly.
Outside of code completion, the function is only used to run overload
resolution for operators. In those cases the object type being passed is
always a non-pointer type, so it's not surprising that the function did not
expect a pointer in the object argument.
However, code completion reuses the same function and calls it with the
object argument coming from UnresolvedMemberExpr, which can be a pointer
if the member expr is an arrow ('->') access.
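A hypothetical reduced example of the problematic shape (the actual added test may
differ): the argument is type-dependent, so the call stays an UnresolvedMemberExpr,
and the object argument is a pointer because of the '->' access:
```
// Hypothetical reproducer sketch; completion is run inside the argument list.
struct S {
  void set(int);
  void set(double);
};

template <typename T> void test(S *s, T value) {
  s->set(value); // 's->set' is an UnresolvedMemberExpr with a pointer object
}
```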
Extending AddFunctionCandidates to allow pointer object types does not
seem too crazy since all the functions down the call chain can properly
handle pointer object types if we properly classify the object argument
as an l-value, i.e. the classification of the implicitly dereferenced
pointer.
Reviewers: kadircet
Reviewed By: kadircet
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D55331
llvm-svn: 348590
Summary:
We plan to introduce additional CTU-related lit tests. Since lit may run the
tests in parallel, it is not safe to use the same directory (%T) for these
tests. However, it is safe to use test-case-specific directories (%t).
Reviewers: xazax.hun, a_sidorin
Subscribers: rnkovacs, dkrupp, Szelethus, gamesh411, cfe-commits
Differential Revision: https://reviews.llvm.org/D55129
llvm-svn: 348587
Adds fatal errors for any target that does not support the Tiny or Kernel
code models, by rejigging the getEffectiveCodeModel calls.
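A rough sketch of the shape of such a check (hypothetical; each target's actual helper
and wording differ):
```
#include "llvm/ADT/Optional.h"
#include "llvm/Support/CodeGen.h"
#include "llvm/Support/ErrorHandling.h"
using namespace llvm;

// Hypothetical per-target helper: reject code models the target does not
// implement instead of silently accepting them.
static CodeModel::Model
getEffectiveCodeModelFor(Optional<CodeModel::Model> CM,
                         CodeModel::Model Default) {
  if (!CM)
    return Default;
  if (*CM == CodeModel::Tiny)
    report_fatal_error("Target does not support the tiny CodeModel");
  if (*CM == CodeModel::Kernel)
    report_fatal_error("Target does not support the kernel CodeModel");
  return *CM;
}
```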
Differential Revision: https://reviews.llvm.org/D50141
llvm-svn: 348585
Summary: This line is longer than 80 characters.
Subscribers: llvm-commits, jakehehrlich
Differential Revision: https://reviews.llvm.org/D55419
llvm-svn: 348580
Summary: This line is longer than 80 characters.
Subscribers: llvm-commits, jakehehrlich
Differential Revision: https://reviews.llvm.org/D55419
llvm-svn: 348578
It was failing as below. Adding a triple seems to help.
--
: 'RUN: at line 2'; /work/llvm.combined/build.release/bin/llvm-mca -march=aarch64 -mcpu=exynos-m1 -resource-pressure=false < /work/llvm.combined/llvm/test/tools/llvm-mca/AArch64/Exynos/direct-branch.s | /work/llvm.combined/build.release/bin/FileCheck /work/llvm.combined/llvm/test/tools/llvm-mca/AArch64/Exynos/direct-branch.s -check-prefixes=ALL,M1
: 'RUN: at line 3'; /work/llvm.combined/build.release/bin/llvm-mca -march=aarch64 -mcpu=exynos-m3 -resource-pressure=false < /work/llvm.combined/llvm/test/tools/llvm-mca/AArch64/Exynos/direct-branch.s | /work/llvm.combined/build.release/bin/FileCheck /work/llvm.combined/llvm/test/tools/llvm-mca/AArch64/Exynos/direct-branch.s -check-prefixes=ALL,M3
--
Exit Code: 1
Command Output (stderr):
--
/work/llvm.combined/llvm/test/tools/llvm-mca/AArch64/Exynos/direct-branch.s:36:12: error: M1-NEXT: expected string not found in input
^
<stdin>:21:2: note: scanning from here
1 0 0.25 b Ltmp0
^
--
llvm-svn: 348577
`has_key` has been removed in Python 3. The `in` comparison operator can be used
instead.
Differential Revision: https://reviews.llvm.org/D55310
llvm-svn: 348576
Summary:
Allow clients to suppress setup of default RPATHs in designated library targets.
This is used in LLDB when emitting liblldb as a framework bundle, which itself
doesn't load further RPATH-dependent libraries.
This follows the approach in add_llvm_executable().
Reviewers: aprantl, JDevlieghere, davide, friss
Reviewed By: aprantl
Subscribers: mgorny, lldb-commits, llvm-commits, #lldb
Differential Revision: https://reviews.llvm.org/D55316
llvm-svn: 348573
Summary:
This patch adds VSX register support for inline assembly. After this
patch, we can use VSX registers in an inline assembly clobber list without error.
Reviewed By: jsji, nemanjai
Differential Revision: https://reviews.llvm.org/D55192
llvm-svn: 348572
In some cases a different function alignment might be used to save space,
e.g. Thumb mode with -Oz will try to use 2-byte function alignment. A similar
patch that fixed this in other areas exists here:
https://reviews.llvm.org/D46110
This was approved previously as https://reviews.llvm.org/D55115 (r348215),
but when committed it caused failures on the sanitizer buildbots when building
llvm with a clang that contained this patch. This is now fixed: I've added a
check for whether getting the parent module returns null, and if it does, the
alignment is set to 0.
Differential Revision: https://reviews.llvm.org/D55115
llvm-svn: 348571
Thunks that return member pointers via sret are broken due to using temporary
storage for the return value on the stack and then passing that pointer to a
tail call, violating the rule that a tail call can't access allocas in the
caller (see bug).
Since r90526, we put aggregate return values directly in the sret slot, but
this doesn't apply to member pointers which are considered scalar.
Unless I'm missing something subtle, we should be able to always use the sret
slot directly for indirect return values.
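A hypothetical reduced example of the situation (assumed shape, not the bug's actual
test case): a this-adjusting thunk whose return type is a member pointer, which some
ABIs return indirectly via sret:
```
// C::get needs a this-adjusting thunk because A is a non-primary base; the
// thunk returns a member function pointer, which may be returned indirectly
// (sret) depending on the target ABI.
struct A {
  virtual auto get() -> void (A::*)();
};
struct B {
  virtual void unrelated();
};
struct C : B, A {
  auto get() -> void (A::*)() override;
};
```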
Differential revision: https://reviews.llvm.org/D55371
llvm-svn: 348569
Summary:
This change builds upon D54989, which removes memory allocation from the
critical path of the profiling implementation. This also changes the API
of the profile collection service to take ownership of the per-thread memory
and associated data structures.
The consolidation of the memory allocation allows us to do two things:
- Limit the amount of memory used by the profiling implementation, by
associating preallocated buffers instead of allocating memory
on demand.
- Consolidate the memory initialisation and cleanup by relying on the
buffer queue's reference counting implementation.
We also found a number of places which display some problematic
behaviour, including:
- An off-by-factor bug in the allocator implementation.
- Unrolling semantics in cases of "memory exhausted" situations, when
managing the state of the function call trie.
We also add a few test cases which verify our understanding of the
behaviour of the system, with important edge-cases (especially for
memory-exhausted cases) in the segmented array and profile collector
unit tests.
Depends on D54989.
Reviewers: mboerger
Subscribers: dschuff, mgorny, dmgreen, jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D55249
llvm-svn: 348568
The current algorithm that collects live/dead/inloop blocks relies on some invariants
related to RPO and PO traversals. In particular, the important fact it requires is that
the loop's only latch is the first block in the PO traversal. It also relies on the fact
that during RPO we visit all predecessors of a block before we visit the block itself
(backedges ignored).
If a loop has an irreducible non-loop cycle inside, both of these assumptions may break.
This patch adds detection for this situation and prohibits the terminator folding
for loops with irreducible CFG.
We can in theory support this later; it would require some algorithmic changes.
Besides, an irreducible CFG is not a frequent situation, so for now we simply don't bother.
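For reference, a hedged sketch of such a bail-out using LLVM's existing irreducibility
helper (the actual check in the pass may be phrased differently):
```
#include "llvm/ADT/PostOrderIterator.h"
#include "llvm/Analysis/CFG.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/CFG.h"
using namespace llvm;

// Hypothetical guard: skip terminator folding when the function contains an
// irreducible cycle, so the RPO/PO-based invariants above are guaranteed.
static bool mustSkipTerminatorFolding(const Function &F, const LoopInfo &LI) {
  ReversePostOrderTraversal<const Function *> RPOT(&F);
  return containsIrreducibleCFG<const BasicBlock *>(RPOT, LI);
}
```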
Thanks @uabelho for finding this!
Differential Revision: https://reviews.llvm.org/D55357
Reviewed By: skatkov
llvm-svn: 348567
Fix an assert about using an undefined physical register in the machine instruction
verifier pass. The reason is that the undef register flag is missing after the
transformation done by the If Conversion pass.
```
Bad machine code: Using an undefined physical register
- function: func_65
- basic block: %bb.0 entry (0x10024740738)
- instruction: BCLR killed $cr5lt, implicit $lr8, implicit $rm, implicit undef $x3
- operand 0: killed $cr5lt
LLVM ERROR: Found 1 machine code errors.
```
There are also other existing test cases with the same issue, so I add the
-verify-machineinstrs option to enable verification.
Differential Revision: https://reviews.llvm.org/D55408
llvm-svn: 348566
This reverts commit r348455, with some additional changes:
- Work around a deficiency of gcc-4.8 by duplicating the implementation of
`AppendEmplace` in `Append`, but instead of using brace-init for the
copy construction, use a placement new explicitly calling the copy
constructor.
llvm-svn: 348563
When CodeExtractor outlines values which are used by the original
function, it must store those values in some in-out parameter. This
store instruction must not be inserted in between a PHI and an EH pad
instruction, as that results in invalid IR.
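An illustrative sketch of the constraint (not the literal fix): in a block that begins
with PHIs and possibly an EH pad, new non-PHI instructions such as the output store have
to be placed after both:
```
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Hypothetical helper: the first legal position for the output store in a
// block, past any PHI nodes and past an EH pad (e.g. a landingpad) if present.
static Instruction *firstLegalInsertionPoint(BasicBlock *BB) {
  return &*BB->getFirstInsertionPt();
}
```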
This fixes the following verifier failure seen while outlining within
ObjC methods with live exit values:
The unwind destination does not have an exception handling instruction!
%call35 = invoke i8* bitcast (i8* (i8*, i8*, ...)* @objc_msgSend to i8* (i8*, i8*)*)(i8* %exn.adjusted, i8* %1)
to label %invoke.cont34 unwind label %lpad33, !dbg !4183
The unwind destination does not have an exception handling instruction!
invoke void @objc_exception_throw(i8* %call35) #12
to label %invoke.cont36 unwind label %lpad33, !dbg !4184
LandingPadInst not the first non-PHI instruction in the block.
%3 = landingpad { i8*, i32 }
catch i8* null, !dbg !1411
rdar://46540815
llvm-svn: 348562
And mark it as a public header so it will get copied
into the LLDB.framework. A handful of "api" tests were
failing because they couldn't find this file.
llvm-svn: 348561
in one packet from 1k bytes to 16k bytes. When sending a large file to an
iOS device directly connected by USB cable, to lldb-server running in
platform mode, this speeds up the file transfer by 77%. Sending the file
in 32k blocks speeds it up by 80% versus 1k blocks; we start with 16k to
make sure we don't have any problems with android testing.
We may not have the same perf characteristics over ethernet, but with
USB it's faster to send fewer larger packets than many small packets.
llvm-svn: 348557
Windows provides a Yield function-like macro that allows a thread to
yield the CPU. However, this conflicts with `Yield` in swift. Undefine
`Yield` to allow building lldb with swift support.
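The workaround amounts to something like the following (illustrative sketch; the exact
header and placement are assumptions):
```
// Windows' headers define Yield() as a function-like macro kept for 16-bit
// compatibility; undefine it so identifiers named 'Yield' (e.g. in swift
// headers) are not mangled by the preprocessor.
#if defined(_WIN32)
#include <windows.h>
#if defined(Yield)
#undef Yield
#endif
#endif
```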
llvm-svn: 348556
If this is not a valid way to assign an SDLoc, then we get this
wrong all over SDAG.
I don't know enough about the SDAG to explain this. IIUC, theoretically,
debug info is not supposed to affect codegen. But here it has clearly
affected 3 different targets, and the x86 change is an actual improvement.
llvm-svn: 348552
Change the ELF YAML implementation of TextAPI so that NeededLibs uses a flow
sequence vector correctly, instead of overriding the YAML implementation
for std::vector<std::string>.
This should fix the test failure with the LLVM_LINK_LLVM_DYLIB build mentioned in D55381.
Still passes existing tests that cover this.
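One way to express this (a hypothetical sketch using LLVM's YAML traits; the actual
patch may differ in detail) is to give the field its own wrapper type with flow-style
sequence traits, leaving std::vector<std::string> alone:
```
#include "llvm/Support/YAMLTraits.h"
#include <string>
#include <vector>

// Hypothetical wrapper so that only this field is emitted as a flow sequence
// ("NeededLibs: [ a.so, b.so ]") without touching the generic handling of
// std::vector<std::string>.
struct NeededLibsSeq {
  std::vector<std::string> Libs;
};

namespace llvm {
namespace yaml {
template <> struct SequenceTraits<NeededLibsSeq> {
  static size_t size(IO &Io, NeededLibsSeq &Seq) { return Seq.Libs.size(); }
  static std::string &element(IO &Io, NeededLibsSeq &Seq, size_t Index) {
    if (Index >= Seq.Libs.size())
      Seq.Libs.resize(Index + 1);
    return Seq.Libs[Index];
  }
  static const bool flow = true; // render as [ ... ] on one line
};
} // namespace yaml
} // namespace llvm
```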
Differential Revision: https://reviews.llvm.org/D55390
llvm-svn: 348551
We shouldn't care about the debug location for a node that
we're creating, but attaching the root of the pattern should
be the best effort. (If this is not true, then we are doing
it wrong all over the SDAG).
This is no-functional-change-intended, and there are no
regression test diffs...and that's what I expected. But
there's a similar line above this diff, where those
assumptions apparently do not hold.
llvm-svn: 348550
DemandedBits and BDCE currently only support scalar integers. This
patch extends them to also handle vector integer operations. In this
case, bits are not tracked for individual vector elements; instead, a
bit is demanded if it is demanded for any of the elements. This matches
the behavior of computeKnownBits in ValueTracking and
SimplifyDemandedBits in InstCombine.
The getDemandedBits() method can now only be called on instructions that
have integer or vector of integer type. Previously it could be called on
any sized instruction (even if it was not particularly useful). The size
of the return value is now always the scalar size in bits (while
previously it was the type size in bits).
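A small usage sketch (the surrounding setup is assumed; after this change the queried
instruction may also have a vector-of-integer type):
```
#include "llvm/ADT/APInt.h"
#include "llvm/Analysis/AssumptionCache.h"
#include "llvm/Analysis/DemandedBits.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Querying a vector integer instruction now returns a mask of the scalar bit
// width whose bits are the union of the demanded bits of all lanes.
static APInt demandedBitsOf(Function &F, Instruction &I) {
  AssumptionCache AC(F);
  DominatorTree DT(F);
  DemandedBits DB(F, AC, DT);
  return DB.getDemandedBits(&I);
}
```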
Differential Revision: https://reviews.llvm.org/D55297
llvm-svn: 348549
Summary:
The call is duplicated in the handlers of all Expr subclasses.
This change makes it easy to split statement handling out to
TextNodeDumper.
Reviewers: aaron.ballman
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D55339
llvm-svn: 348546