Handle DBG_INSTR_REF instructions in LiveDebugValues, to determine and
propagate variable locations. The logic is fairly straightforward:
Collect a map of debug-instruction-number to the machine value numbers
generated in the first walk through the function. While building the
variable-value transfer function, whenever we see a DBG_INSTR_REF, look up
the instruction it refers to and pick the machine value number it generates.
That's it; the rest of LiveDebugValues continues as normal.
Awkwardly, there are two kinds of instruction numbering happening here: the
offset into the block (which is how machine value numbers are determined),
and the numbers that we label instructions with when generating
DBG_INSTR_REFs.
I've also restructured the TransferTracker redefVar code a little, to
separate some DBG_VALUE-specific operations into their own method. The
changes around redefVar should be largely NFC, while allowing
DBG_INSTR_REFs to specify a value number rather than just a location.
Differential Revision: https://reviews.llvm.org/D85771
As discussed in D89952,
instcombine can sometimes find a way to reduce similar patterns,
but it is incomplete.
InstSimplify uses the computeConstantRange() ValueTracking analysis
via simplifyICmpWithConstant(), so we just need to fill in the max
value of cttz to process any "icmp pred cttz(X), C" pattern (the
min value is initialized to zero automatically).
https://alive2.llvm.org/ce/z/Z_SLWZ
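As a minimal illustration (not taken from the patch itself), with the max value of cttz filled in, a compare against a constant at or above the bit width folds away without touching the intrinsic:
```
declare i32 @llvm.cttz.i32(i32, i1)

define i1 @src(i32 %x) {
  %t = call i32 @llvm.cttz.i32(i32 %x, i1 false)
  ; cttz of an i32 is always in [0, 32], so this compare is always true.
  %c = icmp ule i32 %t, 32
  ret i1 %c
}
```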
Follow-up to D89976.
As discussed in D89952,
instcombine can sometimes find a way to reduce similar patterns,
but it is incomplete.
InstSimplify uses the computeConstantRange() ValueTracking analysis
via simplifyICmpWithConstant(), so we just need to fill in the max
value of ctlz to process any "icmp pred ctlz(X), C" pattern (the
min value is initialized to zero automatically).
Follow-up to D89976.
As discussed in D89952,
instcombine can sometimes find a way to reduce similar patterns,
but it is incomplete.
InstSimplify uses the computeConstantRange() ValueTracking analysis
via simplifyICmpWithConstant(), so we just need to fill in the max
value of ctpop to process any "icmp pred ctpop(X), C" pattern (the
min value is initialized to zero automatically).
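A sketch of the kind of fold this enables (illustrative, not taken from the tests): ctpop of an i32 lies in [0, 32], so a compare against a constant outside that range simplifies to a constant:
```
declare i32 @llvm.ctpop.i32(i32)

define i1 @src(i32 %x) {
  %t = call i32 @llvm.ctpop.i32(i32 %x)
  ; ctpop of an i32 can never exceed 32, so this folds to 'false'.
  %c = icmp ugt i32 %t, 32
  ret i1 %c
}
```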
Differential Revision: https://reviews.llvm.org/D89976
matchBSwapOrBitReverse was hardcoded to just match bswaps. We're going to need to expose the ability to match bitreverse as well, so make that choice part of the function call.
This patch adds a specialized implementation of getIntrinsicInstrCost
and adds initial cost modeling for min/max vector intrinsics.
AArch64 NEON supports umin/smin/umax/smax for the vector types
<8 x i8>, <16 x i8>, <4 x i16>, <8 x i16>, <2 x i32> and <4 x i32>.
Notably, it does not support vectors with i64 elements.
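As an illustrative example (not from the patch), a call like the following lowers to a single NEON umin on a supported type, so the cost model should treat it as cheap:
```
declare <4 x i32> @llvm.umin.v4i32(<4 x i32>, <4 x i32>)

define <4 x i32> @min4(<4 x i32> %a, <4 x i32> %b) {
  ; Maps to one NEON umin.4s instruction on AArch64.
  %r = call <4 x i32> @llvm.umin.v4i32(<4 x i32> %a, <4 x i32> %b)
  ret <4 x i32> %r
}
```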
This change by itself should have very little impact on codegen, but in
follow-up patches I plan to teach the vectorizers to consider using
those intrinsics on platforms where it is profitable, e.g. because there
is no general 'select'-like instruction.
The current cost returned should be better for throughput, latency and size.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D89953
This patch adjusts _when_ something happens in LiveDebugValues /
InstrRefBasedLDV, to make it more amenable to dealing with DBG_INSTR_REF
instructions. There's no functional change.
In the current InstrRefBasedLDV implementation, we collect the machine
value-number transfer function for blocks at the same time as the
variable-value transfer function. After solving machine value numbers, the
variable-value transfer function is updated so that DBG_VALUEs of live-in
registers have the correct value. The same would need to be done for
DBG_INSTR_REFs, to connect instruction-references with machine value
numbers.
Rather than writing more code for that, this patch separates the two: we
collect the (machine-value-number) transfer function and solve for
machine value numbers, then step through the MachineInstrs again collecting
the variable value transfer function. This simplifies things for the next
few patches.
Differential Revision: https://reviews.llvm.org/D85760
This patch copies @vsk's fix to instcombine from D85555 over to mem2reg. The
motivation and rationale are exactly the same: When mem2reg removes an alloca,
it erases the dbg.{addr,declare} instructions which refer to the alloca. It
would be better to instead remove all debug intrinsics which describe the
contents of the dead alloca, namely all dbg.value(<dead alloca>, ...,
DW_OP_deref)'s.
As far as I can tell, prior to D80264 these `dbg.value+deref`s would have been
silently dropped instead of being made `undef`, so we're just returning to
previous behaviour with these patches.
Testing:
`llvm-lit llvm/test` and `ninja check-clang` gave no unexpected failures. Added
3 tests, each of which covers a dbg.value deletion path in mem2reg:
mem2reg-promote-alloca-1.ll
mem2reg-promote-alloca-2.ll
mem2reg-promote-alloca-3.ll
The first is based on the dexter test inlining.c from D89543. This patch also
improves the debugging experience for loop.c from D89543, which suffers
similarly after arg promotion instead of inlining.
Use the isKnownXY comparators when one of the operands can be a
scalable vector, and getFixedSize() in all the other cases.
This patch also fixes bugs related to getPrimitiveSizeInBits by using
getFixedSize() near the places that compare TypeSizes.
Differential Revision: https://reviews.llvm.org/D89703
We want to have a caching version of symbolic BE exit count
rather than recomputing it every time we need it.
Differential Revision: https://reviews.llvm.org/D89954
Reviewed By: nikic, efriedma
There are currently two main classes in the Value hierarchy which support
metadata: Instruction and GlobalObject. They implement different, yet
overlapping, APIs for metadata manipulation. This change moves the metadata
manipulation code into Value, so descendant classes can use this code for
their operations on metadata.
No functional changes intended.
Differential Revision: https://reviews.llvm.org/D67626
The devirtualization wrapper misses cases where, if it wraps a pass
manager, an individual pass may devirtualize an indirect call created by
a previous pass. For example, inlining may create a new indirect call
which is devirtualized by instcombine. Currently the devirtualization
wrapper will not see that because it only checks cgscc edges at the very
beginning and end of the pass (manager) it wraps.
This fixes some tests testing this exact behavior in the legacy PM.
This piggybacks off of updateCGAndAnalysisManagerForPass()'s detection
of promoted ref to call edges.
This supersedes one of the previous mechanisms to detect
devirtualization by keeping track of potentially promoted call
instructions via WeakTrackingVHs.
There is one more existing way of detecting devirtualization, by
checking if the number of indirect calls has decreased and the number of
direct calls has increased in a function. It handles cases where calls
to functions without definitions are promoted, and some tests rely on
that. LazyCallGraph doesn't track edges to functions without
definitions so this part can't be removed in this change.
check-llvm and check-clang with -abort-on-max-devirt-iterations-reached
turned on by default don't show any failures outside of tests specifically
testing it, so the wrapper doesn't needlessly rerun passes more than necessary.
(The NPM -O2/3 pipelines run the inliner/function simplification pipeline
under a devirtualization repeater pass up to 4 times by default).
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D89587
LD64 emits string tables which start with a space and a zero byte.
This diff adjusts StringTableBuilder for linked Mach-O binaries to match LD64's behavior.
Test plan: make check-all
Differential revision: https://reviews.llvm.org/D89561
An alwaysinline function may not get inlined in inliner-wrapper due to
the inlining order.
Previously for the following, the inliner would first inline @a() into @b(),
```
define void @a() {
entry:
  call void @b()
  ret void
}

define void @b() alwaysinline {
entry:
  br label %for.cond
for.cond:
  call void @a()
  br label %for.cond
}
```
making @b() recursive and unable to be inlined into @a(), ending up with
```
define void @a() {
entry:
  call void @b()
  ret void
}

define void @b() alwaysinline {
entry:
  br label %for.cond
for.cond:
  call void @b()
  br label %for.cond
}
```
Running the always-inliner first makes sure that we respect alwaysinline in more cases.
Fixes https://bugs.llvm.org/show_bug.cgi?id=46945.
Reviewed By: davidxl, rnk
Differential Revision: https://reviews.llvm.org/D86988
Move some common code from SampleProfileReaderExtBinary/SampleProfileWriterExtBinary
to their parent classes.
SampleProfileReaderExtBinary/SampleProfileWriterExtBinary specify the typical
section layout currently used by SampleFDO. Currently a lot of section
reader/writer code stays in the two classes. However, as we expect to have more
types of SampleFDO profiles, we hope those new types of profiles can share
the common sections while configuring their own sections easily with minimal
change. That is why I move some common stuff from
SampleProfileReaderExtBinary/SampleProfileWriterExtBinary to
SampleProfileReaderExtBinaryBase/SampleProfileWriterExtBinaryBase so new
profile classes inheriting from the base classes can reuse them.
Differential Revision: https://reviews.llvm.org/D89524
Move the code which adjusts the immediate/predicate on a G_ICMP to
AArch64PostLegalizerLowering.
This
- Reduces the number of places we need to test for optimized compares in the
selector. We know that the compare should have been simplified by the time it
hits the selector, so we can avoid testing this in selects, brconds, etc.
- Allows us to potentially fold more compares (previously, this optimization
was only done after calling `tryFoldCompare`; this may allow us to hit some more
TST cases)
- Simplifies the selection code in `emitIntegerCompare` significantly; we can
just use an emitSUBS function.
- Allows us to avoid checking that the predicate has been updated after
`emitIntegerCompare`.
Also add a utility header file for things that may be useful in the selector
and various combiners. No need for an implementation file at this point, since
it's just one constexpr function for now. I've run into a couple cases where
having one of these would be handy, so might as well add it here. There are
a couple functions in the selector that can probably be factored out into
here.
Differential Revision: https://reviews.llvm.org/D89823
There are a lot of combines in AArch64PostLegalizerCombiner which exist to
facilitate instruction matching in the selector. (E.g. matching for G_ZIP and
other shuffle vector pseudos)
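For instance (an illustrative IR-level sketch, not a test from this patch), a shuffle whose mask interleaves the low halves of its sources corresponds to a zip1 and can be matched into a G_ZIP-style pseudo before selection:
```
define <8 x i8> @zip_lo(<8 x i8> %a, <8 x i8> %b) {
  ; Interleaves the low halves of %a and %b; selects to a single zip1 on AArch64.
  %z = shufflevector <8 x i8> %a, <8 x i8> %b,
                     <8 x i32> <i32 0, i32 8, i32 1, i32 9, i32 2, i32 10, i32 3, i32 11>
  ret <8 x i8> %z
}
```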
It still makes sense to select these instructions at -O0.
Matching earlier in a combiner can reduce complexity in the selector
significantly. For example, a good portion of our selection code for compares
would be a lot easier to represent in a combine.
This patch moves matching combines into a "AArch64PostLegalizerLowering"
combiner which runs at all optimization levels.
Also, while we're here, improve the documentation for the
AArch64PostLegalizerCombiner, and fix up the filepath in its file comment.
Also add an 'r' which somehow got dropped from a bunch of function names.
https://reviews.llvm.org/D89820
Per asbirlea's comment, assert that only instructions, constants
and arguments are passed to this API. Simply returning true
would not be correct for special Value subclasses like MemoryAccess.
Visited phi blocks only need to be added for the duration of the
recursive alias queries; they should not leak into the following code.
Once again, while this also improves analysis precision, this is
mainly intended to clarify the applicability scope of VisitedPhiBBs.
We only need the VisitedPhiBBs to disambiguate comparisons of
values from two different loop iterations. If we're comparing
two phis from the same basic block in lock-step, the compared
values will always be on the same iteration.
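A minimal illustration (names are made up): two phis in the same block walked in lock-step always carry values from the same iteration, so no phi-block bookkeeping is needed to compare them:
```
define void @lockstep(i32* %p0, i32* %q0) {
entry:
  br label %loop
loop:
  ; %p and %q are compared in lock-step; on any query they hold values
  ; from the same iteration, so VisitedPhiBBs is not required for them.
  %p = phi i32* [ %p0, %entry ], [ %p.next, %loop ]
  %q = phi i32* [ %q0, %entry ], [ %q.next, %loop ]
  %p.next = getelementptr i32, i32* %p, i64 1
  %q.next = getelementptr i32, i32* %q, i64 1
  br label %loop
}
```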
While this also increases precision, this is mainly intended
to clarify the scope of VisitedPhiBBs.
This reverts commit 26ee8aff2b.
It's necessary to insert a bitcast of the pointer operand of a lifetime
marker if it has an opaque pointer type.
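For context, the lifetime intrinsics take an i8* operand, so a marker on a differently typed pointer needs a bitcast first; a generic sketch (not the reverted test case):
```
%struct.S = type { i32, i32 }
declare void @llvm.lifetime.start.p0i8(i64 immarg, i8* nocapture)

define void @f() {
  %buf = alloca %struct.S
  ; The intrinsic expects i8*, so the pointer operand must be bitcast.
  %cast = bitcast %struct.S* %buf to i8*
  call void @llvm.lifetime.start.p0i8(i64 8, i8* %cast)
  ret void
}
```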
rdar://70560161
Testing reveals that lldb and gdb have some problems with supporting
DW_OP_convert: gdb with Split DWARF tries to resolve the CU-relative
DIE offset relative to the skeleton DIE. lldb tries to treat the offset
as absolute, which judging by the llvm-dsymutil support for
DW_OP_convert, I guess works OK in MachO? (though probably llvm-dsymutil
is producing invalid DWARF by resolving the relative reference to an
absolute one?).
Specifically this disables DW_OP_convert usage in DWARFv5 if:
* Tuning for GDB and using Split DWARF
* Tuning for LLDB and not targeting MachO
When performing a call slot optimization with a GEP destination, the transform
will currently usually fail, because the GEP is directly before the
memcpy and as such does not dominate the call. We should move it
above the call if that satisfies the domination requirement.
I think that a constant-index GEP is the only useful thing to move
here, as otherwise isDereferenceablePointer couldn't look through
it anyway. As such I'm not trying to generalize this further.
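A sketch of the shape of IR in question (types and names are illustrative): the constant-index GEP is emitted directly before the memcpy, so it does not dominate the call that fills the temporary until it is hoisted above it:
```
%struct.S = type { i64, [8 x i8] }
declare void @fill(i8* nocapture)
declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture readonly, i64, i1 immarg)

define void @copy_into_field(%struct.S* %s) {
  %tmp = alloca [8 x i8]
  %src = bitcast [8 x i8]* %tmp to i8*
  call void @fill(i8* %src)            ; writes the temporary
  ; The GEP below must be moved above the call for the call slot
  ; optimization to have @fill write into the field directly.
  %field = getelementptr inbounds %struct.S, %struct.S* %s, i64 0, i32 1, i64 0
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %field, i8* %src, i64 8, i1 false)
  ret void
}
```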
Differential Revision: https://reviews.llvm.org/D89623
Make member functions const where possible, use LLVM_DEBUG to print debug traces
rather than a custom option, pass by reference to avoid null checking, ...
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D89895
When InstCombine removes an alloca, it erases the dbg.{addr,declare}
instructions which refer to the alloca. It would be better to instead
remove all debug intrinsics which describe the contents of the dead
alloca, namely all dbg.value(<dead alloca>, ..., DW_OP_deref)'s.
This effectively undoes work performed in an InstCombine run earlier in
the pipeline by LowerDbgDeclare, which inserts DW_OP_deref dbg.values
before CallInst users of an alloca. The motivating example looks like:
```
define void @foo(i32 %0) {
  %a = alloca i32                           ; This alloca is erased.
  store i32 %0, i32* %a
  dbg.value(i32 %0, "arg0")                 ; This dbg.value survives.
  dbg.value(i32* %a, "arg0", DW_OP_deref)
  call void @trivially_inlinable_no_op(i32* %a)
  ret void
}
```
If the DW_OP_deref dbg.value is not erased, it becomes dbg.value(undef)
after inlining, making "arg0" unavailable. But we already have dbg.value
descriptions of the alloca's value (from LowerDbgDeclare), so the
DW_OP_deref dbg.value cannot serve its purpose of describing an
initialization of the alloca by some callee. It invalidates other useful
dbg.values, causing large gaps in location coverage, so we should delete
it (even though doing so may cause stale dbg.values to appear, if
there's a dead store to `%a` in @trivially_inlinable_no_op).
OTOH, it wouldn't be correct to delete all dbg.value descriptions of an
alloca. Note that it's possible to describe a variable that takes on
different pointer values, e.g.:
```
void use(int *);
void t(int a, int b) {
int *local = &a; // dbg.value(i32* %a.addr, "local")
local = &b; // dbg.value(i32* undef, "local")
use(&a); // (note: %b.addr is optimized out)
local = &a; // dbg.value(i32* %a.addr, "local")
}
```
In this example, the alloca for "b" is erased, but we need to describe
the value of "local" as <unavailable> before the call to "use". This
prevents "local" from appearing to be equal to "&a" at the callsite.
rdar://66592859
Differential Revision: https://reviews.llvm.org/D85555
Non-instruction defs like arguments, constants or global values
always dominate all instructions/uses inside the function. This
case currently needs to be treated separately by the caller, see
https://reviews.llvm.org/D89623#inline-832818 for an example.
This patch makes the dominator tree APIs accept a Value instead of
an Instruction, and always return true for the non-Instruction case.
A complication here is that BasicBlocks are also Values. For that
reason we can't support the dominates(Value *, BasicBlock *)
variant, as it would conflict with dominates(BasicBlock *, BasicBlock *),
which has different semantics. For the other two APIs we assert
that the passed value is not a BasicBlock.
Differential Revision: https://reviews.llvm.org/D89632
Add new loop metadata amdgpu.loop.unroll.threshold to allow the initial AMDGPU
specific unroll threshold value to be specified on a loop by loop basis.
The intention is to be able to allow more nuanced hints, e.g. specifying a
low threshold value to indicate that a loop may be unrolled if cheap enough
rather than using the all or nothing llvm.loop.unroll.disable metadata.
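A sketch of how the hint could be attached to a loop; the exact operand encoding (a string tag plus an i32 value) is an assumption here, modelled on the other llvm.loop.unroll metadata:
```
define void @f(i32* %p, i32 %n) {
entry:
  br label %loop
loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %addr = getelementptr i32, i32* %p, i32 %i
  store i32 %i, i32* %addr
  %i.next = add i32 %i, 1
  %cond = icmp ult i32 %i.next, %n
  br i1 %cond, label %loop, label %exit, !llvm.loop !0
exit:
  ret void
}

!0 = distinct !{!0, !1}
!1 = !{!"amdgpu.loop.unroll.threshold", i32 100}
```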
Differential Revision: https://reviews.llvm.org/D84779
It was already disabled under -Oz in
buildFunctionSimplificationPipeline(), but not in
buildModuleOptimizationPipeline()/addPGOInstrPasses().
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D89927
This commit marks i16 MULH as Expand in the AMDGPU backend,
which is necessary after the refactoring in D80485.
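For reference, a pattern that typically gets combined into a multiply-high node looks roughly like this (an illustrative sketch, not taken from the patch); marking i16 MULH as Expand makes the backend lower such nodes back out rather than trying to select an unsupported operation:
```
define i16 @mulhs16(i16 %a, i16 %b) {
  ; Widen, multiply, and take the high 16 bits of the 32-bit product.
  %as = sext i16 %a to i32
  %bs = sext i16 %b to i32
  %m  = mul i32 %as, %bs
  %hi = lshr i32 %m, 16
  %r  = trunc i32 %hi to i16
  ret i16 %r
}
```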
Differential Revision: https://reviews.llvm.org/D89965
Both FastRegAlloc and LiveDebugVariables/greedy need to cope with
DBG_INSTR_REFs. Neither of them actually needs to take any action, other than
passing DBG_INSTR_REFs through: variable location information doesn't refer
to any registers at this stage.
LiveDebugVariables stashes the instruction information in a tuple, then
re-creates it later. This is only necessary as the register allocator
doesn't expect to see any debug instructions while it's working. No
equivalence classes or interval splitting is required at all!
No changes are needed for the fast register allocator, as it just ignores
debug instructions. The test added checks that both of them preserve
DBG_INSTR_REFs.
This also expands ScheduleDAGInstrs.cpp to treat DBG_INSTR_REFs the same as
DBG_VALUEs when rescheduling instructions around. The current movement of
DBG_VALUEs around is less than ideal, but it's not a regression to make
DBG_INSTR_REFs subject to the same movement.
Differential Revision: https://reviews.llvm.org/D85757