NFC. Extract IsShrinkable into a helper function, and
make Subtarget a member variable.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D99099
Previously we used selectImm for RV64 and isel patterns for
RV32. This should be NFC, but will allow RV32 and RV64 to share
improvements in the future. For example, it might be useful to
use BSETI from Zbs to make single-bit constants.
Reviewed By: luismarques
Differential Revision: https://reviews.llvm.org/D98877
Copy SchedRW from pseudos to real instructions so that llvm-mca has
access to it. This is NFC for normal compiler codegen, which schedules
pseudos, not real instructions.
Add an llvm-mca test for some high latency double-precision instructions
as a smoke test.
Differential Revision: https://reviews.llvm.org/D99187
`FoldBranchToCommonDest()` has a certain budget (`-bonus-inst-threshold=`)
for bonus instruction duplication, and it currently calculates the cost
as if it will actually duplicate into each predecessor.
But even ignoring the budget, it won't always duplicate into each
predecessor; there are some correctness and profitability checks.
So when calculating the cost, we should first determine into which
blocks we will *actually* duplicate, and only then use that block count
for budgeting.
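A minimal sketch of the idea in C++ (the predicate name is hypothetical,
standing in for the actual correctness and profitability checks; this is
not the real SimplifyCFG code):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/CFG.h"
  using namespace llvm;

  // Hypothetical predicate: would we really clone bonus instructions
  // into this predecessor?
  static bool canDuplicateBonusInstsInto(BasicBlock *Pred);

  static bool withinBonusBudget(BasicBlock *BB, unsigned NumBonusInsts,
                                unsigned BonusInstThreshold) {
    unsigned NumDupTargets = 0;
    for (BasicBlock *Pred : predecessors(BB))
      if (canDuplicateBonusInstsInto(Pred))
        ++NumDupTargets;
    // Budget against the blocks we will *actually* duplicate into,
    // not against the raw predecessor count.
    return NumBonusInsts * NumDupTargets <= BonusInstThreshold;
  }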
Before this patch, register writes were always invalidated by the
RegisterFile at the instruction commit stage, so the RegisterFile often
lost track of the `execute cycle` of writes that had already been
committed. While this was not problematic for non-delayed reads, it
sometimes led to inaccurate read latency computations in the presence
of negative read-advance cycles.
This patch fixes the issue by changing how the RegisterFile component
internally keeps track of the `execute cycle` information of each
write. On every instruction executed, the RegisterFile gets notified
by the RetireStage, so that it can internally record the execute
cycle of each executed write.
The `execute cycle` information is stored within WriteRef itself, and
it is not invalidated when the write is committed.
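A hedged sketch of the new bookkeeping (types here are heavily
simplified; these are not mca's real interfaces):

  #include <map>

  struct WriteRef {
    unsigned WriteID = 0;
    unsigned ExecuteCycle = 0; // stored in the WriteRef itself
    bool Committed = false;    // committing no longer erases ExecuteCycle
  };

  struct RegisterFile {
    std::map<unsigned, WriteRef> Writes;

    // Called by the RetireStage for every executed write, so that the
    // execute cycle survives past the commit stage.
    void onWriteExecuted(unsigned WriteID, unsigned Cycle) {
      Writes[WriteID].ExecuteCycle = Cycle;
    }
    // With negative read-advance cycles a read may need the execute
    // cycle of an already-committed write to compute its true latency.
  };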
We clone bonus instructions to the end of the predecessor block,
and then use `SSAUpdater::RewriteUseAfterInsertions()`.
But that only handles the cases where the uses to be rewritten
are either in a different block from the def, or come after the def.
But in some loop cases, the external use may be at the beginning of the
predecessor block, before the newly cloned bonus instruction.
`SSAUpdater::RewriteUseAfterInsertions()` does not deal with that.
Notably, the external use can't happen to be both in the same block
and *after* the newly-cloned instruction, because of the fold preconditions.
To properly handle these cases, when the use is in the same block,
we should instead use `SSAUpdater::RewriteUse()`.
Note that they do the same thing for PHI users.
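A minimal sketch of the dispatch (the surrounding code is assumed; this
is not the exact SimplifyCFG fix):

  #include "llvm/IR/Instructions.h"
  #include "llvm/Transforms/Utils/SSAUpdater.h"
  using namespace llvm;

  static void rewriteExternalUse(SSAUpdater &SSAUpdate,
                                 Instruction *NewBonusInst, Use &U) {
    auto *UserInst = cast<Instruction>(U.getUser());
    if (UserInst->getParent() == NewBonusInst->getParent())
      // Same block: per the fold preconditions the use comes *before*
      // the newly-cloned instruction, so RewriteUseAfterInsertions()
      // would be wrong here.
      SSAUpdate.RewriteUse(U);
    else
      SSAUpdate.RewriteUseAfterInsertions(U);
  }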
Fixes https://bugs.llvm.org/show_bug.cgi?id=49510
Likely Fixes https://bugs.llvm.org/show_bug.cgi?id=49689
This patch builds upon the initial BUILD_VECTOR work introduced in
D98700. It further optimizes the lowering of BUILD_VECTOR by using
VSELECT operations to effectively insert repeated elements into the
vector with relatively few instructions. This allows us to optimize more
BUILD_VECTORs without significantly increasing the size of the generated
code.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D98969
2nd try (original: 27ae17a6b0) with a fix/test for the crash. We must
make sure that TTI is available before trying to use it, because it is
not a required analysis (which might itself be another bug).
Original commit message:
This is one step towards solving:
https://llvm.org/PR49336
In that example, we disregard the recommended usage of builtin_expect,
so an expensive (unpredictable) branch is folded into another branch
that is guarding it.
Here, we read the profile metadata to see if the 1st (predecessor)
condition is likely to cause execution to bypass the 2nd (successor)
condition before merging conditions by using logic ops.
Differential Revision: https://reviews.llvm.org/D98898
The IR is saved in its print form before each pass is started and a
signal handler is registered. If the compilation crashes, the signal
handler will print the saved IR to dbgs(). This behaviour can be
combined with -print-module-scope to get the IR for the complete
module. Note that this option only works with the new pass manager.
Author: Jamie Schmeiser <schmeise@ca.ibm.com>
Reviewed By: aeubanks (Arthur Eubanks), yrouban (Yevgeny Rouban)
Differential Revision: https://reviews.llvm.org/D86657
This avoids a temporary and a memcpy call when computing large expressions.
It's basically some kind of poor man's expression template, but it seems easier
to maintain to have a single generic `apply` call instead of the whole
expression template machinery here.
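A generic illustration of the idea in C++ (hand-written example types,
not the patch's actual code):

  #include <cstddef>

  struct Vec {
    static constexpr std::size_t N = 1024;
    double Data[N];

    // Dst = Fn(Args...) computed element by element: no temporary
    // object and no memcpy of intermediate results.
    template <typename Op, typename... Ts>
    static void apply(Vec &Dst, Op Fn, const Ts &...Args) {
      for (std::size_t I = 0; I != N; ++I)
        Dst.Data[I] = Fn(Args.Data[I]...);
    }
  };

  // Usage: A = B * C + D in a single pass, instead of materializing
  // the temporary B * C first.
  static void fusedMulAdd(Vec &A, const Vec &B, const Vec &C,
                          const Vec &D) {
    Vec::apply(A, [](double b, double c, double d) { return b * c + d; },
               B, C, D);
  }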
Differential Revision: https://reviews.llvm.org/D98176
Follow up from D92955 and D83636. This patch makes the base cpp files
OMP.cpp and ACC.cpp normal files and they now include the XXX.inc file
generated by tablegen. This reduces the number of files generated by the
DirectiveEmitter backend and makes it closer to the proposal in D83636.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D93560
Apply the way createLocalIndirectStubsManagerBuilder() deals with
unsupported architectures to createLocalLazyCallThroughManager(). The
returned call-through manager is dysfunctional: it runs into an
unreachable as soon as a lazy JIT attempts to use it. However, this
results in broader platform support for lli in default (greedy) ORC
mode, where no lazy materialization is required.
As noted in D98152, we need to patch SLP to avoid regressions when
we start canonicalizing to integer min/max intrinsics.
Most of the real work to make this possible was in:
7202f47508
Differential Revision: https://reviews.llvm.org/D98981
- Give an unwieldy repeated expression a name
- Use a ranged `for` basic block iterator
Reviewed by: nikic, dexonsmith
Differential Revision: https://reviews.llvm.org/D98957
Hoist the early return for decl-only clones to before the DIFinder
calculation.
Also fix an out-of-date assert message after invariants changed in
22a52dfddc.
Reviewed by: nikic, dexonsmith
Differential Revision: https://reviews.llvm.org/D98957
A bug was found in InstCombineCasts where a function was only
implemented to work with FixedVectors, causing a crash when a
ScalableVector was passed to it.
This commit introduces a fix together with a regression test that
recreates the failure.
Differential Revision: https://reviews.llvm.org/D98351
The summary remarks are generated on a per-function basis. Using the
first instruction's location is sub-optimal for two reasons:
1. Sometimes the first instruction is missing !dbg.
2. The location of the first instruction may be misleading.
Instead, just use the location of the function directly.
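A minimal sketch of the replacement (assumed shape; the names and
includes may differ from the actual patch):

  #include "llvm/IR/DebugInfoMetadata.h"
  #include "llvm/IR/DebugLoc.h"
  #include "llvm/IR/Function.h"
  using namespace llvm;

  static DebugLoc getRemarkLocation(const Function &F) {
    // Use the function's own debug info rather than the first
    // instruction's DebugLoc, which may be absent or misleading.
    if (DISubprogram *SP = F.getSubprogram())
      return DILocation::get(SP->getContext(), SP->getLine(),
                             /*Column=*/0, SP);
    return DebugLoc(); // no debug info attached to the function
  }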
In the function ConvertVPTBlocks(), it is assumed that every
instruction within a vector-predicated block is predicated. This does
not hold for the debug instructions used by LLVM.
Because of this, an assertion failure was reached when an input
contained debug instructions inside VPT blocks; in non-assert builds,
an out-of-bounds memory access took place.
The present patch properly handles the case of debug instructions.
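The shape of the fix, sketched (assumed loop structure, not the exact
ConvertVPTBlocks() code):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  static void visitVPTBlockInsts(MachineBasicBlock &MBB) {
    for (MachineInstr &MI : MBB) {
      if (MI.isDebugInstr())
        continue; // debug instructions are not predicated; skip them
      // ... handle the predicated instruction ...
    }
  }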
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D99075
In places where we create a ConstantVector whose elements are a
linear sequence of the form <start, start + 1, start + 2, ...>,
I've changed the code to make use of CreateStepVector, which creates
a vector with the sequence <0, 1, 2, ...>, and a vector addition
operation. This patch is a non-functional change, since the output
from the vectoriser remains unchanged for fixed length vectors and
there are existing asserts that still fire when attempting to use
scalable vectors for vectorising induction variables.
In a later patch we will enable support for scalable vectors
in InnerLoopVectorizer::getStepVector(), which relies upon the new
stepvector intrinsic in IRBuilder::CreateStepVector.
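A hedged sketch of the new emission strategy (the helper and its
surroundings are assumptions, not the vectoriser's actual code):

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Build <Start, Start + 1, Start + 2, ...> for an integer vector
  // type VTy; Start is a scalar of VTy's element type.
  static Value *buildLinearSequence(IRBuilder<> &B, Value *Start,
                                    VectorType *VTy) {
    Value *Step = B.CreateStepVector(VTy);       // <0, 1, 2, ...>
    Value *Splat =
        B.CreateVectorSplat(VTy->getElementCount(), Start);
    return B.CreateAdd(Splat, Step);             // <Start, Start+1, ...>
  }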
Differential Revision: https://reviews.llvm.org/D97861
This patch adds a new llvm.experimental.stepvector intrinsic,
which takes no arguments and returns a linear integer sequence of
values of the form <0, 1, ...>. It is primarily intended for
scalable vectors, although it will work for fixed width vectors
too. It is intended that later patches will make use of this
new intrinsic when vectorising induction variables, currently only
supported for fixed width. I've added a new CreateStepVector
method to the IRBuilder, which will generate a call to this
intrinsic for scalable vectors and fall back on creating a
ConstantVector for fixed width.
For scalable vectors this intrinsic is lowered to a new ISD node
called STEP_VECTOR, which takes a single constant integer argument
as the step. During lowering this argument is set to a value of 1.
The reason for this additional argument at the codegen level is
because in future patches we will introduce various generic DAG
combines such as
mul step_vector(1), 2 -> step_vector(2)
add step_vector(1), step_vector(1) -> step_vector(2)
shl step_vector(1), 1 -> step_vector(2)
etc.
that encourage a canonical format for all targets. This hopefully
means all other targets supporting scalable vectors can benefit
from this too.
I've added cost model tests for both fixed width and scalable
vectors:
llvm/test/Analysis/CostModel/AArch64/neon-stepvector.ll
llvm/test/Analysis/CostModel/AArch64/sve-stepvector.ll
as well as codegen lowering tests for fixed width and scalable
vectors:
llvm/test/CodeGen/AArch64/neon-stepvector.ll
llvm/test/CodeGen/AArch64/sve-stepvector.ll
See this thread for discussion of the intrinsic:
https://lists.llvm.org/pipermail/llvm-dev/2021-January/147943.html
This patch adds an optimization for mask-vector BUILD_VECTOR nodes whose
elements are all constants or undef. It lowers such operations by
building up the vector via a series of integer operations, in which
multiple mask elements are inserted into a vector at a time via
i8/i16/i32/i64 element types. The final result is then bitcast from that
integer vector.
We restrict this optimization in certain circumstances when optimizing
for size. If we are required to use more than one integer insert
operation, then it will likely increase code size compared with using a
load from a constant pool.
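A conceptual illustration of the packing in plain C++ (not the actual
SelectionDAG lowering):

  #include <cstdint>
  #include <vector>

  // Fold 16 constant mask bits into a single i16 element, so one
  // integer insert replaces 16 single-bit inserts; the resulting
  // integer vector is then bitcast back to the i1 mask vector.
  static uint16_t packMaskBits(const std::vector<bool> &Mask) {
    uint16_t Bits = 0;
    for (unsigned I = 0; I < 16 && I < Mask.size(); ++I)
      Bits |= uint16_t(Mask[I]) << I;
    return Bits;
  }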
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D98860
The name is included when printing in DOT mode. Also print it in non-DOT
mode after 93a9d2de8f.
This will become more important to distinguish different plans once
VPlans are gradually refined.
Helps fix cases where we've splatted smaller types to a wider vector element type without needing the upper bits.
Avoid this on AVX512 targets as that can affect broadcast folding.
The meetBDVState utility may set the base pointer for a conflict
state. At the moment the base for a conflict state has no meaning, but
it is used when comparing BDV states. This comparison serves as an
indicator of progress made during an iteration, and the RS4GC pass
loops until it reaches a fixed point.
As a result, for the added test, on each iteration the state of some
phi nodes is updated with a different base value for the conflict
state, which counts as progress even though no further progress is
possible for a conflict state. In reality the base value is merely
transferred from one state to another, and the pass mistakes these
transfers for progress.
The test is very fragile: the traversal order of states and of phi
node operands plays an important role.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D99058
This patch adds a fallthrough bit to basic block metadata, indicating whether the basic block can fall through without taking any branches. The bit will help us avoid an Intel LBR bug which results in occasional duplicate entries at the beginning of the LBR stack.
This patch uses `MachineBasicBlock::canFallThrough()` to set the bit. This is not a const method because it eventually calls `TargetInstrInfo::analyzeBranch`, but it calls that function with the default `AllowModify=false`. So we can either make the argument to `getBBAddrMapMetadata` non-const, or we can use `const_cast` when calling `canFallThrough`. I decided to go with the latter since this is purely due to legacy code, and in general we should not allow the BasicBlock to be mutated during `getBBAddrMapMetadata`.
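The const_cast in question, sketched (the wrapper name is hypothetical):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  using namespace llvm;

  static bool blockCanFallThrough(const MachineBasicBlock &MBB) {
    // canFallThrough() is non-const only because analyzeBranch is;
    // it is invoked with AllowModify=false, so nothing is mutated.
    return const_cast<MachineBasicBlock &>(MBB).canFallThrough();
  }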
Reviewed By: tmsriram
Differential Revision: https://reviews.llvm.org/D96918
It doesn't look like any instructions have ever been assigned to these classes.
Reviewed By: HsiangKai
Differential Revision: https://reviews.llvm.org/D99050
This restores previous behaviour and is a step toward removing
unbundling entirely.
Reviewed By: foad, rampitec
Differential Revision: https://reviews.llvm.org/D99061
Lookup tables generate non-PIC-friendly code, which requires dynamic relocation, as described in:
https://bugs.llvm.org/show_bug.cgi?id=45244
This patch adds a new pass that converts lookup tables to relative lookup tables to make them PIC-friendly.
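A conceptual illustration of the load side in C++ (assumed layout; the
pass rewrites IR and does not emit this source):

  #include <cstdint>

  // Each table entry is a 32-bit offset relative to the table base,
  // which the linker can resolve statically, instead of an absolute
  // pointer that would need a dynamic relocation under PIC.
  static const char *lookup(const int32_t *RelTable, unsigned Idx) {
    return reinterpret_cast<const char *>(
        reinterpret_cast<std::intptr_t>(RelTable) + RelTable[Idx]);
  }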
Differential Revision: https://reviews.llvm.org/D94355
Sometimes you want to get a type with the same vector element count
as the current type but a different element type, and there's no QOL
wrapper to do that. Add one.
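A minimal sketch of such a wrapper (the name here is hypothetical):

  #include "llvm/IR/DerivedTypes.h"
  using namespace llvm;

  static Type *getWithNewElementType(Type *Ty, Type *EltTy) {
    // Keep the (possibly scalable) element count, swap the element type.
    return VectorType::get(EltTy,
                           cast<VectorType>(Ty)->getElementCount());
  }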
This reverts commit 27ae17a6b0.
There are bot failures that end with:
#4 0x00007fff7ae3c9b8 CrashRecoverySignalHandler(int) CrashRecoveryContext.cpp:0:0
#5 0x00007fff84e504d8 (linux-vdso64.so.1+0x4d8)
#6 0x00007fff7c419a5c llvm::TargetTransformInfo::getPredictableBranchThreshold() const (/home/buildbots/ppc64le-clang-multistage-test/clang-ppc64le-multistage/stage1.install/bin/../lib/libLLVMAnalysis.so.13git+0x479a5c)
...but not sure how to trigger that yet.
This is an alternative to D98391/D98585, playing things more
conservatively. If AllowRefinement == false, then we don't use
InstSimplify methods at all, and instead explicitly implement a
small number of non-refining folds. Most cases are handled by
constant folding, and I only had to add three folds to cover
our unit tests / test-suite. While this may lose some optimization
power, I think it is safer to approach from this direction, given
how many issues this code has already caused.
Differential Revision: https://reviews.llvm.org/D99027
These intrinsics don't need to be marked as writing arbitrary memory;
it's sufficient for them to write inaccessible memory (aka "side
effect") to preserve control dependencies. This means less special-casing
in BasicAA. This is intended as an alternative to D98925.
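For illustration, the corresponding function attribute in C++ (hedged:
the patch itself changes the intrinsic definitions, not attributes
applied at run time):

  #include "llvm/IR/Function.h"
  using namespace llvm;

  static void markSideEffectOnly(Function &F) {
    // "Writes inaccessible memory" still pins control dependencies,
    // but lets BasicAA prove no-alias with any real memory location.
    F.addFnAttr(Attribute::InaccessibleMemOnly);
  }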
Differential Revision: https://reviews.llvm.org/D99022
This is one step towards solving:
https://llvm.org/PR49336
In that example, we disregard the recommended usage of builtin_expect,
so an expensive (unpredictable) branch is folded into another branch
that is guarding it.
Here, we read the profile metadata to see if the 1st (predecessor)
condition is likely to cause execution to bypass the 2nd (successor)
condition before merging conditions by using logic ops.
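A hedged sketch of the check (the helper name, bypass-direction flag,
and threshold are assumptions, not the actual SimplifyCFG code):

  #include "llvm/IR/Instructions.h"
  #include <cstdint>
  using namespace llvm;

  static bool worthMergingConditions(BranchInst *PredBr,
                                     bool BypassOnTrue) {
    uint64_t TWeight, FWeight;
    if (!PredBr->extractProfMetadata(TWeight, FWeight))
      return true; // no profile data: keep the existing behavior
    uint64_t Bypass = BypassOnTrue ? TWeight : FWeight;
    uint64_t Total = TWeight + FWeight;
    // If execution usually bypasses the successor's condition, merging
    // would evaluate the (unpredictable) condition unconditionally.
    return Total == 0 || Bypass * 5 < Total * 4; // "likely" ~= 80%
  }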
Differential Revision: https://reviews.llvm.org/D98898
This is no-functional-change intended (NFC), but needed to allow
optimizer passes to use the API. See D98898 for a proposed usage
by SimplifyCFG.
I'm simplifying the code by removing the cl::opt. That was added
with the original commit in D19488, but I don't see any evidence in
regression tests that it was ever used. Target-specific overrides
can use the usual patterns to adjust as necessary.
We could also restore that cl::opt, but it was not clear to me
exactly how to do it in the convoluted TTI class structure.