Summary:
Replace uses of std::function pointers or references, or of Optional,
with function_ref, since the usage pattern allows it. If the function is
optional, use a default parameter value (nullptr). This led to a few
parameter reshuffles to push all optionals to the end of the parameter
list.
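A minimal sketch of the resulting pattern (the function and callback
below are illustrative, not taken from the patch):

  #include "llvm/ADT/STLExtras.h"

  // The optional callback is a function_ref defaulted to nullptr and
  // placed at the end of the parameter list.
  static int sumSquares(int N,
                        llvm::function_ref<void(int)> OnTerm = nullptr) {
    int Sum = 0;
    for (int I = 0; I < N; ++I) {
      int Sq = I * I;
      if (OnTerm) // function_ref is testable like a pointer
        OnTerm(Sq);
      Sum += Sq;
    }
    return Sum;
  }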
Reviewers: davidxl, dblaikie
Subscribers: arsenm, jvesely, nhaehnle, eraman, hiraditya, haicheng, kerbowa, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79917
We need to use it to handle <16 x double> indirect indexes
in the AMDGPU BE.
The only visible change from adding it is in the ARM cost model. To me
it looks reasonable: previously, each doubling of the vector size
quadrupled the cost up to size 8 and then only doubled it. Now it also
quadruples beyond that, which seems a logical progression to me.
Actual AMDGPU code is to follow; this is the common part, plus
load/store legalization in the AMDGPU BE so as not to break what
works now.
Differential Revision: https://reviews.llvm.org/D79952
The fact that loads and stores can have the alignment missing is a
constant source of confusion: code that usually works can break down in
rare cases. So fix the LoadInst API so the alignment is never missing.
To reduce the number of changes required to make this work, IRBuilder
and certain LoadInst constructors will grab the module's datalayout and
compute the alignment automatically. This is the same alignment
instcombine would eventually apply anyway; we're just doing it earlier.
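A sketch of what this looks like for callers (illustrative helper, not
code from the patch):

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Creating a load without an explicit alignment no longer leaves the
  // alignment missing: the builder derives the ABI alignment for Ty
  // from the module's DataLayout, as instcombine would have done later.
  LoadInst *emitLoad(IRBuilder<> &B, Type *Ty, Value *Ptr) {
    return B.CreateLoad(Ty, Ptr);
  }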
There's a minor risk that the way we're retrieving the datalayout
could break out-of-tree code, but I don't think that's likely.
This is the last in a series of patches, so most of the necessary
changes have already been merged.
Differential Revision: https://reviews.llvm.org/D77454
We already form PMULH when the shift is truncated. But we can
also do it from just a shift by extending the result.
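The kind of pattern this targets, sketched in scalar C++ (the real
combine operates on vectors):

  #include <cstdint>

  // High half of a widened multiply consumed at the wide width: the
  // shift result is used as i32 directly, with no truncate back to i16,
  // so it can become PMULH plus a sign-extension of the result.
  int32_t mulhUse(int16_t a, int16_t b) {
    return ((int32_t)a * (int32_t)b) >> 16;
  }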
Unfortunately, I get regressions if I try to replace the truncate
combine with this as we turn the truncate into a more complicated
sequence first. Then we are unable to combine that sequence with
the extend produced at the end of this combine.
Differential Revision: https://reviews.llvm.org/D79682
Enable clausing of memory loads on gfx10 by adding a new pass to insert
the s_clause instructions that mark the start of each hard clause.
Differential Revision: https://reviews.llvm.org/D79792
When I moved combineLoopMAddPattern to an IR pass, I didn't match the
behavior of canReduceVMulWidth that was used in the SelectionDAG
version. canReduceVMulWidth just calls ComputeNumSignBits and assumes a
truncate is always profitable. The version I put in IR just looks for
constants and zext/sext, though I neglected to check the number of bits
in the input of the zext/sext.
This patch adds a check for the number of input bits to the sext/zext,
and it adds a special case for add/sub with zext/sext inputs, which can
be handled by combineTruncatedArithmetic. Matching the original
SelectionDAG behavior appears to cause a regression in some cases if the
truncate isn't removed and becomes pack and permq, so enabling only this
specific case is the conservative approach.
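For reference, the loop idiom involved, sketched in scalar form
(illustrative):

  #include <cstdint>

  // 16-bit inputs are sign-extended, multiplied and accumulated at 32
  // bits; the new check verifies the sext/zext inputs really are narrow
  // enough before forming the multiply-add.
  int32_t dot(const int16_t *a, const int16_t *b, int n) {
    int32_t sum = 0;
    for (int i = 0; i < n; ++i)
      sum += (int32_t)a[i] * (int32_t)b[i];
    return sum;
  }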
Differential Revision: https://reviews.llvm.org/D79909
This patch implements the final bits of CMSE code generation:
* emit special linker symbols
* restrict parameter passing so it does not use memory
* emit BXNS and BLXNS instructions for returns from non-secure entry
functions, and for non-secure function calls, respectively
* emit code to save/restore secure floating-point state around calls
to non-secure functions
* emit code to save/restore non-secure floating-point state upon
entry to a non-secure entry function, and upon return to non-secure
state
* emit code to clobber registers not used for arguments and returns
when switching to non-secure state
Patch by Momchil Velikov, Bradley Smith, Javed Absar, David Green,
possibly others.
Differential Revision: https://reviews.llvm.org/D76518
SplitCSR was only supported for functions with the CXX_FAST_TLS calling
convention. Clang only emits that calling convention for Darwin, which
is no longer supported by the PowerPC backend. Another IR producer could
use the calling convention, but considering that the calling convention
is meant to be an optimization and the codegen for SplitCSR can be
atrocious on Power (see the modified lit test), it is best to remove it
and codegen CXX_FAST_TLS the same as the C calling convention.
Differential Revision: https://reviews.llvm.org/D79018
An SUnit represents a MachineInstr in post-regalloc scheduling but an
SDNode in pre-regalloc scheduling. When -enable-hexagon-sdnode-sched is
passed to the Hexagon backend at -O1 and above, this can cause an
assertion failure.
Fixes PR45194.
Differential Revision: https://reviews.llvm.org/D76134
Expands on the enablement of the shouldSinkOperands() TLI hook in:
D79718
The last codegen/IR test diff shows what I suspected could happen - we were
sinking all splat shift operands into a loop. But that's not what we want in
general; we only want to sink the *shift amount* operand if it is a splat.
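A sketch of the distinction (illustrative):

  // When this loop is vectorized, amt becomes a splat. Only that splat
  // shift-amount operand should be sunk into the loop; the values being
  // shifted should not be.
  void shlAll(int *p, int n, int amt) {
    for (int i = 0; i < n; ++i)
      p[i] <<= amt;
  }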
Differential Revision: https://reviews.llvm.org/D79827
This is the first checkin to support Marvell ThunderX3T110.
Initial definition of the micro-ops of the instructions in ThunderX3T110
is included.
Differential Revision: https://reviews.llvm.org/D78129
Currently the AsmPrinter cannot emit some floating point constant
expressions in global initializers. Avoid generating them.
Reviewers: dmgreen, t.p.northover, arsenm, efriedma, Gerolf
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D79865
SelectMOVRELOffset prevents peeling of a constant from an index
if the final base could be negative. isBaseWithConstantOffset() succeeds
if a value is an "add" or "or" operator. In the case of "or" it must
be an add-like "or", which never changes the sign of the sum given a
non-negative offset, i.e. we can safely allow peeling if the operator is
an "or".
Differential Revision: https://reviews.llvm.org/D79898
Under MVE a vdup will always take a GPR, not a floating-point
value. During DAG combine we convert the type via a bitcast to an
integer in an attempt to fold the bitcast into other instructions. This
is OK, but it only works inside the same basic block. To do the same
trick across a basic block boundary we need to convert the type in
codegenprepare, before the splat is sunk into the loop.
This adds a convertSplatType function to codegenprepare to do that,
putting bitcasts around the splat to force the type to an integer. There
is then some adjustment to the code in shouldSinkOperands to handle the
extra bitcasts.
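A rough sketch of the idea (not the in-tree implementation; the name
and signature are illustrative):

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Rewrite a floating-point splat as bitcast -> integer splat ->
  // bitcast, so that the integer form is what gets sunk across the
  // block boundary.
  Value *splatAsInt(IRBuilder<> &B, Value *Scalar, unsigned NumElts) {
    unsigned Bits = Scalar->getType()->getPrimitiveSizeInBits();
    Value *IntScalar = B.CreateBitCast(Scalar, B.getIntNTy(Bits));
    Value *IntSplat = B.CreateVectorSplat(NumElts, IntScalar);
    return B.CreateBitCast(IntSplat,
                           VectorType::get(Scalar->getType(), NumElts));
  }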
Differential Revision: https://reviews.llvm.org/D78728
Similar to fmul/fadd, we can sink a splat into a loop containing an fma
in order to use more register instruction variants. For that there are
also adjustments to the sinking code to handle more than two arguments.
Differential Revision: https://reviews.llvm.org/D78386
This patch adds a new TTI hook to allow targets to tell LSR that
a chain including some instruction is already profitable and
should not be optimized. This patch also adds an implementation
of this TTI hook for ARM so LSR doesn't optimize chains that include
the VCTP intrinsic.
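Roughly what the ARM implementation of such a hook looks like (a
sketch; the hook and intrinsic names are my reconstruction, not
verbatim from the patch):

  #include "llvm/IR/IntrinsicInst.h"
  #include "llvm/IR/IntrinsicsARM.h"
  using namespace llvm;

  // Report the MVE VCTP intrinsics as already-profitable chain
  // elements so LSR leaves chains containing them alone.
  bool isProfitableLSRChainElement(Instruction *I) {
    if (auto *II = dyn_cast<IntrinsicInst>(I))
      switch (II->getIntrinsicID()) {
      case Intrinsic::arm_mve_vctp8:
      case Intrinsic::arm_mve_vctp16:
      case Intrinsic::arm_mve_vctp32:
      case Intrinsic::arm_mve_vctp64:
        return true;
      default:
        break;
      }
    return false;
  }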
Differential Revision: https://reviews.llvm.org/D79418
Summary:
In the assembler or inline assembler, attempting to use an invalid
fixup type crashes with a segmentation fault:

  __attribute__((naked))
  void foo(void) {
    __asm__("mov r9, :lower16:bar(prel31)");
  }

This should give a proper error message when building for ARM or Thumb,
bringing it in line with AArch64.
This fixes all 8 instances of llvm_unreachable("Unsupported Modifier");
in ARM/MCTargetDesc/ARMELFObjectWriter.cpp.
A test is provided for each instance.
Reviewers: llvm-commits, MarkMurrayARM
Reviewed By: MarkMurrayARM
Subscribers: kristof.beyls, hiraditya, danielkiss
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79782
Change-Id: I6971ba37f129cc453568fe71514ccb2ac9d16831
xsnegdp, xsabsdp and xsnabsdp can be used to operate on f32 operands.
This patch adds the missing patterns, since we prefer VSX instructions
when available.
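For example (illustrative), a simple f32 negation can now be selected
to xsnegdp when VSX is available:

  float neg(float x) { return -x; }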
Reviewed By: steven.zhang
Differential Revision: https://reviews.llvm.org/D75344
The legalizer should respect both command-line options and SDNode-level
fast-math flags.
Also, this patch propagates other flags during custom simplification.
Reviewed By: steven.zhang
Differential Revision: https://reviews.llvm.org/D79074
Summary:
The ppc-early-ret pass uses addReg() to add operands to the new
instruction, which cannot preserve the flags of the old operand. This
caused machine verification failures.
This patch uses add() instead of addReg().
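The difference, sketched with MachineInstrBuilder (illustrative
fragment):

  #include "llvm/CodeGen/MachineInstrBuilder.h"
  using namespace llvm;

  void copyFirstOperand(MachineInstrBuilder &MIB, MachineInstr &OldMI) {
    const MachineOperand &MO = OldMI.getOperand(0);
    // MIB.addReg(MO.getReg()) would rebuild the operand and drop flags
    // such as kill/undef; add() copies the operand wholesale.
    MIB.add(MO);
  }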
Reviewed By: steven.zhang
Differential Revision: https://reviews.llvm.org/D77997
We can produce such vectors in the Promote Alloca pass, but we are
unable to use movrel to operate on them and instead lower them via
scratch. Making the type legal makes the SI_INDIRECT patterns work.
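The kind of source that produces such a vector, sketched in C++
(illustrative):

  // A dynamically indexed private array that Promote Alloca can turn
  // into a vector such as <8 x double>; the dynamic access then needs
  // SI_INDIRECT rather than lowering through scratch.
  double pick(const double *src, int i) {
    double a[8];
    for (int j = 0; j < 8; ++j)
      a[j] = src[j] * 2.0;
    return a[i & 7];
  }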
There is more work to do in subsequent changes:
1. We initialize m0 twice to access each dword. It should be possible
to do it only once and increment the base register number instead.
2. We also need v16i64/v16f64, but these first need to be added to
tablegen.
Differential Revision: https://reviews.llvm.org/D79808
SDAG suffers when it can't see that a funnel operand is a splat value
(due to single-basic-block visibility), so invert the normal loop
hoisting rules to move a splat op closer to its use.
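The kind of code affected, sketched in C++ (illustrative): a rotate by
a loop-invariant amount, which becomes a funnel shift whose amount
operand is a splat once the loop is vectorized.

  #include <cstdint>

  uint32_t rotAll(const uint32_t *p, int n, uint32_t amt) {
    uint32_t x = 0;
    for (int i = 0; i < n; ++i)
      // the safe rotate idiom, canonicalized to the fshl intrinsic
      x ^= (p[i] << (amt & 31)) | (p[i] >> ((32 - amt) & 31));
    return x;
  }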
This would be part 1 of an enhancement similar to D63233.
This is needed to re-fix PR37426:
https://bugs.llvm.org/show_bug.cgi?id=37426
...because we got better at canonicalizing IR to funnel shift intrinsics.
The existing CGP code for shift opcodes is likely overstepping what it was
intended to do, so that will be fixed in a follow-up.
Differential Revision: https://reviews.llvm.org/D79718
Summary:
The SPE doesn't have an 'fma' instruction, so the intrinsic becomes a
libcall. (It really should become an expansion to two instructions, but
for some reason the compiler doesn't think that's as optimal as a
branch.) Since this lowering is done after CTR is allocated for loops,
tell the optimizer that CTR may be used in this case. This prevents an
"Invalid PPC CTR loop!" assertion when an fma() function call is used
in a C/C++ file and clang converts it into an intrinsic.
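The trigger, sketched in source form (illustrative):

  #include <math.h>

  // clang converts this call into the llvm.fma intrinsic, which the
  // SPE lowering turns into a libcall after CTR loops have already
  // been formed.
  double f(double a, double b, double c) { return fma(a, b, c); }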
Reviewed By: shchenz
Differential Revision: https://reviews.llvm.org/D78668
Summary:
This patch refactors the handling of VarArgs in
X86TargetLowering::LowerFormalArguments.
The refactoring was requested while reviewing D69372. Code related to
varargs handling is removed from
X86TargetLowering::LowerFormalArguments and split into smaller
routines.
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D74794
This patch folds redundant load immediates into a zero operand for
instructions which recognise it as the value zero and not as a register.
If the load immediate is no longer in use, it is then deleted.
This is already done in earlier passes, but the ppc-mi-peephole allows
for a more general implementation.
Differential Revision: https://reviews.llvm.org/D69168
We have a couple of main strategies for legalizing MULH:
- If the vXi16 type is legal, extend to do the full i16 multiply
  and then shift and truncate the results.
- Use unpcks to split each 128-bit lane into high and low halves.
For signed we have an extra case to split a v32i8 to v16i8 and then
use the extending to v16i16 strategy.
This patch proposes to use the unpck strategy instead, which is
what we already do for unsigned.
This seems to be 1 instruction shorter when the RHS is constant,
like the idiv case. It's 1 instruction longer for the smulo case,
but we're trading cross-lane shuffles for in-lane shuffles and a
shift.
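For reference, the operation being legalized, in scalar form (the real
code operates on vectors):

  #include <cstdint>

  // Signed i8 high multiply: widen, multiply, take the high 8 bits.
  int8_t mulhI8(int8_t a, int8_t b) {
    return (int8_t)(((int16_t)a * (int16_t)b) >> 8);
  }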
Differential Revision: https://reviews.llvm.org/D79652
Summary:
In 2e24219d3c, a number of ARM pcrel fixups were resolved at assembly
time, to solve PR44929. This only covered little-endian ARM however, so
add similar fixups for big-endian ARM. Also extend the test case to
cover big-endian ARM.
Reviewers: hans, psmith, MaskRay
Reviewed By: psmith, MaskRay
Subscribers: kristof.beyls, hiraditya, danielkiss, emaste, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79774
Summary:
As proposed in https://github.com/WebAssembly/simd/pull/122. Since
these instructions are not yet merged to the SIMD spec proposal, this
patch makes them entirely opt-in by surfacing them only through LLVM
intrinsics and clang builtins. If these instructions are made
official, these intrinsics and builtins should be replaced with simple
instruction patterns.
Reviewers: aheejin
Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D79742