Commit Graph

35057 Commits

Author SHA1 Message Date
Jay Foad 2dc3d1b313 [AMDGPU] Add some missing check prefixes 2020-07-17 12:56:29 +01:00
Sanjay Patel 7598ad3ead [x86] add tests for FMA with FMF; NFC 2020-07-17 07:52:29 -04:00
Sam Tebbs 6c348e4067 [HWLoops] Stop converting to a while loop when it would be unsafe to
There were cases where a do-while loop would be converted to a while
loop before finding out that it would be unsafe to expand the SCEV in
that situation, at which point hardware loop conversion bailed out.

This patch checks if it would be unsafe to expand the SCEV and if so stops converting the do-while into a while, allowing conversion to a hardware loop.

Differential Revision: https://reviews.llvm.org/D83953
2020-07-17 11:47:08 +01:00
Jay Foad 760af7a074 [AMDGPU] Avoid splitting FLAT offsets in unsafe ways
As explained in the comment:

// For a FLAT instruction the hardware decides whether to access
// global/scratch/shared memory based on the high bits of vaddr,
// ignoring the offset field, so we have to ensure that when we add
// remainder to vaddr it still points into the same underlying object.
// The easiest way to do that is to make sure that we split the offset
// into two pieces that are both >= 0 or both <= 0.

In particular FLAT (as opposed to SCRATCH and GLOBAL) instructions have
an unsigned immediate offset field, so we can't use it to help split a
negative offset.
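
A minimal sketch of the splitting rule, with illustrative names (not the
actual patch code):

```
#include <algorithm>
#include <cstdint>

// Split Offset into an immediate field value and a remainder such that both
// pieces have the same sign, so vaddr + Remainder still points into the same
// object. FLAT's immediate offset field is unsigned, so a negative offset
// cannot be split at all.
int64_t splitFlatOffset(int64_t Offset, int64_t MaxImm, int64_t &Remainder) {
  int64_t Imm = 0;
  if (Offset >= 0)
    Imm = std::min(Offset, MaxImm); // both pieces >= 0
  // else leave Imm == 0, so both pieces are <= 0
  Remainder = Offset - Imm;
  return Imm;
}
```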

Differential Revision: https://reviews.llvm.org/D83394
2020-07-17 11:44:10 +01:00
Jay Foad 62fd7f767c [MachineScheduler] Fix the TopDepth/BotHeightReduce latency heuristics
tryLatency compares two sched candidates. For the top zone it prefers
the one with lesser depth, but only if that depth is greater than the
total latency of the instructions we've already scheduled -- otherwise
its latency would be hidden and there would be no stall.

Unfortunately it only tests the depth of one of the candidates. This can
lead to situations where the TopDepthReduce heuristic does not kick in,
but a lower priority heuristic chooses the other candidate, whose depth
*is* greater than the already scheduled latency, which causes a stall.

The fix is to apply the heuristic if the depth of *either* candidate is
greater than the already scheduled latency.

All this also applies to the BotHeightReduce heuristic in the bottom
zone.
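
A sketch of the top-zone change (candidate and zone names assumed to follow
the surrounding MachineScheduler code; not a verbatim diff):

```
// Before (sketch): only one candidate's depth was tested.
//   if (Cand.SU->getDepth() > Zone.getScheduledLatency()) ...
// After: apply the heuristic if either candidate's depth exceeds the
// latency of the instructions already scheduled in the zone.
if (std::max(TryCand.SU->getDepth(), Cand.SU->getDepth()) >
    Zone.getScheduledLatency()) {
  if (tryLess(TryCand.SU->getDepth(), Cand.SU->getDepth(),
              TryCand, Cand, GenericSchedulerBase::TopDepthReduce))
    return true;
}
```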

Differential Revision: https://reviews.llvm.org/D72392
2020-07-17 11:02:13 +01:00
Florian Hahn e297006d6f [ScheduleDAG] Move DBG_VALUEs after first term forward.
MBBs are not allowed to have non-terminator instructions after the first
terminator. Currently in some cases (see the modified test),
EmitSchedule can add DBG_VALUEs after the last terminator, for example
when referring a debug value that gets folded into a TCRETURN
instruction on ARM.

This patch updates EmitSchedule to move inserted DBG_VALUEs just before
the first terminator. I am not sure if there are terminators that produce
values which can in turn be used by a DBG_VALUE. In that case, moving the
DBG_VALUE might result in referencing an undefined register. But in any
case, it seems like there is currently no way to insert a proper DBG_VALUE
for such registers anyway.

Alternatively it might make sense to just remove those extra DBG_VALUES.

I am not too familiar with the details of debug info in the backend and
would appreciate any suggestions on how to address the issue in the best
possible way.

Reviewers: vsk, aprantl, jpaquette, efriedma, paquette

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D83561
2020-07-17 10:27:43 +01:00
Kai Luo 817767abee [PowerPC] Precommit test case for PR46759. NFC. 2020-07-17 08:41:15 +00:00
Simon Wallis 3e0ccf9a90 [ARM] halfword store hits llvm_unreachable with big-endian
Summary:
[ARM] halfword store hits llvm_unreachable with big-endian

Provide missing case in getFixupKindContainerSizeBytes().

This stops execution from reaching llvm_unreachable("Unknown fixup kind!").

D83947

Reviewers: olista01, ostannard

Reviewed By: ostannard

Subscribers: ostannard, kristof.beyls, hiraditya, danielkiss, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83947

Change-Id: I598aa1fb51fd1c6f424c557c85d6df6d1958bc62
2020-07-17 08:56:44 +01:00
hsmahesha 4905536086 Revert "[AMDGPU/MemOpsCluster] Implement new heuristic for computing max mem ops cluster size"
This reverts commit cc9d693856.
2020-07-17 12:20:37 +05:30
Craig Topper 6bba95831e [X86] Change the scheduler model for 'pentium4' to SandyBridgeModel.
I meant to do this in D83913, but missed it while updating the
feature list.

Interestingly I think this is disabling the postRA scheduler. But
it does match our default 64-bit behavior.

Reviewed By: echristo

Differential Revision: https://reviews.llvm.org/D83996
2020-07-16 22:04:29 -07:00
Carl Ritson 3a18665748 [AMDGPU] Translate s_and/s_andn2 to s_mov in vcc optimisation
When SCC is dead but VCC is required, replace s_and / s_andn2
with an s_mov into VCC when the mask value is 0 or -1.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D83850
2020-07-17 11:48:57 +09:00
Wouter van Oortmerssen cc1b9b680f [WebAssembly] 64-bit (function) pointer fixes.
This accounts for the fact that Wasm function indices are 32-bit, while in wasm64 we want uniform 64-bit pointers.
Includes reloc types for 64-bit table indices.

Differential Revision: https://reviews.llvm.org/D83729
2020-07-16 14:10:22 -07:00
Craig Topper 5408024fa8 [X86] Move integer hadd/hsub formation into a helper function shared by combineAdd and combineSub.
There was a lot of duplicate code here for checking the VT and
subtarget. Moving it into a helper avoids that.

It also fixes a bug where combineAdd reused Op0/Op1 after a call
to isHorizontalBinOp that may have changed them. The new helper function
has its own local versions of Op0/Op1 that aren't shared by other
code.

Fixes PR46455.

Reviewed By: spatel, bkramer

Differential Revision: https://reviews.llvm.org/D83971
2020-07-16 13:27:27 -07:00
Matt Arsenault 6c5b635e95 AMDGPU: Add a few more missing test for AGPR tuple copying 2020-07-16 15:53:11 -04:00
Craig Topper ad171d24b9 [X86] Change the tuning settings for pentium4 to be more modern since it's the default 32-bit cpu in clang
Alternative to D83897. I believe the big change here is that I removed the "slow unaligned memory 16" feature.

The downside is that it may adversely affect tuning if someone explicitly targets -march=pentium4 and expects pentium4-tuned code. Of course, pentium4 is so old that our default behavior with the previous settings may not have been the best either.

Reviewed By: echristo, RKSimon

Differential Revision: https://reviews.llvm.org/D83913
2020-07-16 12:51:25 -07:00
Matt Arsenault 10382285ac AMDGPU: Add missing tests for copyPhysReg AGPR tuples 2020-07-16 15:27:57 -04:00
Thomas Lively ecb2e5bcd7 [WebAssembly] Implement v128.select
Although the SIMD spec proposal does not specifically include a
select instruction, the select instruction in MVP WebAssembly is
polymorphic over the selected types, so it is able to work on v128
values when they are enabled. This patch introduces a new variant of
the select instruction for each legal vector type. Additional ISel
patterns are adapted from the SELECT_I32 and SELECT_I64 patterns.

Depends on D83736.

Differential Revision: https://reviews.llvm.org/D83737
2020-07-16 11:37:25 -07:00
Thomas Lively f7868f87ac [WebAssembly] Autogenerate tests for simd-select.ll
Updating the simd-select.ll tests manually with consistent named
regexps for the register numbers was taking more time than it was
worth, so this patch updates that test file to have autogenerated
output. This is not a significant readability regression because the
tests in that file are all very small.

Depends on D83734.

Differential Revision: https://reviews.llvm.org/D83736
2020-07-16 11:19:09 -07:00
Thomas Lively f0f9787646 [WebAssembly] Lower vselect to v128.bitselect
We were previously expanding vselect and matching on the expansion to
generate bitselects, but in some cases the expansion would be further
combined and a bitselect would not get generated. This patch improves
codegen in those cases by legalizing vselect and lowering it to
v128.bitselect. The old pattern that matches the expansion is still
useful for lowering IR that already uses the expansion rather than a
select operation.

Differential Revision: https://reviews.llvm.org/D83734
2020-07-16 11:11:19 -07:00
Craig Topper 9adf7461f7 [X86] Add test case for PR46455. 2020-07-16 11:06:55 -07:00
Matt Arsenault 9d3e56e2ee DAG: Try scalarizing when expanding saturating add/sub
In an upcoming AMDGPU patch, the scalar cases will be legal and vector
ops should be scalarized, rather than producing a long sequence of
vector ops which will also need to be scalarized.

Use a lazy heuristic that seems to work and improves the thumb2 MVE
test.
2020-07-16 14:05:16 -04:00
Denis Antrushin ef658ebd62 MIR Statepoint refactoring. Part 1: Basic MI level changes.
Basic support for variadic-def MIR Statepoint:
- Change TableGen STATEPOINT description to variadic out list
  (For self-documentation purposes; by itself it does not affect
  code generation in any way).
- Update StatepointOpers helper class to handle variadic defs.
- Update MachineVerifier to properly handle them, too.

With this change, the new Statepoint instruction can be passed through the
backend (excluding ISel) without errors.

Full change set is available at D81603.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D81645
2020-07-17 00:57:21 +07:00
Matt Arsenault 79f67cae91 AMDGPU: Rename add/sub with carry out instructions
The hardware has created a real mess in the naming for add/sub, which
have been renamed basically every generation. Switch the carry out
pseudos to have the gfx9/gfx10 names. We were using the original SI/CI
v_add_i32/v_sub_i32 names. Later targets reintroduced these names as
carryless instructions with a saturating clamp bit, which we do not
define. Do this rename so we can unambiguously add these missing
instructions.

The carry-in versions should also be renamed, but at least those had a
consistent _u32 name to begin with. The 16-bit instructions were also
renamed, but aren't ambiguous.

This does regress assembler error message quality in some cases. In
mismatched wave32/wave64 situations, this will switch from
"unsupported instruction" to "invalid operand", with the error
pointing at the wrong position. I couldn't quite follow how the
assembler selects these, but the previous behavior seemed accidental
to me. It looked like there was a partial attempt to handle this which
was never completed (i.e. there is an AMDGPUOperand::isBoolReg but it
isn't used for anything).
2020-07-16 13:16:30 -04:00
Petar Avramovic 6850033ca6 AMDGPU/GlobalISel: Legalize s64->s16 G_SITOFP/G_UITOFP
Add widenScalar for TypeIdx == 0 for G_SITOFP/G_UITOFP.
Legalize, using widenScalar, as s64->s32 G_SITOFP/G_UITOFP
followed by s32->s16 G_FPTRUNC.

Differential Revision: https://reviews.llvm.org/D83880
2020-07-16 16:31:57 +02:00
Jay Foad fc2317f0f5 [PowerPC] Precommit 64-bit funnel shift test cases 2020-07-16 15:20:52 +01:00
James Y Knight 60433c63ac Remove TwoAddressInstructionPass::sink3AddrInstruction.
This function has a bug which will incorrectly reschedule instructions
after an INLINEASM_BR (which can branch). (The bug may also allow
scheduling past a throwing-CALL, I'm not certain.)

I could fix that bug, but, as the removed FIXME notes, it's better to
attempt rescheduling before converting to 3-addr form, as that may
remove the need to convert in the first place. In fact, the code to do
such reordering was added to this pass only a few months later, in
2011, via the addition of the function rescheduleMIBelowKill. That
code does not contain the same bug.

The removal of the sink3AddrInstruction function is not a no-op: in
some cases it would move an instruction post-conversion, when
rescheduleMIBelowKill would not move the instruction pre-conversion.
However, this does not appear to be important: the machine instruction
scheduler can reorder the after-conversion instructions, in any case.

This patch fixes a kernel panic in 4.4 LTS x86_64 Linux kernels, when
built with clang after 4b0aa5724f.

Link: https://github.com/ClangBuiltLinux/linux/issues/1085

Differential Revision: https://reviews.llvm.org/D83708
2020-07-16 10:02:52 -04:00
Jay Foad 482753fe9c [PowerPC] Use CHECK-LABEL for better diagnostics 2020-07-16 13:41:29 +01:00
Paul Walker 509351d768 [SVE] Add lowering for scalable vector fadd, fdiv, fmul and fsub operations.
Lower the operations to predicated variants.  This is prep work
required for fixed length code generation but also fixes a bug
whereby these operations fail selection when "unpacked" vector
types (e.g. MVT::nxv2f32) are used.

This patch also adds the missing "unpacked" patterns for FMA.

Differential Revision: https://reviews.llvm.org/D83765
2020-07-16 11:31:35 +00:00
Pavel Iliin b9a6fb6428 [ARM] VBIT/VBIF support added.
Vector bitwise selects are matched by the pseudo VBSP instruction
and expanded to VBSL/VBIT/VBIF after register allocation,
depending on the operands' registers, to minimize extra copies.
2020-07-16 11:25:53 +01:00
David Green 146d35b6ee [ARM] CSEL generation
This adds a peephole optimisation to turn a t2MOVccr that could not be
folded into any other instruction into a CSEL on 8.1-m. The t2MOVccr
would usually be expanded into a conditional mov, that becomes an IT;
MOV pair. We can instead generate a CSEL instruction, which can
potentially be smaller and allows better register allocation freedom,
which can help reduce codesize. Performance is more variable and may
depend on the microarchitecture details, but initial results look good.
If we need to control this per-cpu, we can add a subtarget feature as we
need it.

Original patch by David Penry.

Differential Revision: https://reviews.llvm.org/D83566
2020-07-16 11:10:53 +01:00
Kerry McLaughlin 2762da0a16 [SVE][CodeGen] Legalisation of masked loads and stores
Summary:
This patch modifies IncrementMemoryAddress to use a vscale
when calculating the new address if the data type is scalable.

Also adds tablegen patterns which match an extract_subvector
of a legal predicate type with zip1/zip2 instructions.
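
A sketch of the scalable-aware increment (a fragment under assumed names;
DataVT is the data type being accessed, AddrVT the pointer type):

```
// Scale the per-access increment by vscale when the data type is scalable;
// otherwise a plain constant byte count suffices.
SDValue Increment;
if (DataVT.isScalableVector())
  Increment = DAG.getVScale(DL, AddrVT,
                            APInt(AddrVT.getScalarSizeInBits(),
                                  DataVT.getStoreSize().getKnownMinSize()));
else
  Increment = DAG.getConstant(DataVT.getStoreSize(), DL, AddrVT);
Addr = DAG.getNode(ISD::ADD, DL, AddrVT, Addr, Increment);
```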

Reviewers: sdesmalen, efriedma, david-arm

Reviewed By: efriedma, david-arm

Subscribers: tschuett, hiraditya, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83137
2020-07-16 10:55:45 +01:00
Petar Avramovic 5658002b80 AMDGPU/GlobalISel: Select G_FREEZE
Select G_FREEZE in the same way that COPY is selected.

Differential Revision: https://reviews.llvm.org/D83031
2020-07-16 11:10:48 +02:00
Amy Kwan fc55308628 [PowerPC][Power10] Fix VINS* (vector insert byte/half/word) instructions to have i32 arguments.
Previously, the vins* intrinsic was incorrectly defined to have its second and
third arguments as i64. This patch fixes the second and third
arguments of the vins* instruction and intrinsic to be i32 instead.

Differential Revision: https://reviews.llvm.org/D83497
2020-07-16 00:30:24 -05:00
Carl Ritson 5bf2a9dd40 [AMDGPU] Update VMEM scalar write hazard mitigation sequence
Using s_waitcnt_depctr 0xffe3 is potentially faster than v_nop.

Reviewed By: rampitec, foad

Differential Revision: https://reviews.llvm.org/D83872
2020-07-16 11:37:45 +09:00
dfukalov 7520393842 [NFC] Fixed typo in tests parameters
Summary:
llc reports `fp32-denormals` is not recognized. I guess it was intended to be
`-denormal-fp-math-f32={preserve-sign|ieee} -mattr=+mad-mac-f32-insts`

Reviewers: rampitec

Reviewed By: rampitec

Subscribers: jvesely, nhaehnle, llvm-commits, kerbowa

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83883
2020-07-15 22:09:01 +03:00
Hiroshi Yamauchi f233b92f92 [PGO][PGSO] Add profile guided size optimization to LegalizeDAG.
Reviewers: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83333
2020-07-15 10:03:38 -07:00
lewis-revill c9c955ada8 [RISCV] Add matching of codegen patterns to RISCV Bit Manipulation Zbt asm instructions
This patch provides optimization of bit manipulation operations by
enabling the +experimental-b target feature.
It adds matching of single block patterns of instructions to specific
bit-manip instructions from the ternary subset (zbt subextension) of the
experimental B extension of RISC-V.
It also adds the corresponding codegen tests.

This patch is based on Claire Wolf's proposal for the bit manipulation
extension of RISCV:
https://github.com/riscv/riscv-bitmanip/blob/master/bitmanip-0.92.pdf
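
For illustration (example is mine, not from the patch), one of the simplest
patterns this enables is a branchless select, which can map onto the Zbt
cmov instruction:

```
// Compiled for RISC-V with the experimental B extension enabled, this
// single-block select can be matched to cmov instead of a branch sequence.
int select_val(int cond, int a, int b) { return cond ? a : b; }
```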

Differential Revision: https://reviews.llvm.org/D79875
2020-07-15 12:19:34 +01:00
lewis-revill d4be33374c [RISCV] Add matching of codegen patterns to RISCV Bit Manipulation Zbs asm instructions
This patch provides optimization of bit manipulation operations by
enabling the +experimental-b target feature.
It adds matching of single block patterns of instructions to specific
bit-manip instructions from the single-bit subset (zbs subextension) of
the experimental B extension of RISC-V.
It also adds the corresponding codegen tests.

This patch is based on Claire Wolf's proposal for the bit manipulation
extension of RISCV:
https://github.com/riscv/riscv-bitmanip/blob/master/bitmanip-0.92.pdf

Differential Revision: https://reviews.llvm.org/D79874
2020-07-15 12:19:34 +01:00
lewis-revill 6144f0a1e5 [RISCV] Add matching of codegen patterns to RISCV Bit Manipulation Zbbp asm instructions
This patch provides optimization of bit manipulation operations by
enabling the +experimental-b target feature.
It adds matching of single block patterns of instructions to specific
bit-manip instructions belonging to both the permutation and the base
subsets of the experimental B extension of RISC-V.
It also adds the corresponding codegen tests.

This patch is based on Claire Wolf's proposal for the bit manipulation
extension of RISCV:
https://github.com/riscv/riscv-bitmanip/blob/master/bitmanip-0.92.pdf

Differential Revision: https://reviews.llvm.org/D79873
2020-07-15 12:19:34 +01:00
lewis-revill 31b52b4345 [RISCV] Add matching of codegen patterns to RISCV Bit Manipulation Zbp asm instructions
This patch provides optimization of bit manipulation operations by
enabling the +experimental-b target feature.
It adds matching of single block patterns of instructions to specific
bit-manip instructions from the permutation subset (zbp subextension) of
the experimental B extension of RISC-V.
It also adds the corresponding codegen tests.

This patch is based on Claire Wolf's proposal for the bit manipulation
extension of RISCV:
https://github.com/riscv/riscv-bitmanip/blob/master/bitmanip-0.92.pdf

Differential Revision: https://reviews.llvm.org/D79871
2020-07-15 12:19:34 +01:00
lewis-revill e2692f0ee7 [RISCV] Add matching of codegen patterns to RISCV Bit Manipulation Zbb asm instructions
This patch provides optimization of bit manipulation operations by
enabling the +experimental-b target feature.
It adds matching of single block patterns of instructions to specific
bit-manip instructions from the base subset (zbb subextension) of the
experimental B extension of RISC-V.
It also adds the corresponding codegen tests.

This patch is based on Claire Wolf's proposal for the bit manipulation
extension of RISCV:
https://github.com/riscv/riscv-bitmanip/blob/master/bitmanip-0.92.pdf

Differential Revision: https://reviews.llvm.org/D79870
2020-07-15 12:19:34 +01:00
Roger Ferrer Ibanez 14bc5e149d [DAGCombiner] Rebuild (setcc x, y, ==) from (xor (xor x, y), 1)
The existing code already considered this case. Unfortunately a typo in
the condition prevented it from triggering. Also the existing code, had
it run, forgot to do the folding.

This fixes PR42876.
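
A sketch of the intended fold for boolean values (names follow DAGCombiner
conventions but are assumed):

```
// For i1 x and y: (xor x, y) is x != y, and xor'ing with 1 negates it,
// so (xor (xor x, y), 1) is exactly (setcc x, y, eq).
if (VT == MVT::i1 && N1C && N1C->isOne() && N0.getOpcode() == ISD::XOR)
  return DAG.getSetCC(SDLoc(N), VT, N0.getOperand(0), N0.getOperand(1),
                      ISD::SETEQ);
```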

Differential Revision: https://reviews.llvm.org/D65802
2020-07-15 07:34:22 +00:00
Roger Ferrer Ibanez 2b6215f188 [NFC] Add tests for boolean comparisons
They currently show that the not equal case may be improved.

See PR42876

Differential Revision: https://reviews.llvm.org/D65801
2020-07-15 07:33:43 +00:00
Carl Ritson 674226126d [AMDGPU] Apply pre-emit s_cbranch_vcc optimization to more patterns
Add handling of s_andn2 and mask of 0.
This eliminates redundant instructions from uniform control flow.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D83641
2020-07-15 11:02:35 +09:00
Krzysztof Pszeniczny c3e6555616 Call Frame Information (CFI) Handling for Basic Block Sections
This patch handles CFI with basic block sections. Unlike DebugInfo, CFI does
not support ranges. The DWARF standard explicitly requires emitting separate
CFI Frame Descriptor Entries for each contiguous fragment of a function. Thus,
the CFI information for all callee-saved registers (possibly including the
frame pointer, if necessary) has to be emitted along with redefining the
Call Frame Address (CFA), viz. where the current frame starts.

CFI directives are emitted in FDEs in the object file with a low_pc, high_pc
specification. So, a single FDE must point to a contiguous code region unlike
debug info which has the support for ranges. This is what complicates CFI for
basic block sections.

Now, what happens when we start placing individual basic blocks in unique
sections:

* Basic block sections allow the linker to randomly reorder basic blocks in the
address space such that a given basic block can become non-contiguous with the
original function.
* The different basic block sections can no longer share the cfi_startproc and
cfi_endproc directives. So, each basic block section should emit this
independently.
* Each (cfi_startproc, cfi_endproc) directive will result in a new FDE that
caters to that basic block section.
* Now, this basic block section needs to duplicate the information from the
entry block to compute the CFA as it is an independent entity. It cannot refer
to the FDE of the original function and hence must duplicate all the
information that is needed to compute the CFA on its own.
* We are working on a de-duplication patch that can share common information in
FDEs in a CIE (Common Information Entry) and we will present this as a follow up
patch. This can significantly reduce the duplication overhead and is
particularly useful when several basic block sections are created.
* The CFI directives are emitted similarly for registers that are pushed onto
the stack, like callee saved registers in the prologue. There are cfi
directives that emit how to retrieve the value of the register at that point
when the push happened. This has to be duplicated too in a basic block that is
floated as a separate section.

Differential Revision: https://reviews.llvm.org/D79978
2020-07-14 12:54:12 -07:00
Roger Ferrer Ibanez 0cbdd2a82a [RISCV] Fix isStoreToStackSlot
Because of the layout of stores (that don't have a destination operand)
this check is exactly the same as the one in
RISCVInstrInfo::isLoadFromStackSlot.
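
A sketch of what the corrected hook can look like (opcode list abbreviated;
operand layout assumed: stored value, base, offset):

```
unsigned RISCVInstrInfo::isStoreToStackSlot(const MachineInstr &MI,
                                            int &FrameIndex) const {
  switch (MI.getOpcode()) {
  default:
    return 0;
  case RISCV::SB:
  case RISCV::SH:
  case RISCV::SW:
  case RISCV::SD:
    break;
  }
  // Same shape as isLoadFromStackSlot: a frame-index base with zero offset.
  if (MI.getOperand(1).isFI() && MI.getOperand(2).isImm() &&
      MI.getOperand(2).getImm() == 0) {
    FrameIndex = MI.getOperand(1).getIndex();
    return MI.getOperand(0).getReg();
  }
  return 0;
}
```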

Differential Revision: https://reviews.llvm.org/D81805
2020-07-14 12:36:42 +00:00
Roger Ferrer Ibanez c1d021e2cc [NFC][RISCV] Test for D81805
New test to show the changes after D81805 is committed.

Differential Revision: https://reviews.llvm.org/D83750
2020-07-14 12:35:31 +00:00
Paul Walker 6e198aae1d [SelectionDAG] Prevent warnings when extracting fixed length vector from scalable.
ComputeNumSignBits and computeKnownBits both trigger "Scalable flag
may be dropped" warnings when a fixed length vector is extracted
from a scalable vector.  This patch assumes nothing about the
demanded elements thus matching the behaviour when extracting a
scalable vector from a scalable vector.
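
A sketch of the approach (assumed fragment from the EXTRACT_SUBVECTOR
handling):

```
// When extracting a fixed-length vector from a scalable source we cannot
// narrow the demanded-elements mask without dropping the scalable flag, so
// query the source as if all of its elements were demanded, matching the
// scalable-from-scalable case.
if (Src.getValueType().isScalableVector())
  return ComputeNumSignBits(Src, Depth + 1);
```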

Differential Revision: https://reviews.llvm.org/D83642
2020-07-14 11:12:56 +00:00
Sam Elliott 1d15bbb9d9 Revert "[RISCV] Avoid Splitting MBB in RISCVExpandPseudo"
This reverts commit 97106f9d80.

This is based on feedback from https://reviews.llvm.org/D82988#2147105
2020-07-14 11:15:01 +01:00
Jay Foad 5ab2e14d31 [AMDGPU] Fix typos in performCtlz_CttzCombine()
Fix two obvious errors in the code and also update the test check.
Also add one test to catch the failure.

Patch by Ruiling Song!

Differential Revision: https://reviews.llvm.org/D83280
2020-07-14 10:18:18 +01:00
Sander de Smalen a8f4f85d84 [AArch64][SVE] Remove erroneous assert in resolveFrameOffsetReference
The code already supports addressing a fixed-size stack object from
the frame-pointer, by first subtracting sizeof(SVE area) from FP.

Reviewers: efriedma, cameron.mcinally, david-arm, rengolin

Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D83125
2020-07-14 09:22:45 +01:00
Jay Foad 1658b8d7dd [AMDGPU] Avoid using s_cmpk when src0 is not register
The hardware spec requires src0 of s_cmpk to be a register, so we
should not optimize s_cmp to s_cmpk if src0 is not a register.
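
A sketch of the guard (fragment; names assumed):

```
// Only shrink s_cmp_* to s_cmpk_* when src0 is a register and src1 is an
// immediate; the hardware requires s_cmpk's src0 to be a register.
const MachineOperand &Src0 = MI.getOperand(0);
const MachineOperand &Src1 = MI.getOperand(1);
if (!Src0.isReg() || !Src1.isImm())
  return;
```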

Patch by Ruiling Song!
2020-07-14 09:05:53 +01:00
David Sherwood 02650ac036 [SVE][CodeGen] Add README for SVE-related warnings in tests
I have added a new file:

  llvm/test/CodeGen/AArch64/README

that describes what to do in the event one of the SVE codegen tests
fails the warnings check. In addition, I've added comments to all
the relevant SVE tests pointing users at the README file.

Differential Revision: https://reviews.llvm.org/D83467
2020-07-14 08:31:10 +01:00
David Sherwood 3b8eaf26db [SVE][CodeGen] Fix implicit TypeSize->uint64_t conversion in TransformFPLoadStorePair
In DAGCombiner::TransformFPLoadStorePair we were dropping the scalable
property of TypeSize when trying to create an integer type of equivalent
size. In fact, this optimisation makes no sense for scalable types
since we don't know the size at compile time. I have changed the code
to bail out when encountering scalable type sizes.
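
A sketch of the bail-out (fragment; assumed shape):

```
// The transform replaces an FP load/store pair with an integer load/store
// of the same size, which requires a compile-time-known size.
TypeSize VTSize = VT.getSizeInBits();
if (VTSize.isScalable())
  return SDValue();
EVT IntVT = EVT::getIntegerVT(*DAG.getContext(), VTSize.getFixedSize());
```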

I've added a test to

  llvm/test/CodeGen/AArch64/sve-fp.ll

that exercises this code path. The test already emits an error if it
encounters warnings due to implicit TypeSize->uint64_t conversions.

Differential Revision: https://reviews.llvm.org/D83572
2020-07-14 08:07:30 +01:00
Carl Ritson e5f022cad9 [AMDGPU][NFC] Tidy sgpr-control-flow.ll whitespace
Pre-commit clean up for D83641.
2020-07-14 16:03:05 +09:00
Carl Ritson 74c14202d9 [AMDGPU] Propagate dead flag during pre-RA exec mask optimizations
Preserve SCC dead flags in SIOptimizeExecMaskingPreRA.
This helps with removing redundant s_andn2 instructions later.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D83637
2020-07-14 12:53:43 +09:00
Amy Kwan 62f5ba624b [PowerPC][Power10] Implement Test LSB by Byte Builtins in LLVM/Clang
This patch implements builtins for the Test LSB by Byte instruction introduced
in Power10.

Differential Revision: https://reviews.llvm.org/D82431
2020-07-13 22:47:47 -05:00
Amara Emerson 64eb3a4915 [AArch64][GlobalISel] Add post-legalize combine for sext_inreg(trunc(sextload)) -> copy
On AArch64 we generate redundant G_SEXTs or G_SEXT_INREGs because of this.

Differential Revision: https://reviews.llvm.org/D81993
2020-07-13 20:27:45 -07:00
Kai Luo d4e7d126b0 [PowerPC] Generate CFI directives when probing in prologue
Add missing CFI directives when probing in prologue if
`stack-clash-protection` is enabled.

Differential Revision: https://reviews.llvm.org/D83276
2020-07-14 02:56:12 +00:00
Fangrui Song eafe7c14ea [PowerPC] Fix combineVectorShuffle regression after D77448
Commit 1fed131660 assumed that NewShuffle (the shuffle vector
canonicalization result) would always be a ShuffleVectorSDNode, which may
be false (it may be a BITCAST node):

```
...
t12: v4i32 = scalar_to_vector t2
t15: v16i8 = bitcast t12  # LHS
t17: v16i8 = vector_shuffle<u,u,u,u,u,u,u,u,0,1,2,3,u,u,u,u> t15, undef:v16i8  # SVN
```
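
A sketch of the fix (fragment; assumed shape):

```
// NewShuffle is the canonicalization result; it can be a BITCAST rather
// than a shuffle, so don't cast unconditionally.
auto *NewSVN = dyn_cast<ShuffleVectorSDNode>(NewShuffle.getNode());
if (!NewSVN)
  return NewShuffle;
```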

Reviewed By: #powerpc, nemanjai

Differential Revision: https://reviews.llvm.org/D83617
2020-07-13 16:57:27 -07:00
Vedant Kumar 528a1c56d9 Check output in test/CodeGen/Generic/MIRStripDebug/no-metadata-present.mir, NFC 2020-07-13 15:15:49 -07:00
Vedant Kumar e51c7fb842 [debugify] Add targeted test for 2fa656c, NFC
https://reviews.llvm.org/D78411 introduced test changes which relied on
the ability to strip debugify metadata even if module-level metadata is
missing. This introduces a more targeted test for that ability.
2020-07-13 14:40:12 -07:00
Matt Arsenault 23ec773d19 GlobalISel: Implement fewerElementsVector for saturating add/sub 2020-07-13 14:46:40 -04:00
Matt Arsenault 6a8c11a11f GlobalISel: Implement widenScalar for saturating add/sub
Add a placeholder legality rule for AMDGPU until the rest of the
actions are handled.
2020-07-13 14:46:40 -04:00
Matt Arsenault c0ee2d7468 AMDGPU/GlobalISel: Add baseline add/sub sat legalization tests 2020-07-13 14:46:40 -04:00
Matt Arsenault 87f8a4f9a2 AMDGPU/GlobalISel: Add tests for 96-bit add/sub/mul
I almost regressed these, so add tests for them.
2020-07-13 14:07:34 -04:00
Hiroshi Yamauchi fb558ccae7 [PGO][PGSO] Add profile guided size optimization to X86ISelDAGToDAG.
Differential Revision: https://reviews.llvm.org/D83331
2020-07-13 10:28:09 -07:00
Hiroshi Yamauchi 153a0b8906 [PGO][PGSO] Add profile guided size optimization to the X86 LEA fixup.
Differential Revision: https://reviews.llvm.org/D83330
2020-07-13 09:46:22 -07:00
Sanjay Patel 8779b11410 [DAGCombiner] rot i16 X, 8 --> bswap X
We have this generic transform in IR (instcombine),
but as shown in PR41098:
http://bugs.llvm.org/PR41098
...the pattern may emerge in codegen too.

x86 has a potential refinement/reversal opportunity here,
but that should come later or needs a target hook to
avoid the transform. Converting to bswap is the more
specific form, so we should use it if it is available.
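
For illustration (example is mine), the canonical source pattern:

```
// A 16-bit rotate by 8 swaps the two bytes, i.e. it is exactly a bswap.
unsigned short rot8(unsigned short x) {
  return (unsigned short)((x << 8) | (x >> 8));
}
```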
2020-07-13 12:01:53 -04:00
Sanjay Patel 69fff1fc49 [x86] add tests for bswap/rotate; NFC 2020-07-13 12:01:53 -04:00
Pavel Iliin 8f7d3430b7 [ARM][NFC] More detailed vbsl checks in ARM & Thumb2 tests. 2020-07-13 17:00:43 +01:00
Sanjay Patel 2df46a5743 [DAGCombiner] allow load/store merging if pairs can be rotated into place
This carves out an exception for a pair of consecutive loads that are
reversed from the consecutive order of a pair of stores. All of the
existing profitability/legality checks for the memops remain between
the 2 altered hunks of code.

This should give us the same x86 base-case asm that gcc gets in
PR41098 and PR44895:
http://bugs.llvm.org/PR41098
http://bugs.llvm.org/PR44895

I think we are missing a potential subsequent conversion to use "movbe"
if the target supports that. That might be similar to what AArch64
would use to get "rev16".

Differential Revision: https://reviews.llvm.org/D83567
2020-07-13 08:57:00 -04:00
Sanjay Patel f1bbf3acb4 Revert "[DAGCombiner] allow load/store merging if pairs can be rotated into place"
This reverts commit 591a3af5c7.
The commit message was cut off and failed to include the review citation.
2020-07-13 08:55:29 -04:00
Sanjay Patel 591a3af5c7 [DAGCombiner] allow load/store merging if pairs can be rotated into place
This carves out an exception for a pair of consecutive loads that are
reversed from the consecutive order of a pair of stores. All of the
existing profitability/legality checks for the memops remain between
the 2 altered hunks of code.

This should give us the same x86 base-case asm that gcc gets in
PR41098 and PR44895:i
http://bugs.llvm.org/PR41098
http://bugs.llvm.org/PR44895

I think we are missing a potential subsequent conversion to use "movbe"
if the target supports that. That might be similar to what AArch64
would use to get "rev16".

Differential Revision:
2020-07-13 08:53:06 -04:00
Sjoerd Meijer 595270ae39 [ARM][MVE] Refactor option -disable-mve-tail-predication
This refactors option -disable-mve-tail-predication to take different arguments
so that we have 1 option to control tail-predication rather than several
different ones.

This is also a prep step for D82953, in which we want to reject reductions
unless that is requested with this option.

Differential Revision: https://reviews.llvm.org/D83133
2020-07-13 13:40:33 +01:00
Mirko Brkusanin 38998cfa9c [AMDGPU][GlobalISel] Fix subregister index for EXEC register in selectBallot.
Temporarily remove subregister for EXEC in selectBallot added in
https://reviews.llvm.org/D83214 to fix failures on expensive checks buildbot.
2020-07-13 13:35:34 +02:00
Paul Walker 319a97b5e2 [SVE] Ensure fixed length vector fptrunc operations bigger than NEON are not considered legal.
Differential Revision: https://reviews.llvm.org/D83568
2020-07-13 11:16:30 +00:00
Mirko Brkusanin ce23e54162 [AMDGPU][GlobalISel] Select llvm.amdgcn.ballot
Select ballot intrinsic for GlobalISel.

Differential Revision: https://reviews.llvm.org/D83214
2020-07-13 12:14:43 +02:00
Petar Avramovic fd85b40aee [GlobalISel][InlineAsm] Fix buildCopy for inputs
Check that input size matches size of destination reg class.
Attempt to extend input size when needed.

Differential Revision: https://reviews.llvm.org/D83384
2020-07-13 10:52:33 +02:00
Kai Luo ac8dc526c4 [PowerPC] Enhance tests for D83276. NFC. 2020-07-13 04:37:09 +00:00
Qiu Chaofan b6912c879e [PowerPC] Support constrained conversion in SPE target
This patch adds support for constrained int/fp conversion between
signed/unsigned i32 and f32/f64.

Reviewed By: jhibbits

Differential Revision: https://reviews.llvm.org/D82747
2020-07-13 12:18:36 +08:00
Fangrui Song 4d5fd0ee5e [MC][RISCV] Set UseIntegratedAssembler to true
to align with most other targets. Also, -fintegrated-as is the default
for clang -target riscv*.
2020-07-12 21:04:48 -07:00
Craig Topper f8f007e378 [X86] Consistently use 128 as the PSHUFB/VPPERM index for zero
Bit 7 of the index controls zeroing; the other bits are ignored when bit 7 is set. Shuffle lowering was using 128 and shuffle combining was using 255. Seems like we should be consistent.

This patch changes shuffle combining to use 128 to match lowering.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D83587
2020-07-12 10:52:43 -07:00
Yonghong Song 152a9fef1b BPF: permit .maps section variables with typedef type
Currently, llvm when see a global variable in .maps section,
it ensures its type must be a struct type. Then pointee
will be further evaluated for the structure members.
In normal cases, the pointee type will be skipped.

Although this is what current all bpf programs are doing,
but it is a little bit restrictive. For example, it is legitimate
for users to have:
```
typedef struct { int key_size; int value_size; } __map_t;
__map_t map __attribute__((section(".maps")));
```

This patch lifts this restriction so that a typedef of
a struct type is also allowed for .maps section variables.
To avoid creating unnecessary fixup entries when traversal
starts with a typedef/struct type, the new implementation
first traverses all map struct members and then traverses
the typedef/struct type. This way, no fixup entries are
generated in the internal BTFDebug implementation.

Two new unit tests are added for typedef and const
struct in .maps section. Also tested with kernel bpf selftests.

Differential Revision: https://reviews.llvm.org/D83638
2020-07-12 09:42:25 -07:00
Sanjay Patel 39009a8245 [DAGCombiner] tighten fast-math constraints for fma fold
fadd (fma A, B, (fmul C, D)), E --> fma A, B, (fma C, D, E)

This is only allowed when "reassoc" is present on the fadd.

As discussed in D80801, this transform goes beyond
what is allowed by "contract" FMF (-ffp-contract=fast).
That is because we are fusing the trailing add of 'E' with a
multiply, but without "reassoc", the code mandates that the
products A*B and C*D are added together before adding in 'E'.

I've added this example to the LangRef to try to clarify the
meaning of "contract". If that seems reasonable, we should
probably do something similar for the clang docs because
there does not appear to be any formal spec for the behavior
of -ffp-contract=fast.

Differential Revision: https://reviews.llvm.org/D82499
2020-07-12 08:51:49 -04:00
Craig Topper 47872adf6a [X86] Add test cases for missed opportunities to use vpternlog due to a bitcast between the logic ops.
These test cases fail to use vpternlog because the AND was converted
to a blend shuffle and then converted back to AND during shuffle lowering.
This results in the AND having a different type than it started with.
This prevents our custom matching logic from seeing the two logic ops.
2020-07-11 12:54:52 -07:00
Christudasan Devadasan d7a05698ef [AMDGPU] Move LowerSwitch pass to CodeGenPrepare.
It is possible that the LowerSwitch pass leaves certain blocks
unreachable from the entry. If not removed, these dead blocks
can cause undefined behavior in the subsequent passes.
It caused a crash in the AMDGPU backend after the instruction
selection when a PHI node has its incoming values coming from
these unreachable blocks.

In the AMDGPU pass flow, the last invocation of UnreachableBlockElim
precedes where LowerSwitch is currently placed, so it eventually
misses the opportunity to get these blocks eliminated.
This patch ensures that the LowerSwitch pass gets inserted earlier
to make use of the existing unreachable block elimination pass.

Reviewed By: sameerds, arsenm

Differential Revision: https://reviews.llvm.org/D83584
2020-07-11 16:33:38 +05:30
Wang, Pengfei e628092524 [X86][MMX] Optimize MMX shift intrinsics.
Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D83534
2020-07-11 11:16:23 +08:00
Jinsong Ji 3e3acc1cc7 [PowerPC][MachinePipeliner] Enable pipeliner if hasInstrSchedModel
P9 is the only one with InstrSchedModel, but we may have more in the
future, we should not hardcoded it to P9, check hasInstrSchedModel
instead.

Reviewed By: hfinkel

Differential Revision: https://reviews.llvm.org/D83590
2020-07-11 02:24:12 +00:00
Ben Shi 28acaf8423 [RISCV][test] Add a test for (mul (add x, c1), c2) -> (add (mul x, c2), c1*c2) transformation
Reviewed By: lenary, MaskRay

Differential Revision: https://reviews.llvm.org/D83159
2020-07-10 18:33:12 -07:00
Thomas Lively b59c6fcaf3 [WebAssembly] Prefer v128.const for constant splats
In BUILD_VECTOR lowering, we used to generally prefer using splats
over v128.const instructions because v128.const has a very large
encoding. However, in d5b7a4e2e8 we switched to preferring consts
because they are expected to be more efficient in engines. This patch
updates the ISel patterns to match this current preference.

Differential Revision: https://reviews.llvm.org/D83581
2020-07-10 18:27:52 -07:00
Matt Arsenault 31f4e43f3f AMDGPU: Remove .value_type from kernel metadata
This doesn't appear used for anything, and is emitted incorrectly
based on the description. This also depends on the IR type, and
pointee element type.
2020-07-10 18:16:31 -04:00
Craig Topper 122a45fbac [X86] Add isel patterns for matching broadcast vpternlog if the ternlog and the broadcast have different types. 2020-07-10 15:15:02 -07:00
Lei Huang 90b1a710ae [PowerPC] Enable default support of quad precision operations
Summary: Remove option guarding support of quad precision operations.

Reviewers: nemanjai, #powerpc, steven.zhang

Reviewed By: nemanjai, #powerpc, steven.zhang

Subscribers: qiucf, wuzish, nemanjai, hiraditya, kbarton, shchenz, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83437
2020-07-10 13:27:48 -05:00
Luke Geeson 954db63cd1 [ARM] Add Cortex-A78 and Cortex-X1 Support for Clang and LLVM
This patch upstreams support for the Arm-v8 Cortex-A78 and Cortex-X1
processors for AArch64 and ARM.

In detail:
- Adding cortex-a78 and cortex-x1 as cpu options for aarch64 and arm targets in clang
- Adding Cortex-A78 and Cortex-X1 CPU names and ProcessorModels in llvm

Details of the CPUs can be found here:
https://www.arm.com/products/cortex-x

https://www.arm.com/products/silicon-ip-cpu/cortex-a/cortex-a78

The following people contributed to this patch:
- Luke Geeson
- Mikhail Maltsev

Reviewers: t.p.northover, dmgreen

Reviewed By: dmgreen

Subscribers: dmgreen, kristof.beyls, hiraditya, danielkiss, cfe-commits,
llvm-commits, miyuki

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D83206
2020-07-10 18:24:11 +01:00
Kang Zhang e5123ea248 [NFC][PowerPC] Add a new MIR file to test mi-peephole pass 2020-07-10 16:08:07 +00:00
Zequan Wu 1fbb719470 [LPM] Port CGProfilePass from NPM to LPM
Reviewers: hans, chandlerc!, asbirlea, nikic

Reviewed By: hans, nikic

Subscribers: steven_wu, dexonsmith, nikic, echristo, void, zhizhouy, cfe-commits, aeubanks, MaskRay, jvesely, nhaehnle, hiraditya, kerbowa, llvm-commits

Tags: #llvm, #clang

Differential Revision: https://reviews.llvm.org/D83013
2020-07-10 09:04:51 -07:00
Florian Hahn 864586d0fd [ARM] Pass -verify-machineinstr to test and XFAIL until fixed.
Some bots run with -verify-machineinstr enabled. Add it to the new test
and XFAIL it until fixed.
2020-07-10 16:44:52 +01:00
Florian Hahn eb5c7f6b8f [ARM] Add test with tcreturn and debug value.
In the attached test case, a non-terminator instruction (DBG_VALUE) is
inserted after a terminator, producing an invalid MBB.
2020-07-10 16:32:21 +01:00
Sanjay Patel d84b4e163d [AArch64][x86] add tests for rotated store merge; NFC 2020-07-10 11:28:51 -04:00
Paul Walker f78e6a3095 [SVE] Code generation for fixed length vector truncates.
Lower fixed length vector truncates to a sequence of SVE UZP1 instructions.

Differential Revision: https://reviews.llvm.org/D83395
2020-07-10 10:37:19 +00:00
Mirko Brkusanin cf40db21af [AMDGPU][GlobalISel] Fix G_AMDGPU_TBUFFER_STORE_FORMAT mapping
Add missing mappings and tablegen definitions for TBUFFER_STORE_FORMAT.

Differential Revision: https://reviews.llvm.org/D83240
2020-07-10 11:32:32 +02:00
Simon Pilgrim 77133cc1e2 [X86][AVX] Attempt to fold PACK(SHUFFLE(X,Y),SHUFFLE(X,Y)) -> SHUFFLE(PACK(X,Y)).
Truncations lowered as shuffles of multiple (concatenated) vectors often leave us with lane-crossing shuffles that feed a PACKSS/PACKUS. If both shuffles are fed from the same 2 vector sources, then we can PACK the sources directly and shuffle the result instead.

This is currently limited to whole i128 lanes in a 256-bit vector, but we can extend this if the need arises (but I'm not seeing many examples in real world code).
2020-07-10 09:33:27 +01:00
Thomas Lively 043eaa9a4a [WebAssembly][NFC] Simplify vector shift lowering and add tests
This patch builds on 0d7286a652 by simplifying the code for detecting
splat values and adding new tests demonstrating the lowering of
splatted absolute value shift amounts, which are common in code
generated by Halide. The lowering is very bad right now, but
subsequent patches will improve it considerably. The tests will be
useful for evaluating the improvements in those patches.

Reviewed By: aheejin

Differential Revision: https://reviews.llvm.org/D83493
2020-07-10 00:18:59 -07:00
Amara Emerson ce22527c0c [AArch64][GlobalISel] Add more specific debug info tests for 613f12dd8e.
As requested, these tests check for specific debug locs on the output of the
legalizer. The only one that I couldn't write was for moreElementsVector, which
AFAICT we don't trigger on AArch64.
2020-07-09 17:13:16 -07:00
Eli Friedman 56ae2cebcd [AArch64][SVE] Add lowering for llvm.fma.
This is currently bare-bones; we aren't taking advantage of any of the
FMA variant instructions.  But it's enough to at least generate
code.

Differential Revision: https://reviews.llvm.org/D83444
2020-07-09 16:12:41 -07:00
Fangrui Song c025bdf25a Revert D83013 "[LPM] Port CGProfilePass from NPM to LPM"
This reverts commit c92a8c0a0f.

It breaks builds and has unaddressed review comments.
2020-07-09 13:34:04 -07:00
Zequan Wu c92a8c0a0f [LPM] Port CGProfilePass from NPM to LPM
Reviewers: hans, chandlerc!, asbirlea, nikic

Reviewed By: hans, nikic

Subscribers: steven_wu, dexonsmith, nikic, echristo, void, zhizhouy, cfe-commits, aeubanks, MaskRay, jvesely, nhaehnle, hiraditya, kerbowa, llvm-commits

Tags: #llvm, #clang

Differential Revision: https://reviews.llvm.org/D83013
2020-07-09 13:03:42 -07:00
Puyan Lotfi 7e169cec74 [NFC][test] Adding fastcc test case for promoted 16-bit integer bitcasts.
The following change fixed an assert in the SelectionDAG ISel legalizer for
some CCs on armv7: https://reviews.llvm.org/D82552

I noticed that this fix also fixes the assert when using fastcc, so I am
adding a fastcc regression test here.

Differential Revision: https://reviews.llvm.org/D82443
2020-07-09 11:38:49 -07:00
Craig Topper 918e653186 [X86] Immediately call LowerShift from lowerBuildVectorToBitOp.
If we don't immediately lower the vector shift, the splat
constant vector we created may get turned into a constant pool
load before we get around to lowering the shift. This makes it
a lot more difficult to create a shift by constant. Sometimes we
fail to see through the constant pool at all and end up trying
to lower as if it was a variable shift. This requires custom
handling and may create an unsupported vselect on pre-sse-4.1
targets. Since we're after LegalizeVectorOps we are unable to
legalize the unsupported vselect as that code is in LegalizeVectorOps
rather than LegalizeDAG.

So calling LowerShift immediately ensures that we get to see the
splat constant.
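
A sketch of the change (fragment; assumed shape):

```
// Lower the shift right away, while the splat amount is still visible as a
// BUILD_VECTOR constant rather than a constant-pool load.
SDValue Shift = DAG.getNode(Opcode, DL, VT, Op, Amt);
return LowerShift(Shift, Subtarget, DAG);
```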

Fixes PR46527.

Differential Revision: https://reviews.llvm.org/D83455
2020-07-09 10:51:29 -07:00
Hiroshi Yamauchi 2c1a9006dd [PGO][PGSO] Add profile guided size optimization to X86 ISel Lowering. 2020-07-09 10:43:45 -07:00
Hiroshi Yamauchi 06fc125d8c [PGO][PGSO] Add profile guided size optimization tests to X86 ISel Lowering. 2020-07-09 09:56:01 -07:00
Matt Arsenault fdde69aac9 AMDGPU/GlobalISel: Work around verifier error in test
The unfortunate split between finalizeLowering and the selector pass
means there's a point where the verifier fails. The DAG selector pass
skips the verifier, but this seems to not work when using the
GlobalISel fallback.
2020-07-09 10:24:16 -04:00
Simon Pilgrim f54402b63a [X86][AVX] Attempt to fold extract_subvector(shuffle(X)) -> extract_subvector(X)
If we're extracting a subvector from a shuffle that is shuffling entire subvectors we can peek through and extract the subvector from the shuffle source instead.

This helps remove some cases where concat_vectors(extract_subvector(),extract_subvector()) legalizations has resulted in BLEND/VPERM2F128 shuffles of the subvectors.
2020-07-09 14:09:24 +01:00
Sam Elliott 97106f9d80 [RISCV] Avoid Splitting MBB in RISCVExpandPseudo
Since the `RISCVExpandPseudo` pass has been split from
`RISCVExpandAtomicPseudo` pass, it would be nice to run the former as
early as possible (The latter has to be run as late as possible to
ensure correctness). Running earlier means we can reschedule these pairs
as we see fit.

Running earlier in the machine pass pipeline is good, but would mean
teaching many more passes about `hasLabelMustBeEmitted`. Splitting the
basic blocks also pessimises possible optimisations because some
optimisations are MBB-local, and others are disabled if the block has
its address taken (which is notionally what `hasLabelMustBeEmitted`
means).

This patch uses a new approach of setting the pre-instruction symbol on
the AUIPC instruction to a temporary symbol and referencing that. This
avoids splitting the basic block, but allows us to reference exactly the
instruction that we need to. Notionally, this approach seems more
correct because we do actually want to address a specific instruction.
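
A sketch of the mechanism (fragment; exact API use in the patch may differ):

```
// Pin a temporary symbol to the AUIPC itself and let the ADDI reference it
// through a %pcrel_lo fixup, so no basic-block split is needed.
MCSymbol *AUIPCSymbol = MF.getContext().createTempSymbol();
MachineInstr *MIAUIPC =
    BuildMI(MBB, MI, DL, TII->get(RISCV::AUIPC), ScratchReg)
        .addGlobalAddress(GV, 0, RISCVII::MO_PCREL_HI);
MIAUIPC->setPreInstrSymbol(MF, AUIPCSymbol);
BuildMI(MBB, MI, DL, TII->get(RISCV::ADDI), DestReg)
    .addReg(ScratchReg)
    .addSym(AUIPCSymbol, RISCVII::MO_PCREL_LO);
```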

This then allows the pass to be moved much earlier in the pass pipeline,
before both scheduling and register allocation. However, to do so we
must leave the MIR in SSA form (by not redefining registers), and so use
a virtual register for the intermediate value. By using this virtual
register, this pass now has to come before register allocation.

Reviewed By: luismarques, asb

Differential Revision: https://reviews.llvm.org/D82988
2020-07-09 13:54:13 +01:00
Paul Walker 614fb09645 [SVE] Disable some BUILD_VECTOR related code generator features.
Fixed length vector code generation for SVE does not yet custom
lower BUILD_VECTOR and instead relies on expansion.  At the same
time custom lowering for VECTOR_SHUFFLE is also not available so
this patch updates isShuffleMaskLegal to reject vector types that
require SVE.

Related to this it also prevents the merging of stores after
legalisation because this only works when BUILD_VECTOR is either
legal or can be eliminated. When this is not the case the code
generator enters an infinite legalisation loop.

Differential Revision: https://reviews.llvm.org/D83408
2020-07-09 10:47:04 +00:00
Lucas Prates fc39a9ca0e [CodeGen] Matching promoted type for 16-bit integer bitcasts from fp16 operand
Summary:
When legalizing a bitcast operation from an fp16 operand to an i16 on a
target that requires both input and output types to be promoted to
32-bits, an assertion can fail when building the new node due to a
mismatch between the operation's result size and the type specified to
the node.

This patch fixes the issue by making sure the bit widths of the types
match for the FP_TO_FP16 node, covering the difference with an extra
ANYEXTEND operation.

Reviewers: ostannard, efriedma, pirama, jmolloy, plotfi

Reviewed By: efriedma

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82552
2020-07-09 09:46:17 +01:00
Kai Luo e2b93185b8 [PowerPC] Only make copies of registers on stack in variadic function when va_start is called
On PPC64, for a variadic function, if va_start is not called, it won't
access any variadic argument on stack, thus we can save stores of
registers used to pass arguments.
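
An illustrative case (example is mine):

```
#include <stdarg.h>
// This function never calls va_start, so with this change the PPC64 prologue
// no longer needs to store the unnamed-argument registers to the stack.
int first_only(int n, ...) { return n; }
```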

Differential Revision: https://reviews.llvm.org/D82361
2020-07-09 07:18:17 +00:00
Matt Arsenault 74a148ad39 GlobalISel: Verify G_BITCAST changes the type
Updated the AArch64 tests the best I could with my vague, inferred
understanding of AArch64 register banks. As far as I can tell, there
is only one 32-bit/64-bit type which will use the gpr register bank,
so we have to use the fpr bank for the other operand.
2020-07-08 17:16:27 -04:00
Jay Foad 47788b97a9 SILoadStoreOptimizer: add support for GFX10 image instructions
GFX10 image instructions use one or more address operands starting at
vaddr0, instead of a single vaddr operand, to allow for NSA forms.

Differential Revision: https://reviews.llvm.org/D81675
2020-07-08 19:15:46 +01:00
Jay Foad a8816ebee0 [AMDGPU] Fix and simplify AMDGPULegalizerInfo::legalizeUDIV_UREM32Impl
Use the algorithm from AMDGPUCodeGenPrepare::expandDivRem32.

Differential Revision: https://reviews.llvm.org/D83383
2020-07-08 19:14:49 +01:00
Jay Foad ecac951be9 [AMDGPU] Fix and simplify AMDGPUTargetLowering::LowerUDIVREM
Use the algorithm from AMDGPUCodeGenPrepare::expandDivRem32.

Differential Revision: https://reviews.llvm.org/D83382
2020-07-08 19:14:49 +01:00
Jay Foad f4bd01c191 [AMDGPU] Fix and simplify AMDGPUCodeGenPrepare::expandDivRem32
Fix the division/remainder algorithm by adding a second quotient
refinement step, which is required in some cases like
0xFFFFFFFFu / 0x11111111u (https://bugs.llvm.org/show_bug.cgi?id=46212).

Also document, rewrite and simplify it by ensuring that we always have a
lower bound on inv(y), which simplifies the UNR step and the quotient
refinement steps.
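
A host-side model of the structure (illustrative only; the real code builds
IR and starts from a hardware reciprocal estimate, which is less precise
than the integer reciprocal used here):

```
#include <cstdint>

uint32_t udiv32(uint32_t x, uint32_t y) { // assumes y != 0
  uint64_t inv = (uint64_t(1) << 32) / y;           // lower bound on 2^32/y
  uint32_t q = uint32_t((uint64_t(x) * inv) >> 32); // estimate, q <= x/y
  uint32_t r = x - q * y;
  // With a lower bound on inv(y), each refinement step corrects the
  // quotient by at most one.
  if (r >= y) { ++q; r -= y; } // first refinement step
  if (r >= y) { ++q; r -= y; } // second refinement step (the one added here)
  return q;
}
```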

Differential Revision: https://reviews.llvm.org/D83381
2020-07-08 19:14:48 +01:00
Paul Walker bb35f0fd89 [SelectionDAG] Fix incorrect offset when expanding CONCAT_VECTORS.
ExpandVectorBuildThroughStack is also used for CONCAT_VECTORS.
However, when calculating the offsets for each of the operands we
incorrectly use the element size rather than the actual size and thus
the stores overlap.
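
A sketch of the fix (fragment; assumed shape):

```
// Step the stack offset by the operand's full store size: for
// CONCAT_VECTORS each operand is a whole subvector, not a single element.
unsigned OpSize = Op.getValueType().getStoreSize(); // was: element size
Offset += OpSize;
```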

Differential Revision: https://reviews.llvm.org/D83303
2020-07-08 15:39:25 +00:00
Ties Stuij 26a22478cd [CodeGen] Don't combine extract + concat vectors with non-legal types
Summary:
The following combine currently breaks in the DAGCombiner:

```
extract_vector_elt (concat_vectors v4i16:a, v4i16:b), x
   -> extract_vector_elt a, x
```

This happens because after we have combined these nodes we have inserted nodes
that use individual instances of the vector element type (i16 in the above
example). However this isn't a legal type on all backends, and when the combining pass calls
the legalizer it breaks as it expects types to already be legal. The type legalizer has
already been run, and running it again would make a mess of the nodes.

In the example code at least, the generated code is still efficient after the change.

Reviewers: miyuki, arsenm, dmgreen, lebedev.ri

Reviewed By: miyuki, lebedev.ri

Subscribers: lebedev.ri, wdng, hiraditya, steven.zhang, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83231
2020-07-08 15:29:57 +01:00
Sanjay Patel 9114900287 [x86] improve codegen for non-splat bit-masked vector compare and select (PR46531)
vselect ((X & Pow2C) == 0), LHS, RHS --> vselect ((shl X, C') < 0), RHS, LHS

Follow-up to D83073 - the non-splat mask cases where we actually see an
improvement are quite limited from what I can tell. AVX1 needs multiply
and blend capabilities and AVX2 needs vector shift and blend capabilities.
The intersection of those 2 constraints is only vectors with 32-bit or
64-bit elements.

XOP is/was better.

Differential Revision: https://reviews.llvm.org/D83181
2020-07-08 08:20:49 -04:00
Simon Pilgrim 9dc250db9d [X86][AVX] SimplifyDemandedVectorEltsForTargetShuffle - ensure mask is same size as constant size
Fixes test regression reported on D81791
2020-07-08 11:47:59 +01:00
Petar Avramovic 419c92a749 [GlobalISel][InlineAsm] Fix matching input constraints to mem operand
Mark matching input constraint to mem operand as not supported.

Differential Revision: https://reviews.llvm.org/D83235
2020-07-08 12:32:17 +02:00
Simon Pilgrim 75f9aa6ce0 [X86][AVX] Add SimplifyDemandedVectorEltsForTargetShuffle test for v32i8->v16i8 PSHUFB
On SKX targets we end up loading a v16i8 PSHUFB mask from a v32i8 constant and scaling incorrectly indexes the demanded elts mask - we're missing a check that the constant pool is the same size as the loaded mask.

Test case from D81791 post-commit review.
2020-07-08 11:26:33 +01:00
Paul Walker fb75451775 [SVE] Custom ISel for fixed length extract/insert_subvector.
We use extract_subvector and insert_subvector to "cast" between
fixed length and scalable vectors.  This patch adds custom c++
based ISel for the following cases:

  fixed_vector = ISD::EXTRACT_SUBVECTOR scalable_vector, 0
  scalable_vector = ISD::INSERT_SUBVECTOR undef(scalable_vector), fixed_vector, 0

Which result in either EXTRACT_SUBREG/INSERT_SUBREG for NEON sized
vectors or COPY_TO_REGCLASS otherwise.

Differential Revision: https://reviews.llvm.org/D82871
2020-07-08 09:49:28 +00:00
David Sherwood 15aeb805dc [CodeGen] Fix warnings in sve-ld1-addressing-mode-reg-imm.ll
For the GetElementPtr case in function
  AddressingModeMatcher::matchOperationAddr
I've changed the code to use the TypeSize class instead of relying
upon the implicit conversion to a uint64_t. As part of this we now
check for scalable types and if we encounter one just bail out for
now as the subsequent optimisations don't currently support them.
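
A sketch of the change (fragment; assumed shape):

```
// Keep the size as a TypeSize rather than implicitly converting it to
// uint64_t, and bail out for scalable types, which the folds below do not
// handle yet.
TypeSize TS = DL.getTypeAllocSize(GTI.getIndexedType());
if (TS.isScalable())
  return false;
int64_t TySize = TS.getFixedSize();
```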

This changes fixes up all warnings in the following tests:

  llvm/test/CodeGen/AArch64/sve-ld1-addressing-mode-reg-imm.ll
  llvm/test/CodeGen/AArch64/sve-st1-addressing-mode-reg-imm.ll

Differential Revision: https://reviews.llvm.org/D83124
2020-07-08 09:16:00 +01:00
Heejin Ahn 7e6793aa33 [WebAssembly] Generate unreachable after __stack_chk_fail
`__stack_chk_fail` does not return, but `unreachable` was not generated
following `call __stack_chk_fail`. This could generate an
invalid binary for functions with a return type, because
`__stack_chk_fail`'s return type is void and `call __stack_chk_fail` can
be the last instruction in a function whose return type is non-void.
Generating `unreachable` after it makes sure CFGStackify's
`fixEndsAtEndOfFunction` handles it correctly.

Reviewed By: tlively

Differential Revision: https://reviews.llvm.org/D83277
2020-07-08 01:02:05 -07:00
Ben Shi 1e9d0811c9 [RISCV] optimize addition with a pair of (addi imm)
For an addition with an immediate in specific ranges, a pair of
addi-addi can be generated instead of the ordinary lui-addi-add sequence.
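
For example (illustrative):

```
// addi's immediate range is [-2048, 2047], so an addend in [2048, 4094]
// (or its negative mirror) can be split across two addi instructions:
//   addi a0, a0, 2047
//   addi a0, a0, 2047
long add4094(long x) { return x + 4094; }
```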

Reviewed By: MaskRay, luismarques

Differential Revision: https://reviews.llvm.org/D82262
2020-07-07 18:57:28 -07:00
Ben Shi cb82de2960 [RISCV] Optimize multiplication by constant
... to shift/add or shift/sub.

Do not enable it on riscv32 with the M extension where decomposeMulByConstant
may not be an optimization.
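
For example (illustrative):

```
// Multiplication by 2^n +/- 1 becomes a shift plus an add or a sub:
//   x * 9 -> (x << 3) + x
//   x * 7 -> (x << 3) - x
long mul9(long x) { return x * 9; }
```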

Reviewed By: luismarques, MaskRay

Differential Revision: https://reviews.llvm.org/D82660
2020-07-07 18:50:24 -07:00
Matt Arsenault 23157f3bdb GlobalISel: Handle EVT argument lowering correctly
handleAssignments was assuming every argument type is an MVT, and
assignArg would always fail. This fixes one of the hacks in the
current AMDGPU calling convention code that pre-processes the
arguments.
2020-07-07 16:36:14 -04:00
Matt Arsenault 42bb481442 AMDGPU/GlobalISel: Fix skipping unused kernel arguments
The tests in a5b9ad7e9a actually failed
the verifier, which for some reason is not run by default. Also add tests
for 0-sized function arguments, which do not add entries to the
expected register lists.
2020-07-07 16:36:13 -04:00
Philip Reames b172cd7812 [Statepoint] Factor out logic for non-stack non-vreg lowering [almost NFC]
This is inspired by D81648.  The basic idea is to have the set of SDValues which are lowered as either constants or direct frame references explicit in one place, and to separate them clearly from the spilling logic.

This is not NFC in that the handling of constants larger than 64 bits has changed.  The old lowering would crash on values which could not be encoded as a sign extended 64 bit value.  The new lowering just spills all constants wider than 64 bits.  We could be consistent about doing the sext(Con64) optimization, but I happen to know that this code path is utterly unexercised in practice, so simple is better for now.
2020-07-07 13:34:28 -07:00
Zola Bridges 9d9e499840 [x86][seses] Add clang flag; Use lvi-cfi with seses
This patch creates a clang flag to enable SESES. This flag also ensures that
lvi-cfi is on when using seses via clang.

SESES should use lvi-cfi to mitigate returns and indirect branches.

The flag to enable the SESES functionality only, without lvi-cfi, is now
-x86-seses-enable-without-lvi-cfi, to warn users that part of the mitigation
is not enabled if they use this flag. This is useful in case folks want to
see the cost of SESES separately from the LVI-CFI.

Reviewed By: sconstab

Differential Revision: https://reviews.llvm.org/D79910
2020-07-07 13:20:13 -07:00
Simon Pilgrim 931ec74f7a [X86][AVX] Don't fold PEXTR(VBROADCAST_LOAD(X)) -> LOAD(X).
We were checking the VBROADCAST_LOAD element size against the extraction destination size instead of the extracted vector element size - PEXTRW/PEXTRB have implicit zext'ing so have i32 destination sizes for v8i16/v16i8 vectors, resulting in us extracting from the wrong part of a load.

This patch bails from the fold if the vector element sizes don't match, and we now use the target constant extraction code later on like the pre-AVX2 targets, fixing the test case.

Found by internal fuzzing tests.
2020-07-07 19:10:03 +01:00
Zola Bridges dfabffb195 [x86][lvi][seses] Use SESES at O0 for LVI mitigation
Use SESES as the fallback at O0 where the optimized LVI pass isn't desired due
to its effect on build times at O0.

I updated the LVI tests since this changes the code gen for the tests touched in the parent revision.

This is a follow up to the comments I made here: https://reviews.llvm.org/D80964

Hopefully we can continue the discussion here.

Also updated SESES to handle LFENCE instructions properly instead of adding
redundant LFENCEs. In particular, 1) no longer add an LFENCE if the current
instruction being processed is an LFENCE, and 2) no longer add an LFENCE if
the instruction right before the instruction being processed is an LFENCE.

Reviewed By: sconstab

Differential Revision: https://reviews.llvm.org/D82037
2020-07-07 11:05:09 -07:00
Thomas Lively 0d7286a652 [WebAssembly] Avoid scalarizing vector shifts in more cases
Since WebAssembly's vector shift instructions take a scalar shift
amount rather than a vector shift amount, we have to check in ISel
that the vector shift amount is a splat. Previously, we were checking
explicitly for splat BUILD_VECTOR nodes, but this change uses the
standard utilities for detecting splat values that can handle more
complex splat patterns. Since the C++ ISel lowering is now more
general than the ISel patterns, this change also simplifies shift
lowering by using the C++ lowering for all SIMD shifts rather than
mixing C++ and normal pattern-based lowering.

This change improves ISel for shifts to the point that the
simd-shift-unroll.ll regression test no longer tests the code path it
was originally meant to test. The bug corresponding to that regression
test is no longer reproducible with its original reported reproducer,
so rather than try to fix the regression test, this change just
removes it.

Differential Revision: https://reviews.llvm.org/D83278
2020-07-07 10:45:26 -07:00
Simon Pilgrim 6cff71e92e [X86][AVX] Add test case showing incorrect extraction from VBROADCAST_LOAD on AVX2 targets
On AVX2 we tend to lower BUILD_VECTOR of constants as broadcasts if we can, in this case a <2 x i16> non-uniform constant has been lowered as a <4 x i32> broadcast.

The test case shows that the extraction folding code has incorrectly extracted the wrong part (lower WORD) of the resulting i32 memory source.

Found by internal fuzzing tests.
2020-07-07 18:32:32 +01:00
Simon Pilgrim 3030e6b94b [X86][AVX] Add AVX2 tests to extractelement-load.ll 2020-07-07 18:32:32 +01:00
Biplob Mishra 62ba48b45f [PowerPC] Implement Vector Replace Builtins in LLVM
Provide the LLVM intrinsics needed to implement vector replace element
builtins in altivec.h which will be added in a subsequent patch.

Differential Revision: https://reviews.llvm.org/D83308
2020-07-07 12:22:52 -05:00
Sanjay Patel 642eed3713 [x86] fix miscompile in buildvector v16i8 lowering
In the test based on PR46586:
https://bugs.llvm.org/show_bug.cgi?id=46586
...we are inserting 16-bits into the high element of the vector, shuffling it
to element 0, and extracting 32-bits. But xmm1 was never initialized, so the
top 16-bits of the extract are undef without this patch.

(It seems like we could do better than this by recognizing that we only demand
a subsection of the build vector, but I want to make sure we fix the
miscompile 1st.)

This path is only used for pre-SSE4.1, and simpler patterns get squashed
somewhere along the way, so the test still includes a 'urem' as it did in the
original test from the bug report.

Differential Revision: https://reviews.llvm.org/D83319
2020-07-07 13:02:31 -04:00
Sanjay Patel 1c956a3eb9 [x86] add test for buildvector lowering miscompile (PR46586); NFC 2020-07-07 12:24:56 -04:00
Liu, Chen3 ea85ff82c8 [X86] Fix a bug when lowering byval arguments
When an argument has the 'byval' attribute and should be
passed on the stack according to the calling convention,
a stack copy would be emitted twice. This caused the
real value to be put on the stack where the pointer
should be passed.

Differential Revision: https://reviews.llvm.org/D83175
2020-07-07 21:49:31 +08:00
Kerry McLaughlin cdf2eef613 [SVE][CodeGen] Legalisation of unpredicated store instructions
Summary:
When splitting a store of a scalable type, the new address is
calculated in SplitVecOp_STORE using a vscale and an add instruction.

Reviewers: sdesmalen, efriedma, david-arm

Reviewed By: david-arm

Subscribers: tschuett, hiraditya, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83041
2020-07-07 11:47:10 +01:00
Kerry McLaughlin 5e8084beba [SVE][CodeGen] Legalisation of unpredicated load instructions
Summary:
When splitting a load of a scalable type, the new address is
calculated in SplitVecRes_LOAD using a vscale and an add instruction.

This patch also adds a DAG combiner fold to visitADD for vscale:
 - Fold (add (vscale(C0)), (vscale(C1))) to (vscale(C0 + C1))
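
An illustrative IR sketch (hypothetical function name), assuming both operands lower to ISD::VSCALE nodes with constant multipliers:

  declare i64 @llvm.vscale.i64()

  define i64 @vscale_sum() {
    ; (vscale * 4) + (vscale * 8) can now fold to (vscale * 12)
    %vs = call i64 @llvm.vscale.i64()
    %a = mul i64 %vs, 4
    %b = mul i64 %vs, 8
    %r = add i64 %a, %b
    ret i64 %r
  }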

Reviewers: sdesmalen, efriedma, david-arm

Reviewed By: david-arm

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82792
2020-07-07 11:05:03 +01:00
David Sherwood 9a1a7d888b [SVE] Add more warnings checks to clang and LLVM SVE tests
There are now more SVE tests in LLVM and Clang that do not
emit warnings related to invalid use of EVT::getVectorNumElements()
and VectorType::getNumElements(). For these tests I have added
additional checks that there are no warnings in order to prevent
any future regressions.

Differential Revision: https://reviews.llvm.org/D82943
2020-07-07 09:33:20 +01:00
David Sherwood 79d34a5a1b [SVE][CodeGen] Fix bug when falling back to DAG ISel
In an earlier commit 584d0d5c17 I
added functionality to allow AArch64 CodeGen support for falling
back to DAG ISel when Global ISel encounters scalable vector
types. However, it seems that we were not falling back early
enough as llvm::getLLTForType was still being invoked for scalable
vector types.

I've added a new fallback function to the call lowering class in
order to catch this problem early enough, rather than wait for
lowerFormalArguments to reject scalable vector types.

Differential Revision: https://reviews.llvm.org/D82524
2020-07-07 09:23:04 +01:00
David Sherwood c061e56e88 [CodeGen] Fix warnings in sve-vector-splat.ll and sve-trunc.ll
This patch fixes all remaining warnings in:

  llvm/test/CodeGen/AArch64/sve-trunc.ll
  llvm/test/CodeGen/AArch64/sve-vector-splat.ll

I hit some warnings related to getCopyToPartsVector. I fixed two
issues:

1. In widenVectorToPartType() we assumed that we'd always be
using BUILD_VECTOR nodes to expand from one vector type to another,
which is incorrect for scalable vector types. I've fixed this for now
by simply bailing out immediately for scalable vectors.
2. In getCopyToPartsVector() I've changed the code to compare
the element counts of different types.

Differential Revision: https://reviews.llvm.org/D83028
2020-07-07 09:21:47 +01:00
Nemanja Ivanovic 1b1539712e [PowerPC] Do not RAUW combined nodes in VECTOR_SHUFFLE legalization
When legalizing shuffles, we make an attempt to combine it into
a PPC specific canonical form that avoids a need for a swap. If the
combine is successful, we RAUW the node and the custom legalization
replaces the now dead node instead of the one it should replace.
Remove that erroneous call to RAUW.
2020-07-06 22:09:28 -05:00
Xiang1 Zhang 939d8309db [X86-64] Support Intel AMX Intrinsic
INTEL ADVANCED MATRIX EXTENSIONS (AMX).
AMX is a new programming paradigm: it has a set of 2-dimensional registers
(TILES) representing sub-arrays from a larger 2-dimensional memory image,
and operates on TILES.

These intrinsics use the direct TMM register number as their params.

Spec can be found in Chapter 3 here https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D83111
2020-07-07 10:13:40 +08:00
Biplob Mishra 0c6b6e28e7 [PowerPC] Implement Vector Splat Immediate Builtins in Clang
Implements builtins for the following prototypes:
  vector signed int vec_splati (const signed int);
  vector float vec_splati (const float);
  vector double vec_splatid (const float);
  vector signed int vec_splati_ins (vector signed int, const unsigned int,
                                    const signed int);
  vector unsigned int vec_splati_ins (vector unsigned int, const unsigned int,
                                      const unsigned int);
  vector float vec_splati_ins (vector float, const unsigned int, const float);

Differential Revision: https://reviews.llvm.org/D82520
2020-07-06 20:29:33 -05:00
Amy Kwan c13e3e2c2e [PowerPC][Power10] Exploit the xxsplti32dx instruction when lowering VECTOR_SHUFFLE.
This patch aims to exploit the xxsplti32dx XT, IX, IMM32 instruction when lowering VECTOR_SHUFFLEs.
We implement lowerToXXSPLTI32DX when lowering vector shuffles to check if:
- Element size is 4 bytes
- The RHS is a constant vector (and constant splat of 4-bytes)
- The shuffle mask is a suitable mask for the XXSPLTI32DX instruction where it is one of the 32 masks:
<0, 4-7, 2, 4-7>
<4-7, 1, 4-7, 3>

Differential Revision: https://reviews.llvm.org/D83245
2020-07-06 20:28:38 -05:00
Sanjay Patel ea71ba11ab [DAGCombiner] reassociate reciprocal sqrt expression to eliminate FP division
X / (fabs(A) * sqrt(Z)) --> X / sqrt(A*A*Z) --> X * rsqrt(A*A*Z)
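
An illustrative IR sketch of the motivating pattern (hypothetical function name), assuming fast-math flags on the relevant operations:

  define float @rsqrt_fold(float %x, float %a, float %z) {
    %fabs = call fast float @llvm.fabs.f32(float %a)
    %sqrt = call fast float @llvm.sqrt.f32(float %z)
    %mul = fmul fast float %fabs, %sqrt
    ; reassociates to %x * rsqrt(%a * %a * %z) when an estimate exists
    %div = fdiv fast float %x, %mul
    ret float %div
  }

  declare float @llvm.fabs.f32(float)
  declare float @llvm.sqrt.f32(float)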

In the motivating case from PR46406:
https://bugs.llvm.org/show_bug.cgi?id=46406
...this is restoring the sequence that was originally in the source code.
We extracted a term from within the sqrt because we do not know in
instcombine whether a target will expand a sqrt call.
Note: we could say that the transform in IR should be restricted, but
that would not solve the problem if the source was originally in the
pattern shown here.

This is a gray area for fast-math-flag requirements. I think we should at
least check fast-math-flags on the fdiv and fmul because I view this
transform as 2 pieces: reassociate the fmul operands and form reciprocal
from the fdiv (as with the existing transform). We could argue that the
sqrt also needs FMF, but that was not required before, so we should change
that in a follow-up patch if that seems better.

We don't currently have a way to check that the target will produce a sqrt
or recip estimate without actually creating nodes (the APIs are SDValue
getSqrtEstimate() and SDValue getRecipEstimate()), so we clean up
speculatively created nodes if we are not able to create an estimate.
The x86 test with doubles verifies that we are not changing a test with
no estimate sequence.

Differential Revision: https://reviews.llvm.org/D82716
2020-07-06 19:12:21 -04:00
Wouter van Oortmerssen 16d83c395a [WebAssembly] Added 64-bit memory.grow/size/copy/fill
This covers both the existing memory functions as well as the new bulk memory proposal.
Added new test files since changes were also required in the inputs.

Also removes unused init/drop intrinsics rather than trying to make them work for 64-bit.

Differential Revision: https://reviews.llvm.org/D82821
2020-07-06 12:49:50 -07:00
Matt Arsenault c19c153e74 AMDGPU: Don't ignore carry out user when expanding add_co_pseudo
This was resulting in a missing vreg def in the use select
instruction.

The output of the pseudo doesn't make sense, since it really shouldn't
have the vreg output in the first place, and should instead have an
implicit scc def to match the real scalar behavior.

We could have easier to understand tests if we selected scalar
versions of the [us]{add|sub}.with.overflow intrinsics.

This does still end up producing vector code in the end, since it gets
moved later.
2020-07-06 14:28:01 -04:00
Luís Marques 61c2a0bb82 [RISCV] Fold ADDIs into load/stores with nonzero offsets
We can often fold an ADDI into the offset of load/store instructions:

   (load (addi base, off1), off2) -> (load base, off1+off2)
   (store val, (addi base, off1), off2) -> (store val, base, off1+off2)
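
An illustrative IR sketch of the alignment-based case (the global is hypothetical):

  @arr = global [4 x i32] zeroinitializer, align 16

  define i32 @load_elt2() {
    ; the address of @arr is formed with an ADDI of %lo(arr); given the
    ; 16-byte alignment, %lo(arr) + 8 cannot overflow the 12-bit
    ; immediate, so the +8 offset can fold into the load
    %p = getelementptr [4 x i32], [4 x i32]* @arr, i32 0, i32 2
    %v = load i32, i32* %p
    ret i32 %v
  }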

This is possible when off1+off2 still fits in the 12-bit immediate.
We remove the previous restriction where we would never fold the ADDIs if
the load/stores had nonzero offsets. We now do the fold if the resulting
constant still fits a 12-bit immediate, or if off1 is a variable's address
and we know based on that variable's alignment that off1+off2 won't overflow.

Differential Revision: https://reviews.llvm.org/D79690
2020-07-06 17:32:57 +01:00
jasonliu 6d3ae365bd [XCOFF][AIX] Give symbol an internal name when desired symbol name contains invalid character(s)
Summary:

When a desired symbol name contains an invalid character that the
system assembler cannot process, we need to emit the .rename
directive in the assembly path in order for that desired symbol name
to appear in the symbol table.

Reviewed By: hubert.reinterpretcast, DiggerLin, daltenty, Xiangling_L

Differential Revision: https://reviews.llvm.org/D82481
2020-07-06 15:49:15 +00:00
Sanjay Patel dbfcf6eb72 [x86] add tests for vector select with non-splat bit-test condition; NFC
Goes with D83181.
2020-07-06 09:50:47 -04:00
Matt Arsenault a5b9ad7e9a AMDGPU/GlobalISel: Don't emit code for unused kernel arguments 2020-07-06 09:04:06 -04:00
Matt Arsenault 581f1823cd AMDGPU/GlobalISel: Fix hardcoded register number checks in test 2020-07-06 09:01:59 -04:00
Matt Arsenault 7b76a5c8a2 AMDGPU: Fix fixed ABI SGPR arguments
The default constructor wasn't setting isSet on the ArgDescriptor, so
while these had the value set, they were treated as missing. This only
ended up mattering in the indirect call case (and for regular calls in
GlobalISel, which currently doesn't have a way to support the variable
ABI).
2020-07-06 09:01:18 -04:00
Matt Arsenault bcff3deaa1 AMDGPU/GlobalISel: Add some missing return tests 2020-07-06 09:01:18 -04:00
Simon Pilgrim d6c72bdca2 [X86][XOP] Add XOP target vselect-pcmp tests
Noticed in D83181 that XOP can probably do a lot more than other targets due to its vector shifts and vpcmov instructions
2020-07-06 13:58:26 +01:00
Simon Pilgrim c37400f6e7 Regenerate subreg liverange tests. NFC.
To simplify the diffs in a patch in development.
2020-07-06 13:58:25 +01:00
Simon Pilgrim f6bd1bd855 Regenerate neon copy tests. NFC.
To simplify the diffs in a patch in development.
2020-07-06 13:58:25 +01:00
Esme-Yi 0607c8df7f [PowerPC] Legalize SREM/UREM directly on P9.
Summary: As Bugzilla-35090 reported, the rationale for custom lowering of
SREM/UREM should no longer hold. At the IR level, the div-rem-pairs pass
performs the transformation where the remainder is computed from the result
of the division when both are required. We should now be able to lower these
directly on P9. The pass also fixed the problem where the divide is in a
different block than the remainder. This patch removes redundant code and
makes SREM/UREM legal directly on P9.
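
An illustrative IR sketch of a div-rem pair (hypothetical function name); on P9 the srem can now be selected directly rather than expanded via mul+sub:

  define i32 @div_rem(i32 %a, i32 %b) {
    ; div-rem-pairs leaves the srem next to the sdiv
    %d = sdiv i32 %a, %b
    %r = srem i32 %a, %b
    %s = add i32 %d, %r
    ret i32 %s
  }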

Reviewed By: lkail

Differential Revision: https://reviews.llvm.org/D82145
2020-07-06 11:47:31 +00:00
David Green 74ca67c109 [ARM] Remove hasSideEffects from FP converts
Whether an instruction is deemed to have side effects is determined by
whether it has a tblgen pattern that emits a single instruction.
Because of the way a lot of the vcvt instructions are specified,
either in dagtodag code or with patterns that emit multiple
instructions, they don't get marked as not having side effects.

This just marks them as not having side effects manually. It can help
especially with instruction scheduling, to not create artificial
barriers, but one of these tests also managed to produce fewer
instructions.

Differential Revision: https://reviews.llvm.org/D81639
2020-07-05 16:23:24 +01:00
Simon Pilgrim 011d73202c [X86][SSE] Add PACKSS/PACKUS style patterns tests
Similar to the proposed generic code generated by D61129 - there are still some shuffle combining improvements to go before that patch is ready.
2020-07-05 16:18:23 +01:00
Fangrui Song aed6a1b137 Add tests for clang -fno-zero-initialized-in-bss and llc -nozero-initialized-in-bss
And rename the CC1 option.
2020-07-04 23:26:57 -07:00
Craig Topper 120c5f1057 [DAGCombiner] Don't fold zext_vector_inreg/sext_vector_inreg(undef) to undef. Fold to 0.
zext_vector_inreg needs to produce 0s in the extended bits and
sext_vector_inreg needs to produce upper bits that are all the
same. So we should fold them to a 0 vector instead of undef.

Fixes PR46585.
2020-07-04 11:42:53 -07:00
Craig Topper 21d8f66d20 [X86] Add test cases for pr46585. NFC 2020-07-04 11:42:50 -07:00
Craig Topper e652c0f8f3 [X86] Teach lowerShuffleAsBlend to use bit blend for v16i8/v32i8/v16i16 when avx512vl is enabled but not avx512bw.
Probably not super important since there are no real CPUs with
avx512vl and not avx512bw. But vpternlog should be better than
vblendvb.

I do wonder if we should use vpternlog even with BWI. We
currently use vpblendmb or vpblendmw by putting the mask into a GPR
and moving it to a k-register. But I don't think we hoist the
GPR to k-register copy in machine LICM. Using VPTERNLOG would use
a constant pool load, but has the advantage that we're pretty good
at hoisting and rematerializing those.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D83156
2020-07-04 10:26:56 -07:00
Craig Topper b4eb415a99 [X86] Disable VPBLENDVB formation in combineLogicBlendIntoPBLENDV if VPTERNLOG is supported.
VPBLENDVB is multiple uops while VPTERNLOG is a single uop. So
we should use that instead.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D83155
2020-07-04 10:12:19 -07:00
Simon Pilgrim 56a8a5c9fe [DAG] matchBinOpReduction - match subvector reduction patterns beyond a matched shufflevector reduction
Currently matchBinOpReduction only handles shufflevector reduction patterns, but in many cases these only occur in the final stages of a reduction, once we're down to legal vector widths.

Before this, it's likely that we are performing reductions using subvector extractions to repeatedly split the source vector in half and perform the binop on the halves.
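
An illustrative IR sketch of such a chain (hypothetical function name): the first stage splits via subvector extraction, and only the final stages use shuffles:

  define i32 @reduce_add(<8 x i32> %v) {
    ; split the source in half with subvector extractions, add the halves
    %lo = shufflevector <8 x i32> %v, <8 x i32> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
    %hi = shufflevector <8 x i32> %v, <8 x i32> undef, <4 x i32> <i32 4, i32 5, i32 6, i32 7>
    %s0 = add <4 x i32> %lo, %hi
    ; finish with the usual shufflevector reduction stages
    %sh1 = shufflevector <4 x i32> %s0, <4 x i32> undef, <4 x i32> <i32 2, i32 3, i32 undef, i32 undef>
    %s1 = add <4 x i32> %s0, %sh1
    %sh2 = shufflevector <4 x i32> %s1, <4 x i32> undef, <4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
    %s2 = add <4 x i32> %s1, %sh2
    %r = extractelement <4 x i32> %s2, i32 0
    ret i32 %r
  }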

Assuming we've found a non-partial reduction, this patch continues looking for subvector reductions as far as it can beyond the last shufflevector.

Fixes PR37890
2020-07-04 15:28:15 +01:00
Simon Pilgrim 7bfe4102a9 [X86][SSE] Add add/fadd reduction shuffle+subvector tests
Tests based on the PR37890 test cases - the vector combine pass should leave us with a reduction chain ending in extract(add(x,shuffle(x,1,-1,...))), but the higher reduction stages will be subvector extractions not shuffles.
2020-07-04 15:10:09 +01:00
Simon Pilgrim 71f342d6c3 [X86][AVX] Fold PACK(LOSUBVECTOR(SHUFFLE(X)),HISUBVECTOR(SHUFFLE(X))) -> SHUFFLE(PACK(LOSUBVECTOR(X),HISUBVECTOR(X)))
Using PACK for truncations leaves us with intermediate shuffles that can be tricky to remove while the truncation tree is being formed.

This fold helps pull out the PERMQ case which is one of the most common, avoiding some costly lane-crossing shuffles.

A future patch will begin adding more general shuffle folding, which we should be able to use for HADD/HSUB as well.
2020-07-04 13:54:30 +01:00
Paul Walker 7356b4243a [SVE] Fix invalid assert in expand_DestructiveOp.
AArch64ExpandPseudo::expand_DestructiveOp contains an assert to
ensure the destructive operand's register is unique.  However,
this is only required when pseudo expansion emits a movprfx.

A simple example when a movprfx is not required is
  Z0 = FADD_ZPZZ_UNDEF_S P0, Z0, Z0
which expands to an unprefixed FADD_ZPmZ_S instruction.

This patch moves the assert to the places where a movprfx is emitted.

Differential Revision: https://reviews.llvm.org/D83029
2020-07-04 09:21:40 +00:00
Craig Topper fed432523e [X86] Directly emit VPTERNLOG from canonicalizeBitSelect when possible.
Seems to produce better results on some rotate tests. And is
neutral for other tests.
2020-07-03 22:08:28 -07:00
Kai Luo c352e0885a [PowerPC] Implement probing for prologue
This patch is part of supporting `-fstack-clash-protection`. It implements
probing when emitting the prologue.

Differential Revision: https://reviews.llvm.org/D81460
2020-07-04 03:07:08 +00:00
Craig Topper e75f2d5a8c [X86] Add matching support for X86ISD::ANDNP to X86DAGToDAGISel::tryVPTERNLOG. 2020-07-03 17:50:35 -07:00
Thomas Lively 8df30d988e [WebAssembly] Do not omit range checks for i64 switches
Summary:
Since the br_table instruction takes an i32, switches over i64s (and
larger integers) must use the i32.wrap_i64 instruction to truncate the
table index. This truncation makes numbers just over 2^32
indistinguishable from small numbers, so it was a miscompilation to
omit the range check preceding these br_tables. This change fixes the
problem by skipping the "fixing" of the br_table when the range check
is an i64 instruction.
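
An illustrative IR sketch of the affected pattern (hypothetical function name):

  define i32 @lookup(i64 %i) {
  entry:
    ; %i is truncated by i32.wrap_i64 before br_table, so e.g. 2^32
    ; would alias case 0 if the i64 range check were dropped
    switch i64 %i, label %default [
      i64 0, label %case0
      i64 1, label %case1
    ]
  case0:
    ret i32 10
  case1:
    ret i32 20
  default:
    ret i32 -1
  }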

Fixes PR46447.

Reviewers: aheejin, dschuff, kripken

Reviewed By: kripken

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83017
2020-07-03 17:15:39 -07:00
Sanjay Patel 26543f1c0c [x86] improve codegen for bit-masked vector compare and select (PR46531)
We canonicalize patterns like:
  %s = lshr i32 %a0, 1
  %t = trunc i32 %s to i1

to:
  %a = and i32 %a0, 2
  %c = icmp ne i32 %a, 0

...in IR, but the bit-shifting original sequence may be better for x86 vector codegen.
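
An illustrative sketch of the vector form this patch improves (hypothetical function name; splat constants only, per the TODO below):

  define <4 x i32> @bit_select(<4 x i32> %a0, <4 x i32> %x, <4 x i32> %y) {
    ; test bit 1 of each lane and select between %x and %y
    %m = and <4 x i32> %a0, <i32 2, i32 2, i32 2, i32 2>
    %c = icmp ne <4 x i32> %m, zeroinitializer
    %r = select <4 x i1> %c, <4 x i32> %x, <4 x i32> %y
    ret <4 x i32> %r
  }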

I tried several variants of the transform, and it's tricky to not induce regressions.
In particular, I did not find a way to cleanly handle non-splat constants, so I've left
that as a TODO item here (currently negative tests for those are included). AVX512
resulted in some diffs, but didn't look meaningful, so I left that out too. Some of
the 256-bit AVX1 diffs are questionable, but close enough that they are probably
insignificant.

Differential Revision: https://reviews.llvm.org/D83073
2020-07-03 17:31:57 -04:00
Biplob Mishra 0939e04e41 [PowerPC] Implement Vector Insert Builtins in LLVM/Clang
Implements vec_insertl() and vec_inserth().

Differential Revision: https://reviews.llvm.org/D82365
2020-07-03 15:30:41 -05:00
jasonliu 572dde55ee [XCOFF][AIX] Use 'L..' instead of '.L' for getPrivateGlobalPrefix in DataLayout
Summary:
D80831 changed part of the prefix usage for AIX.
But there are other places getting the prefix from DataLayout.
This patch intends to make the prefix usage consistent on AIX.

Reviewed by: hubert.reinterpretcast, daltenty

Differential Revision: https://reviews.llvm.org/D81270
2020-07-03 18:25:14 +00:00
David Green 9e03547cab [ARM][HWLoops] Create hardware loops for sibling loops
Given a loop with two subloops, it should be possible for both to be
converted to hardware loops. That's what this patch does, simply enough.
It slightly alters the loop iteration order to try and convert all
subloops. If one (or more) succeeds, it stops as before.

Differential Revision: https://reviews.llvm.org/D78502
2020-07-03 17:20:02 +01:00
Sean Fertile 484a36b97d Enable basepointer for AIX.
Differential Revision: https://reviews.llvm.org/D82030
2020-07-03 11:55:49 -04:00
Petre-Ionut Tudor af80a4353e [ARM] Generate [SU]RHADD from (b - (~a)) >> 1
Summary:
Teach LLVM to recognize the above pattern, which is usually a
transformation of (a + b + 1) >> 1, where the operands are either
signed or unsigned types.
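
A sketch of the unsigned form with the arithmetic widened so the +1 carry is not lost (hypothetical function name, not taken from the patch):

  define <8 x i8> @urhadd(<8 x i8> %a, <8 x i8> %b) {
    %za = zext <8 x i8> %a to <8 x i16>
    %zb = zext <8 x i8> %b to <8 x i16>
    ; b - (~a) == a + b + 1
    %not = xor <8 x i16> %za, <i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1>
    %sub = sub <8 x i16> %zb, %not
    %shr = lshr <8 x i16> %sub, <i16 1, i16 1, i16 1, i16 1, i16 1, i16 1, i16 1, i16 1>
    %res = trunc <8 x i16> %shr to <8 x i8>
    ret <8 x i8> %res
  }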

Subscribers: kristof.beyls, hiraditya, danielkiss, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82669
2020-07-03 16:00:06 +01:00
vpykhtin bb69ca822a [AMDGPU] Don't combine DPP if DPP register is used more than once per instruction
Reviewers: arsenm, rampitec, foad

Reviewed By: rampitec, foad

Subscribers: wuzish, kzhuravl, nemanjai, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kbarton, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82551
2020-07-03 15:08:26 +03:00
Luke Geeson 8bf99f1e6f [ARM] Add Cortex-A77 Support for Clang and LLVM
This patch upstreams support for the Arm-v8 Cortex-A77
processor for AArch64 and ARM.

In detail:
- Adding cortex-a77 as a cpu option for aarch64 and arm targets in clang
- Cortex-A77 CPU name and ProcessorModel in llvm

details of the CPU can be found here:
https://www.arm.com/products/silicon-ip-cpu/cortex-a/cortex-a77

and a similar submission to GCC can be found here:
e0664b7a63

The following people contributed to this patch:
- Luke Geeson
- Mikhail Maltsev

Reviewers: t.p.northover, dmgreen, ostannard, SjoerdMeijer

Reviewed By: dmgreen

Subscribers: dmgreen, kristof.beyls, hiraditya, danielkiss, cfe-commits,
llvm-commits, miyuki

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D82887
2020-07-03 13:00:54 +01:00
serge-sans-paille c8ef3d5a2f Fix stack-clash probing for large static alloca
Differential Revision: https://reviews.llvm.org/D82867
2020-07-03 09:22:03 +02:00
Kai Luo 03828e38c3 [PowerPC] Implement probing for dynamic stack allocation
This patch is part of supporting `-fstack-clash-protection`. Compared to
the existing `lowerDynamicAlloc`, it mainly does the following:

- Added a new pseudo instruction PPC::PREPARE_PROBED_ALLOC to get
  actual frame pointer and final stack pointer.
- Synthesize a loop to probe by blocks.
- Use DYNAREAOFFSET to get MaxCallFrameSize which is calculated in
  prologepilog.

Differential Revision: https://reviews.llvm.org/D81358
2020-07-03 05:36:40 +00:00
Craig Topper 52855ed099 [X86] Add back support for matching VPTERNLOG from back to back logic ops.
I think this mostly looks ok. The only weird thing I noticed was that
a couple of rotate vXi8 tests picked up an extra logic op where we have

(and (or (and), (andn)), X). Previously we matched the (or (and), (andn))
to vpternlog, but now we match the (and (or), X) and leave the and/andn
unmatched.
2020-07-02 22:11:52 -07:00
Carl Ritson 42ca2070d7 [AMDGPU] Insert PS early exit at end of control flow
Exit early if the exec mask is zero at the end of control flow.
Mark the ends of control flow during control flow lowering and
convert these to exits during the insert skips pass.

Reviewed By: nhaehnle

Differential Revision: https://reviews.llvm.org/D82737
2020-07-03 14:04:34 +09:00
Carl Ritson 7ec6927bad Revert "[AMDGPU] Insert PS early exit at end of control flow"
This reverts commit 2bfcacf0ad.

There appears to be an issue with analysis preservation.
2020-07-03 13:03:33 +09:00
Carl Ritson 2bfcacf0ad [AMDGPU] Insert PS early exit at end of control flow
Exit early if the exec mask is zero at the end of control flow.
Mark the ends of control flow during control flow lowering and
convert these to exits during the insert skips pass.

Reviewed By: nhaehnle

Differential Revision: https://reviews.llvm.org/D82737
2020-07-03 12:26:28 +09:00
Carl Ritson a3daa3f75a [AMDGPU] Unify early PS termination blocks
Generate a single early exit block out-of-line and branch to this
if all lanes are killed. This avoids branching if lanes are active.

Reviewed By: nhaehnle

Differential Revision: https://reviews.llvm.org/D82641
2020-07-03 09:58:05 +09:00
Craig Topper acf6c94a38 [X86] Teach lower512BitShuffle to try bitmask and bitblend before splitting v32i16/v64i8 on avx512f only targets.
We consider v32i16/v64i8 to be legal types on avx512f, but we
don't have most operations until avx512bw. But we can use
and/or/xor operations. So try those before splitting.

This is especially helpful since we turn some ands with constant
masks into shuffles in early DAG combines. So we should make sure
we recover those back to AND.
2020-07-02 15:35:48 -07:00
Biplob Mishra ca464639a1 [PowerPC] Implement Vector Blend Builtins in LLVM/Clang
Implements vec_blendv()

Differential Revision: https://reviews.llvm.org/D82774
2020-07-02 16:52:52 -05:00
Sanjay Patel 4585e3509c [x86] remove redundant tests with no check lines; NFC
These were accidentally included with:
rGb93e6650c8ac
2020-07-02 17:45:57 -04:00
Sanjay Patel bc110de78a [SelectionDAG] don't split branch on logic-of-vector-compares
SelectionDAGBuilder converts logic-of-compares into multiple branches based
on a boolean TLI setting in isJumpExpensive(). But that probably never
considered the pattern of extracted bools from a vector compare - it seems
unlikely that we would want to turn vector logic into control-flow.
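
An illustrative IR sketch of the pattern (hypothetical function name): bools extracted from one vector compare, and'ed and branched on:

  define i32 @both(<4 x i32> %a, <4 x i32> %b) {
  entry:
    %cmp = icmp sgt <4 x i32> %a, %b
    %e0 = extractelement <4 x i1> %cmp, i32 0
    %e1 = extractelement <4 x i1> %cmp, i32 1
    ; keep this as logic feeding one branch rather than two branches
    %c = and i1 %e0, %e1
    br i1 %c, label %yes, label %no
  yes:
    ret i32 1
  no:
    ret i32 0
  }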

The motivating x86 reduction case is shown in PR44565:
https://bugs.llvm.org/show_bug.cgi?id=44565
...and that test shows the expected improvement from using pmovmsk codegen.

For AArch64, I modified the test to include an extra op because the simpler
test gets transformed by a codegen invocation of SimplifyCFG.

Differential Revision: https://reviews.llvm.org/D82602
2020-07-02 17:05:24 -04:00
Craig Topper 912cd8a37f [X86] Add vpternlog to the broadcast unfolding table. 2020-07-02 13:43:44 -07:00
Craig Topper e87a95b5c2 [X86] Add test case for unfolding broadcast load from vpternlog. 2020-07-02 13:43:43 -07:00
Sanjay Patel b93e6650c8 [x86] add tests for vector select with bit-test condition; NFC 2020-07-02 16:10:08 -04:00
Craig Topper 204a21317a [X86] Modify the conditions for when we stop making v16i8/v32i8 rotate Custom based on having avx512 features.
The comments here indicate that we prefer to promote the shifts
instead of allowing rotate to be pattern matched. But we weren't
taking into account whether 512-bit registers are enabled or
whether we have vpsllvw/vpsrlvw instructions.

splatvar_rotate_v32i8 is a slight regression, but the other cases
are neutral or improved.
2020-07-02 13:07:51 -07:00
Craig Topper cdf84c7b6b [X86] Add test cases for v32i8 rotate with min-legal-vector-width=256
We currently don't mark ROTL as custom when avx512bw is enabled
under the assumption we'll be able to promote the shifts in the
rotate idiom. But if we don't have 512-bit registers enabled we
can't promote.
2020-07-02 13:07:50 -07:00
Biplob Mishra 286073484f [PowerPC]Implement Vector Permute Extended Builtin
Implements vector permute builtin: vec_permx()

Differential Revision: https://reviews.llvm.org/D82869
2020-07-02 14:53:18 -05:00
Nemanja Ivanovic a701dc5510 [PowerPC] Remove undefs from splat input when changing shuffle mask
As of 1fed131660, we have code that
changes shuffle masks so that we can put the shuffle in a canonical
form that can be matched to a single instruction. However, it
does not properly account for undef elements in the BUILD_VECTOR
that is the RHS splat so we can end up with undefs where they
shouldn't be. This patch converts the splat input with undefs to
one without.
2020-07-02 12:26:56 -05:00
Dmitry Preobrazhensky 1c9d681092 [AMDGPU][CODEGEN] Added support of new inline assembler constraints
Added support for constraints 'I', 'J', 'B', 'C', 'DA', 'DB'.

See https://gcc.gnu.org/onlinedocs/gcc/Machine-Constraints.html#Machine-Constraints.

Reviewers: arsenm, rampitec

Differential Revision: https://reviews.llvm.org/D81651
2020-07-02 17:20:15 +03:00
Sander de Smalen 075c440f7b [AArch64][SVE] Put zeroing pseudos and patterns under flag.
This patch puts the _ZERO pseudos and corresponding patterns
under the predicate 'UseExperimentalZeroingPseudos', so that they
can be enabled/disabled through compile flags.

This is done because the zeroing pseudos use MOVPRFX to do merging of
the inactive lanes, but it depends on the uarch whether this operation
is actually merged with the destructive operation. If not, it may be
more profitable to use a SELECT and to give the compiler the freedom to
schedule these instructions as normal, rather than keeping them bundled
together. Additionally, this feature is not yet fully implemented and
there are still known bugs (see D80410) that need to be resolved before
the 'experimental' can be dropped from the name.

Reviewers: paulwalker-arm, cameron.mcinally, efriedma

Reviewed By: paulwalker-arm

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82780
2020-07-02 14:24:33 +01:00
Kerry McLaughlin fd6193d5ea [AArch64][SVE] Add reg+imm addressing mode for unpredicated stores
Reviewers: sdesmalen, efriedma, david-arm

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, danielkiss, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82985
2020-07-02 12:00:01 +01:00
Roman Lebedev 58a56ef4e7 Regenerate llvm/test/CodeGen/X86/optimize-max-0.ll
It surprisingly appears to be affected by the last SCEV patch
2020-07-02 13:35:30 +03:00
David Sherwood 00f5921609 [SVE] Add warnings checks in four more LLVM SVE tests
I have added CHECK lines to the following tests:

  llvm/test/CodeGen/AArch64/sve-breakdown-scalable-vectortype.ll
  llvm/test/CodeGen/AArch64/sve-calling-convention-tuple-types.ll
  llvm/test/CodeGen/AArch64/sve-intrinsics-create-tuple.ll
  llvm/test/CodeGen/AArch64/sve-intrinsics-loads.ll

since they are now free of warnings related to invalid use of
EVT::getVectorNumElements() and VectorType::getNumElements().

Differential Revision: https://reviews.llvm.org/D82957
2020-07-02 10:43:17 +01:00
Jay Foad 6f1694759c [AMDGPU] Fix formatting in MIR tests 2020-07-02 10:27:34 +01:00
Sander de Smalen 143e324e75 [CodeGen][SVE] Don't drop scalable flag in DAGCombiner::visitEXTRACT_SUBVECTOR
There was a rogue 'assert' in AArch64ISelLowering for the tuple.get intrinsics,
that shouldn't really have been there (I suspect this was a remnant from when
we expected the wider vector always to have come from a vector CONCAT).

When I tried to create a more minimal reproducer, I found a bug in
DAGCombiner where it drops the scalable flag when trying to fold:

      extract_subv (bitcast X), Index --> bitcast (extract_subv X, Index')

This patch fixes both issues.

Reviewers: david-arm, efriedma, spatel

Reviewed By: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82910
2020-07-02 10:16:43 +01:00
Sander de Smalen 07bda98b6a [AArch64][SVE] Add unpred load/store patterns for bf16 types
Reviewers: kmclaughlin, c-rhodes, efriedma

Reviewed By: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82909
2020-07-02 10:01:24 +01:00
Qiu Chaofan aa4fd7d848 [NFC] Fix typo in triples from unkown to unknown 2020-07-02 16:21:54 +08:00
Nicholas Guy dc8e4d8566 [ARM] Rearrange SizeReduction when using -Oz
Move the Thumb2SizeReduce pass to before IfConversion when optimising
for minimal code size.

Running the Thumb2SizeReduction pass before IfConversion allows T1
instructions to propagate to the final output, rather than the
IfConverter modifying T2 instructions and preventing them from being
reduced later.

This change does introduce a regression regarding execution time, so
it's only applied when optimising for size.

Running the LLVM Test Suite with this change produces a geomean
difference of -0.1% for the size..text metric.

Differential Revision: https://reviews.llvm.org/D82439
2020-07-02 09:19:38 +01:00
Pushpinder Singh e1a31f52cd [AMDGPU] Control num waves per EU for implicit work-group size
Summary:
If amdgpu-flat-work-group-size is not specified in LLVM IR, the backend
uses a default value of 1024. For this, the minimum waves per EU should be 4.
However, the backend was still setting the minimum value to 1 instead of the
calculated value. This is not normally observed as the frontend always
provides the amdgpu-flat-work-group-size attribute.

Reviewers: rampitec, b-sumner, sameerds, msearles

Reviewed By: rampitec

Subscribers: qcolombet, arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D81991
2020-07-01 22:53:52 -04:00
Biplob Mishra 88874f0746 [PowerPC]Implement Vector Shift Double Bit Immediate Builtins
Implement Vector Shift Double Bit Immediate Builtins in LLVM/Clang.
  * vec_sldb ();
  * vec_srdb ();

Differential Revision: https://reviews.llvm.org/D82440
2020-07-01 20:34:53 -05:00
Xiang1 Zhang aded4f0cc0 [X86-64] Support Intel AMX instructions
Summary:
INTEL ADVANCED MATRIX EXTENSIONS (AMX).
AMX is a new programming paradigm: it has a set of 2-dimensional registers
(TILES) representing sub-arrays from a larger 2-dimensional memory image,
and operates on TILES.

Spec can be found in Chapter 3 here https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html

Reviewers: LuoYuanke, annita.zhang, pengfei, RKSimon, xiangzhangllvm

Reviewed By: xiangzhangllvm

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82705
2020-07-02 08:57:04 +08:00
Anil Mahmud c5b4f03b53 [PowerPC] Exploit xxspltiw and xxspltidp instructions
Exploits the VSX Vector Splat Immediate Word and
VSX Vector Splat Immediate Double Precision instructions:

  xxspltiw XT,IMM32
  xxspltidp XT,IMM32

Differential Revision: https://reviews.llvm.org/D82911
2020-07-01 19:18:29 -05:00
Matt Arsenault d2e74fad20 AMDGPU: Set more mov flags on V_ACCVGPR_{READ|WRITE}_B32
This fixes extra copies when materializing constants in AGPRs. This
made it a lot harder to trigger the spilling in spill-agpr.ll
2020-07-01 18:58:59 -04:00
Matt Arsenault a230f1db3f AMDGPU: Fix missing tracksRegLiveness in tests
I have no idea why this is considered optional, or why it's not the
default. Also add uses of the copied registers for more useful
liveness testing.
2020-07-01 18:58:59 -04:00
Stanislav Mekhanoshin 54e2dc7537 [AMDGPU] Limit promote alloca to vector with VGPR budget
Allow only up to 1/4 of available VGPRs for the vectorization
of any given alloca.

Differential Revision: https://reviews.llvm.org/D82990
2020-07-01 15:57:24 -07:00
Ben Shi 003a086ffc [RISCV][NFC] Pre-commit tests for D82660 2020-07-01 22:46:32 +01:00
Craig Topper 361853c96f [LegalizeTypes] Properly handle the case when UpdateNodeOperands in PromoteIntOp_MLOAD triggers CSE instead of updating the node in place.
The caller can't handle the node having multiple results like a
masked load does. So we need to detect the case and do our own
result replacement.

Fixes PR46532.
2020-07-01 11:48:50 -07:00
Matt Arsenault ba3bafe46a AMDGPU: Convert AGPR copy test to generated checks 2020-07-01 13:59:13 -04:00
Matt Arsenault 14fe4607f1 AMDGPU: Support commuting register and global operand 2020-07-01 13:59:13 -04:00
Matt Arsenault a21544ad11 AMDGPU: Fix handling of target flags when commuting instruction
If the original register operand had a subregister, it wasn't getting
cleared. This resulted in reinterpreted the subreg index as
unrecognized target flags, which produced unparseable MIR.
2020-07-01 13:59:13 -04:00
Matt Arsenault 16ea23ff78 AMDGPU: Clear subreg when folding immediate copies
This was getting reinterpreted as operand target flags, and appearing
as <unknown target flag>, resulting in unparseable MIR.
2020-07-01 13:59:13 -04:00
David Sherwood f11305780f [CodeGen] Fix warnings in DAGCombiner::visitSCALAR_TO_VECTOR
In visitSCALAR_TO_VECTOR we try to optimise cases such as:

  scalar_to_vector (extract_vector_elt %x)

into vector shuffles of %x. However, it led to numerous warnings
when %x is a scalable vector type, so for now I've changed the
code to only perform the combination on fixed length vectors.
Although we probably could change the code to work with scalable
vectors in certain cases, without a proper profit analysis it
doesn't seem worth it at the moment.

This change fixes up one of the warnings in:

  llvm/test/CodeGen/AArch64/sve-merging-stores.ll

I've also added a simplified version of the same test to:

  llvm/test/CodeGen/AArch64/sve-fp.ll

which already has checks for no warnings.

Differential Revision: https://reviews.llvm.org/D82872
2020-07-01 18:47:13 +01:00
Yonghong Song 3eacfdc72f [BPF] Fix a BTF gen bug related to a pointer struct member
Currently, BTF generation stops at pointer struct members
if the pointee type is a struct. This is to avoid bloating
generated BTF size. The following is the process to
correctly record types for these pointee struct types.
  - During type traversal stage, when a struct member, which
    is a pointer to another struct, is encountered,
    the pointee struct type, keyed with its name, is
    remembered in a Fixup map.
  - Later, when all type traversal is done, the Fixup map
    is scanned, based on struct name matching, to either
    resolve as pointing to a real already generated type
    or as a forward declaration.

Andrii discovered a bug when the struct member pointee struct
is anonymous. In this case, a struct with an empty name is
recorded in the Fixup map, and later another anonymous
struct with an empty name happens to be defined in BTF, so
wrong type resolution happens.

To fix the problem, if the struct member pointee struct
is anonymous, the pointee struct type will be generated
instead of being put in the Fixup map.

Differential Revision: https://reviews.llvm.org/D82976
2020-07-01 09:55:01 -07:00
James Y Knight 4b0aa5724f Change the INLINEASM_BR MachineInstr to be a non-terminating instruction.
Before this instruction supported output values, it fit fairly
naturally as a terminator. However, being a terminator while also
supporting outputs causes some trouble, as the physreg->vreg COPY
operations cannot be in the same block.

Modeling it as a non-terminator allows it to be handled the same way
as invoke is handled already.

Most of the changes here were created by auditing all the existing
users of MachineBasicBlock::isEHPad() and
MachineBasicBlock::hasEHPadSuccessor(), and adding calls to
isInlineAsmBrIndirectTarget or mayHaveInlineAsmBr, as appropriate.

Reviewed By: nickdesaulniers, void

Differential Revision: https://reviews.llvm.org/D79794
2020-07-01 12:51:50 -04:00
David Green ca4c1ad854 [Outliner] Set nounwind for outlined functions
This prevents the outlined functions from pulling in a lot of unnecessary
code in our downstream libraries/linker, which stops outlining from making
codesize worse in C++ code with no exceptions.

Differential Revision: https://reviews.llvm.org/D57254
2020-07-01 17:18:34 +01:00
Luís Marques b2aa546b07 [RISCV] Temporarily move riscv-expand-pseudo pass to PreEmitPass2
The pass to split atomic and non-atomic RISC-V pseudo-instructions was itself
split into two passes in D79635 / commit rG2cb0644f90b7, with the splitting of
non-atomic instructions being moved to the PreSched2 phase. A comment was
added to D79635 detailing a case where this caused problems, so this commit
moves the non-atomic split pass back to the PreEmitPass2 phase. This allows
the bulk of the changes from D79635 to remain committed, while addressing
the reported problem (the pass split is now almost NFC). Once the root problem
is fixed we can move the (non-atomic) instruction splitting pass back to
earlier in the pipeline.
2020-07-01 16:26:02 +01:00
Kazushi (Jam) Marukawa 1952055892 [VE] Support symbol with offset value
Summary: Support symbol with offset value as a VEMCExpr.

Reviewers: simoll, k-ishizaka

Reviewed By: simoll

Subscribers: hiraditya, llvm-commits

Tags: #llvm, #ve

Differential Revision: https://reviews.llvm.org/D82734
2020-07-01 23:55:27 +09:00
Stefan Pintilie b294e00fb0 [PowerPC] Fix for PC Relative call protocol
The situation where the caller uses a TOC and the callee does not
but is marked as clobbering the TOC (st_other=1) was not being compiled
correctly if both functions were in the same object file.

The call site where we had `callee` was missing a nop after the call.
This is because it was assumed that since the two functions were in
the same DSO they would be sharing a TOC. This is not the case if the
callee uses PC Relative because in that case it may clobber the TOC.
This patch makes sure that we add the nop correctly so that the
linker has a place to restore the TOC.

Reviewers: sfertile, NeHuang, saghir

Differential Revision: https://reviews.llvm.org/D81126
2020-07-01 07:08:41 -05:00
Simon Pilgrim b485586482 [X86][SSE] Fix targetShrinkDemandedConstant constant vector sign extensions
D82257/rG3521ecf1f8a3 was incorrectly sign-extending a constant vector from the lsb. This is fine if all the constant elements are 'allsignbits' in the active bits, but if only some of the elements are, then we are corrupting the constant values for those elements.

This fix ensures we sign extend from the msb of the active/demanded bits instead.
2020-07-01 12:12:53 +01:00
Simon Pilgrim 93707fe309 [X86][SSE] Add test showing incorrect sign-extension by targetShrinkDemandedConstant 2020-07-01 12:01:19 +01:00
Sam Elliott 7dc892661e [RISCV] Implement Hooks to avoid chaining SELECT
Summary:
This implements two hooks that attempt to avoid control flow for RISC-V. RISC-V
will lower SELECTs into control flow, which is not a great idea.

The hook `hasMultipleConditionRegisters()` turns off the following
DAGCombiner folds:
    select(C0|C1, x, y) <=> select(C0, x, select(C1, x, y))
    select(C0&C1, x, y) <=> select(C0, select(C1, x, y), y)

The second hook `setJumpIsExpensive` controls a flag that has a similar purpose
and is used in CodeGenPrepare and the SelectionDAGBuilder.

Both of these have the effect of ensuring more logic is done before fewer jumps.
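
An illustrative IR sketch of an input affected by the first hook (hypothetical function name):

  define i32 @sel(i1 %c0, i1 %c1, i32 %x, i32 %y) {
    ; with hasMultipleConditionRegisters() returning true, the or is kept
    ; as logic feeding one select instead of a chain of two selects
    %c = or i1 %c0, %c1
    %r = select i1 %c, i32 %x, i32 %y
    ret i32 %r
  }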

Note: with the `B` extension, we may be able to lower select into a conditional
move instruction, so at some point these hooks will need to be guarded based on
enabled extensions.

Reviewed By: luismarques

Differential Revision: https://reviews.llvm.org/D79268
2020-07-01 11:56:31 +01:00
Sam Elliott c44266dc48 [RISCV][NFC] Add Test for (select (or B1, B2), X, Y)
Summary:
As shown, LLVM is keen to avoid logic and introduce selects (in DAGCombiner, and
other places). This leads to control flow on RISC-V which we should attempt to
avoid.

Reviewed By: luismarques

Differential Revision: https://reviews.llvm.org/D79267
2020-07-01 11:56:11 +01:00
Petar Avramovic 4b9ae1b7e5 AMDGPU/GlobalISel: Select init_exec intrinsic
Replace imm with timm in the pattern for SI_INIT_EXEC_LO and
remove regbank mappings for non-register operands.

Differential Revision: https://reviews.llvm.org/D82885
2020-07-01 11:50:59 +02:00
Kerry McLaughlin 4c6683eafc [AArch64][SVE] Add reg+imm addressing mode for unpredicated loads
Reviewers: efriedma, sdesmalen, david-arm

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, danielkiss, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82893
2020-07-01 10:33:56 +01:00
Sam Parker f0ecfb789b [NFC][ARM] Add test. 2020-07-01 09:28:56 +01:00
Paul Walker a1aed80a35 [SVE] Relax merge requirement for IR based divides.
We currently lower SDIV to SDIV_MERGE_OP1. This forces the value
for inactive lanes in a way that can hamper register allocation;
however, the lowering has no requirement for inactive lanes.

Instead this patch replaces SDIV_MERGE_OP1 with SDIV_PRED thus
freeing the register allocator. Once done the only user of
SDIV_MERGE_OP1 is intrinsic lowering so I've removed the node
and perform ISel on the intrinsic directly. This also allows
us to implement MOVPRFX based zeroing in the same manner as SUB.

This patch also renames UDIV_MERGE_OP1 and [F]ADD_MERGE_OP1 for
the same reason but in the ADD cases the ISel code is already
as required.

Differential Revision: https://reviews.llvm.org/D82783
2020-07-01 08:18:42 +00:00
Saiyedul Islam 9182316395 [AMDGPU] Spill more than wavesize CSR SGPRs
In the case of more than wavesize CSR SGPR spills, lanes of the reserved
VGPR were getting overwritten due to wrap-around.

Reserve a VGPR (when NumVGPRSpillLanes = 0, WaveSize, 2*WaveSize, ..) and when one
of the two conditions is true:
 1. One reserved VGPR being tracked by VGPRReservedForSGPRSpill is not yet reserved.
 2. All spill lanes of reserved VGPR(s) are full and another spill lane is required.

Reviewed By: arsenm, kerbowa

Differential Revision: https://reviews.llvm.org/D82463
2020-07-01 07:40:47 +00:00