Commit Graph

33568 Commits

Author SHA1 Message Date
Stanislav Mekhanoshin 8a9d48b46d [AMDGPU] Fixed lane mask in test. NFC. 2020-04-15 15:26:53 -07:00
Chris Bowler bee6c234ed [AIX][PowerPC] Implement caller byval arguments in stack memory
Differential Revision: https://reviews.llvm.org/D77578
2020-04-15 17:57:31 -04:00
Francesco Petrogalli 89680f25e8 [llvm][CodeGen] Rename SVE gather prefetch intrinsics. [NFC]
Summary:
The renaming is necessary to make the naming scheme uniform with the
other gather/scatter load/store SVE intrinsics.

The naming of variables and functions has been adapted to make it
explicit whether we are dealing with a scalar offset (which is
unscaled) or an index (which is scaled according to the data type of
the lanes of the vector).
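As a rough sketch of the distinction (plain LLVM IR for illustration; these
are not the SVE intrinsics themselves): an offset is applied to the base
address in bytes, while an index is first scaled by the element size.

```
; Illustrative only: byte offset vs. scaled index.
define i32* @by_offset(i8* %base, i64 %offset) {
  %p = getelementptr i8, i8* %base, i64 %offset     ; base + offset (unscaled)
  %r = bitcast i8* %p to i32*
  ret i32* %r
}

define i32* @by_index(i32* %base, i64 %index) {
  %p = getelementptr i32, i32* %base, i64 %index    ; base + index * 4 (scaled)
  ret i32* %p
}
```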

Reviewers: andwar, sdesmalen, rengolin

Reviewed By: andwar

Subscribers: tschuett, hiraditya, arphaman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77839
2020-04-15 21:49:16 +01:00
Eli Friedman 7c10541e56 [SelectionDAG] Fix usage of Align constructing MachineMemOperands.
The "Align" passed into getMachineMemOperand etc. is the alignment of
the MachinePointerInfo, not the alignment of the memory operation.
(getAlign() on a MachineMemOperand automatically reduces the alignment
to account for this.)
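A small worked example of the distinction (numbers illustrative): for a
MachineMemOperand whose MachinePointerInfo has base alignment 16 and
offset 4, the access itself is only known to be 4-aligned.

```
; basealign = 16, offset = 4
; getBaseAlign() == Align(16)
; getAlign()     == commonAlignment(Align(16), 4) == Align(4)
```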

We were passing a wrong (overly conservative) alignment in a bunch of
places. Fix a bunch of these, mostly in legalization. And while I'm
here, switch to the new Align APIs.

The test changes are all scheduling changes: the biggest effect of
preserving large alignments is that it improves alias analysis, so the
scheduler has more freedom.

(I was originally just trying to do a minor cleanup in
SelectionDAGBuilder, but I accidentally went deeper down the rabbit
hole.)

Differential Revision: https://reviews.llvm.org/D77687
2020-04-15 13:01:41 -07:00
Pavel Iliin b2dff0dbea [AArch64][NFC] Autogenerated checks. 2020-04-15 20:25:00 +01:00
Craig Topper 8dfb9627b7 [X86] Make v32i16/v64i8 legal types without avx512bw. Use custom splitting instead.
This moves v32i16/v64i8 to a model consistent with how we
treat integer types with avx1.

This does change the ABI for vXi16/vXi8 vector types larger than
512 bits: they now pass in multiple zmms instead of multiple ymms. We'd
already hacked some code to make v64i8/v32i16 pass in zmm.
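A minimal sketch of an affected signature (illustrative): with avx512f but
without avx512bw, a 1024-bit value like this is now passed in two zmms
instead of four ymms.

```
; Illustrative: <64 x i16> is 1024 bits wide.
define <64 x i16> @pass_v64i16(<64 x i16> %v) {
  ret <64 x i16> %v
}
```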

The cost model is still a bit of a mess. In some places I tried to
match existing behavior. But really we need to account for
splitting and concatenating costs. The cost model for shuffles is
especially pessimistic.

Differential Revision: https://reviews.llvm.org/D76212
2020-04-15 12:17:18 -07:00
Simon Pilgrim 2bcbf1319e [X86] Add generic cpu target for the slow division tests
Baseline for any change due to D75567
2020-04-15 19:38:29 +01:00
Sterling Augustine bf94c96007 Write ignored output to stdout, so this test runs on read-only filesystems. 2020-04-15 10:45:14 -07:00
Amara Emerson c22cb5bd31 [GlobalISel] Enable artifact combiner to combine starting from a G_MERGE_VALUES.
We generally only combine starting from users to defs in the artifact combiner,
but this doesn't catch cases where, at the point of combining a G_UNMERGE, we
don't have the opposite G_MERGE on the input yet because we haven't legalized
that far.

This change adds the users of a G_MERGE to the artifact combiner worklist if one
of the uses is a G_UNMERGE or G_TRUNC.
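A minimal generic-MIR sketch of the pair this change lets the combiner catch
(types illustrative):

```
%2:_(s64) = G_MERGE_VALUES %0(s32), %1(s32)
%3:_(s32), %4:_(s32) = G_UNMERGE_VALUES %2(s64)
; once both are visible, the pair folds away: %3 = %0, %4 = %1
```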

Differential Revision: https://reviews.llvm.org/D77931
2020-04-15 10:34:13 -07:00
Dominik Montada 1265899c5f Revert "[GlobalISel] Fix invalid combine of unmerge(merge) with intermediate cast"
This reverts commit bddac41b9f.
2020-04-15 18:47:39 +02:00
Dominik Montada bddac41b9f [GlobalISel] Fix invalid combine of unmerge(merge) with intermediate cast
Summary:
The combine for unmerge(cast(merge)) is only valid for vectors, but was
missing a corresponding check. Add a check that the operands are vectors
to avoid an invalid combine.

Without this check, the combiner would emit incorrect code for scalars
and pointers because the artifact cast (trunc/ext) only affects bits at
the end of the type, while this combine assumes that the casted bits
appear between meaningful bits.
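A sketch of the scalar case that went wrong (types illustrative):

```
%2:_(s64) = G_MERGE_VALUES %0(s32), %1(s32)
%3:_(s32) = G_TRUNC %2(s64)
; The G_TRUNC keeps only the low 32 bits, i.e. %0. A combine that rewrites
; users of %3 in terms of both %0 and %1 is therefore wrong for scalars.
```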

This also uncovered a segmentation fault in the AMDGPU
InstructionSelector. The tests triggering this bug have been moved to
their own file and a check for the segmentation fault has been added.

Reviewers: arsenm, dsanders, aemerson, paquette, aditya_nandakumar

Reviewed By: arsenm

Subscribers: tpr, jvesely, wdng, nhaehnle, rovka, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78191
2020-04-15 17:19:14 +02:00
Dominik Montada 443c244cff [GlobalISel] translate freeze to new generic G_FREEZE
Summary:
As a follow up to https://reviews.llvm.org/D29014, add translation
support for freeze.

Introduce a new generic instruction G_FREEZE and translate freeze to it.
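A sketch of the translation (illustrative):

```
; LLVM IR:
;   %y = freeze i32 %x
; generic MIR emitted by the IRTranslator:
;   %1:_(s32) = G_FREEZE %0
```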

Reviewers: dsanders, aqjune, arsenm, aditya_nandakumar, t.p.northover, lebedev.ri, paquette, aemerson

Reviewed By: aqjune, arsenm

Subscribers: fhahn, lebedev.ri, wdng, rovka, hiraditya, jfb, volkan, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77795
2020-04-15 16:47:05 +02:00
jasonliu c3c67e9531 [XCOFF][AIX] Relocation support for SymB
This patch intends to provide relocation support for expressions that
contain two unpaired relocatable terms with opposite signs (e.g. a
symbol-difference expression of the form `a - b`).

Differential Revision: https://reviews.llvm.org/D77424
2020-04-15 14:03:54 +00:00
Victor Campos d85b3877dc [CodeGen][ARM] Error when writing to specific reserved registers in inline asm
Summary:
No error or warning is emitted when specific reserved registers are
written to in inline assembly. Therefore, writes to the program counter
or to the frame pointer, for instance, were permitted, which could have
led to undesirable behaviour.

Example:
  int foo() {
    register int a __asm__("r7"); // r7 = frame-pointer in M-class ARM
    __asm__ __volatile__("mov %0, r1" : "=r"(a) : : );
    return a;
  }

In contrast, GCC issues an error in the same scenario.

This patch detects writes to specific reserved registers in inline
assembly for ARM and emits an error in such cases. The detection works
for output and input operands. Clobber operands are not handled here:
they are already covered at a later point in
AsmPrinter::emitInlineAsm(const MachineInstr *MI). The registers
covered are: program counter, frame pointer and base pointer.

This is ARM-only. Implementing the equivalent checks for other
targets remains to be done.

Reviewers: efriedma

Reviewed By: efriedma

Subscribers: kristof.beyls, hiraditya, danielkiss, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76848
2020-04-15 14:40:42 +01:00
Matt Arsenault ef2cb8db34 AMDGPU/GlobalISel: Add some artifact combiner tests 2020-04-15 09:03:07 -04:00
Jonas Paulsson 036242b868 [SystemZ] Bugfix in adjustSubwordCmp()
adjustSubwordCmp() should not optimize a load of an i1 value. This is
achieved by checking that the size and store-size of the MemoryVT are the
same.
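A sketch of the kind of input that must be left alone (illustrative): for i1,
the size (1 bit) differs from the store size (8 bits), so the new check
rejects it.

```
define i32 @load_i1_cmp(i1* %p) {
  %v = load i1, i1* %p          ; loads a full byte, but only bit 0 is defined
  %c = icmp eq i1 %v, true
  %r = zext i1 %c to i32
  ret i32 %r
}
```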

Fixes https://bugs.llvm.org/show_bug.cgi?id=45511.

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D78187
2020-04-15 12:58:39 +02:00
Kazushi (Jam) Marukawa 7a7f223042 [VE] Update integer arithmetic instructions
Summary:
Change all mnemonics to match the assembly instructions, to simplify the
mnemonic naming rules. This time, update all fixed-point arithmetic
instructions. This also corrects smax/smin code generation.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D77856
2020-04-15 09:47:51 +02:00
Matt Arsenault cc149172da AMDGPU/GlobalISel: Fix selection of scalar f64 G_FABS
This wasn't covered by existing tablegen patterns, and it also suffers
the same issues as G_FNEG. Work around them by manually selecting, as
for G_FNEG.
2020-04-14 22:05:22 -04:00
Matt Arsenault cb5dc3765b TableGen/GlobalISel: Fix constraining REG_SEQUENCE operands
This was hitting the default instruction constraint code which uses
the register classes in the instruction def, which REG_SEQUENCE does
not have.

Fixes not constraining the register class for AMDGPU fneg/fabs
patterns, which would fail when the use was another generic,
unconstrained instruction.

Another oddity I noticed is that the temporary registers are created
with an unnecessary (and incorrect) 16-bit LLT, but this shouldn't
matter.

I'm also still unclear why root and sub-instructions have to be
handled differently.
2020-04-14 22:05:22 -04:00
Hubert Tong cda006cbc7 [test][NFC] Use plain FileCheck in statepoint-stackmap-size.ll
Summary:
The test in question uses a non-portable `grep -A` option in conjunction
with `wc -l`. `FileCheck` can be used to do the check without using
these extra utilities.

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D78060
2020-04-14 20:53:41 -04:00
Eli Friedman 2876b3eef3 [SelectionDAG] Always preserve offset in MachinePointerInfo
Previously, getWithOffset() would drop the offset if the base was null.
Because of this, MachineMemOperand would return the wrong result from
getAlign() in these cases.  MachineMemOperand stores the alignment of
the pointer without the offset.

A bunch of MIR tests changed because we print the offset now.

Split off from D77687.

Differential Revision: https://reviews.llvm.org/D78049
2020-04-14 15:29:41 -07:00
Nemanja Ivanovic 42cd6bd0db [PowerPC][NFC] Remove spurious incorrect CHECKNEXT directive from test
The directive was a typo from when I first wrote the test case; I then
decided to use the script, and the script didn't remove the line with
the typo.
2020-04-14 10:14:02 -05:00
Pierre-vh 13eb890139 [Target][ARM] Fix VPT Block Pass miscompilation
The pass was incorrectly reverting back to a "T" when something wrote
to VPR inside an "E" block. This is not the correct behaviour; the
predicate should stay the same.

Differential Revision: https://reviews.llvm.org/D77798
2020-04-14 15:16:27 +01:00
Pierre-vh 4563024356 [Target][ARM] Adding MVE VPT Optimisation Pass
Differential Revision: https://reviews.llvm.org/D76709
2020-04-14 15:16:27 +01:00
Kerry McLaughlin 36c76de678 [AArch64][SVE] Add a pass for SVE intrinsic optimisations
Summary:
Creates the SVEIntrinsicOpts pass. In this patch, the pass tries
to remove unnecessary reinterpret intrinsics which convert to
and from svbool_t (llvm.aarch64.sve.convert.[to|from].svbool).

For example, the reinterprets below are redundant:

  %1 = call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> %a)
  %2 = call <vscale x 4 x i1> @llvm.aarch64.sve.convert.from.svbool.nxv4i1(<vscale x 16 x i1> %1)
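  ; here %2 == %a, so both converts are redundant and can be removed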

The pass also looks for ptest intrinsics and phi instructions where
the operands are being needlessly converted to and from svbool_t.

Reviewers: sdesmalen, andwar, efriedma, cameron.mcinally, c-rhodes, rengolin

Reviewed By: efriedma

Subscribers: mgorny, tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, danielkiss, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76078
2020-04-14 10:41:49 +01:00
Matt Arsenault f48fe2c36e GlobalISel: Fix casted unmerge of G_CONCAT_VECTORS
This was assuming a scalarizing unmerge, and would fail an assert if
the unmerge was to smaller vector types.
2020-04-13 22:03:05 -04:00
Matt Arsenault 0ba40d4ccf AMDGPU/GlobalISel: Combines for V_CVT_F32_UBYTE[0-3]
Ports the existing DAG combines, minus the simplify-demanded-bits part,
which seems to have no equivalent now. Without these, this isn't
particularly helpful in most of the IR sample cases.
2020-04-13 19:18:19 -04:00
Austin Kerbow a69b3e010c [AMDGPU][GlobalISel] Fix div_scale in FDIV lowering
Differential Revision: https://reviews.llvm.org/D78004
2020-04-13 15:54:49 -07:00
Heejin Ahn ba40896f99 [WebAssembly] Fix try placement in fixing unwind mismatches
Summary:
In CFGStackify, `fixUnwindMismatches` function fixes unwind destination
mismatches created by `try` marker placement. For example,
```
try
  ...
  call @qux  ;; This should throw to the caller!
catch
  ...
end
```
When `call @qux` is supposed to throw to the caller, it is possible that
it is wrapped inside a `catch`, so if it throws it ends up unwinding
there incorrectly. (It is also possible that `call @qux` is supposed to
unwind to another `catch` within the same function.)

To fix this, we wrap this inner `call @qux` with a nested
`try`-`catch`-`end` sequence, and within the nested `catch` body, branch
to the right destination:
```
block $l0
  try
    ...
    try                 ;; new nested try
      call @qux
    catch               ;; new nested catch
      local.set n       ;; store exnref to a local
      br $l0
    end
  catch
    ...
  end
end
local.get n             ;; retrieve exnref back
rethrow                 ;; rethrow to the caller
```

The previous algorithm placed the nested `try` right before the `call`.
But it is possible that there are stackified instructions before the
call from which the call takes arguments.
```
try
  ...
  i32.const 5
  call @qux  ;; This should throw to the caller!
catch
  ...
end
```

In this case we have to place `try` before those stackified
instructions.
```
block $l0
  try
    ...
    try                 ;; this should go *before* 'i32.const 5'
      i32.const 5
      call @qux
    catch
      local.set n
      br $l0
    end
  catch
    ...
  end
end
local.get n
rethrow
```

We correctly handle this in the first normal `try` placement phase
(the `placeTryMarker` function), but failed to handle it in
`fixUnwindMismatches`.

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77950
2020-04-13 15:50:01 -07:00
Eli Friedman 89e0662dee Make IRBuilder automatically set alignment on load/store/alloca.
This is equivalent in terms of LLVM IR semantics, but we want to
transition away from using MaybeAlign to represent the alignment of
these instructions.
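In IR terms the two forms below are equivalent, since a load/store without an
explicit `align` is assumed to have the ABI alignment of its type; IRBuilder
now always emits the explicit form (illustrative, assuming i32 has ABI
alignment 4 in the datalayout).

```
store i32 %v, i32* %p            ; alignment implied by the ABI
store i32 %v, i32* %p, align 4   ; what IRBuilder now emits
```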

Differential Revision: https://reviews.llvm.org/D77984
2020-04-13 13:43:14 -07:00
Rahman Lavaee 05192e585c Extend BasicBlock sections to allow specifying clusters of basic blocks in the same section.
Differential Revision: https://reviews.llvm.org/D76954
2020-04-13 12:19:59 -07:00
Rahman Lavaee 4ddf7ab454 Revert "Extend BasicBlock sections to allow specifying clusters of basic blocks"
This reverts commit 0d4ec16d3d because
tests were not added to the commit.
2020-04-13 12:19:59 -07:00
Rahman Lavaee 0d4ec16d3d Extend BasicBlock sections to allow specifying clusters of basic blocks
in the same section.

This allows specifying BasicBlock clusters, as in the following example:
```
!foo
!!0 1 2
!!4
```
This places basic blocks 0, 1, and 2 in one section in this order, and
places basic block #4 in a single section of its own.
2020-04-13 11:46:11 -07:00
Matt Arsenault e6605a209c DAG: Fix wrong legality check for ISD::FMAD
Since 1725f28841, this should check
isFMADLegalForFAddFSub rather than the plain isOperationLegal.

This would assert in a subset of cases due to an oddity in how FMAD is
selected. We will allow FMA formation pre-legalize, but not FMAD even
in cases where it would be valid.

The current hook requires passing in the root fadd/fsub. However, in
this distributed case, it would be far more complicated to pass in
the relevant operand. AMDGPU doesn't get any value from the node, it
only needs the type, and it is the only implementor, so I'm not sure
why we have this complexity. Just rename and expand the assert to
avoid the more complicated checks spread through the distribution
logic.
2020-04-13 10:25:39 -07:00
Jon Roelofs 0b0bb1969f [llvm] Fix yet more missing FileCheck colons 2020-04-13 10:49:19 -06:00
Jonathan Roelofs 17bc995388 [llvm] Fix more missing FileCheck directive colons 2020-04-13 10:16:29 -06:00
Jay Foad bc78baec4c [X86] Improve combineVectorShiftImm
Summary:
Fold (shift (shift X, C2), C1) -> (shift X, (C1 + C2)) for logical as
well as arithmetic shifts. This is needed to prevent regressions from
an upcoming funnel shift expansion change.

While we're here, fold (VSRAI -1, C) -> -1 too.
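The combine operates on the X86-specific shift-by-immediate nodes
(VSHLI/VSRLI/VSRAI); the same algebra at the IR level looks like this
(illustrative, and only valid while C1 + C2 stays below the bit width):

```
%t = lshr <4 x i32> %x, <i32 2, i32 2, i32 2, i32 2>
%r = lshr <4 x i32> %t, <i32 3, i32 3, i32 3, i32 3>
; => %r == lshr <4 x i32> %x, <i32 5, i32 5, i32 5, i32 5>
```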

Reviewers: RKSimon, craig.topper

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77300
2020-04-13 15:54:55 +01:00
Simon Pilgrim 401cbe373b [X86][AVX] Attempt to scale masked shuffles to match the root type
Improve the chances of folding the writemask into the combined shuffle by scaling a wider shuffle mask to match the root's original type.

This creates a few minor issues with variable shuffles, preventing combines of shuffles because of the more limited support for binary shuffle types. In most cases we're probably better off combining the shuffles and losing the writemask fold, but this isn't always going to be true.
2020-04-13 14:57:25 +01:00
Simon Pilgrim ec938c2a83 [X86][AVX] Add some masked variable shuffle tests
Now that D77928 has landed, we need to try harder to match shuffle and mask widths. These are a couple of tests showing where variable shuffle masks have been widened, preventing them from folding with the mask.
2020-04-13 14:32:29 +01:00
Austin Kerbow eab9a4f119 [AMDGPU] Don't assert on partial exec copy
After Machine CSE and coalescing we can end up with copies of exec to
subregister SGPRs.

Differential Revision: https://reviews.llvm.org/D77992
2020-04-12 21:14:36 -07:00
Kang Zhang aa081721d4 [NFC][PowerPC] Add a new test case early-ret-verify.mir 2020-04-13 03:48:35 +00:00
Craig Topper 42fc7852f5 [X86] Print k-mask in FMA3 comments. 2020-04-12 13:16:53 -07:00
Jonathan Roelofs 41f13f1f64 reland: [DAG] Fix PR45049: LegalizeTypes crash
Sometimes LegalizeTypes knows about common subexpressions before SelectionDAG
does, leading to accidental SDValue removal before its reference count is
truly zero.

Differential Revision: https://reviews.llvm.org/D76994

Reviewed-By: bjope

Fixes: https://bugs.llvm.org/show_bug.cgi?id=45049

Reverted in 3ce77142a6 because the previous patch
broke the expensive-checks bots. The new patch removes the broken check.
2020-04-12 09:52:17 -06:00
Sanjay Patel d04db4825a [x86] use vector instructions to lower FP->int->FP casts
As discussed in PR36617:
https://bugs.llvm.org/show_bug.cgi?id=36617#c13
...we can avoid the likely slow round-trip from XMM to GPR to XMM
by using the vector versions of the convert instructions.

Based on experimental results from recent Intel/AMD chips, we don't
need to worry about triggering denorm stalls while operating on
garbage data in the high lanes with convert instructions, so this is
expected to always be as good or better perf than the scalar
instruction equivalent. FP exceptions are also not a concern because
strict code should not be using the regular SDAG opcodes.
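The pattern in question, sketched in IR (illustrative): a float -> int ->
float round-trip that previously bounced through a GPR.

```
define float @round_trip(float %x) {
  %i = fptosi float %x to i32
  %f = sitofp i32 %i to float
  ret float %f
}
; scalar lowering: cvttss2si (XMM -> GPR) then cvtsi2ss (GPR -> XMM);
; vector lowering: cvttps2dq then cvtdq2ps, staying in XMM registers.
```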

Differential Revision: https://reviews.llvm.org/D77895
2020-04-12 10:26:43 -04:00
Craig Topper d3465e0691 [X86] Enable shuffle combining for AVX512 unless the root is used by a vselect
A lot of vectorized code doesn't use masks, so we shouldn't penalize it by not doing shuffle combining on avx512 targets.

I've added support for VALIGNQ/VALIGND and 512-bit SHUF128 to prevent some regressions. I also prevented recombining 256-bit SHUF128 to PERM2X128. We may not need to add 256-bit SHUF128 support, but I don't think I found any cases requiring that in my testing.

Differential Revision: https://reviews.llvm.org/D77928
2020-04-11 20:05:10 -07:00
Matt Arsenault 96819011ca AMDGPU/GlobalISel: Fix RegBankSelect for v2s16 shifts
These need to be promoted and scalarized for the SALU.
2020-04-11 20:55:33 -04:00
Matt Arsenault ac8d51a3c6 AMDGPU/GlobalISel: Legalize 16-bit shift amounts to s16
The current selector depends on 16-bit shifts using 16-bit shift
amount types, but really it should accept either for all types.
2020-04-11 18:12:26 -04:00
Matt Arsenault c5497e5399 AMDGPU/GlobalISel: Fix legalizing <3 x s16> vselects 2020-04-11 15:59:51 -04:00
Hongtao Yu 11455a7905 [CodeGen] Allow partial tail duplication in Machine Block Placement.
Summary: A count profile may affect tail duplication's heuristic, causing a block to be duplicated in only a part of its predecessors. This is not allowed in the Machine Block Placement pass, where an assert will go off. I'm removing the assert and making the optimization bail out when such a case happens.

Reviewers: wenlei, davidxl, Carrot

Reviewed By: Carrot

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77748
2020-04-11 12:20:31 -07:00
Matt Arsenault cf29333f40 AMDGPU/GlobalISel: Work around forming illegal zextload after legalize
Selection would fail after the post-legalize combiner put an illegal
zextload back together.

The base combiner has a parameter to only allow legal operations, but
it appears to be unused. I also don't see a nice way to remove a
single entry from all_combines, so just hack around this.
2020-04-11 10:52:58 -04:00