Commit Graph

57472 Commits

Author SHA1 Message Date
Matt Arsenault 78a43f10c7 AMDGPU: Don't assert on unknown address spaces
Assume unknown address spaces behave like some flavor of global
memory.
2020-05-08 12:57:27 -04:00
Matt Arsenault fda0c8df28 AMDGPU: Lower addrspacecast to 32-bit constant
Somehow this was missing from the DAG path, but not global isel.
2020-05-08 10:46:00 -04:00
Simon Pilgrim 5f9f37c42a [X86][AVX] Don't let X86ISD::BROADCAST peek through bitcasts to illegal types.
This was an existing bug exposed by the more aggressive X86ISD::BROADCAST generation by rG8817334ce3c7

Original test case thanks to @mstorsjo
2020-05-08 12:30:50 +01:00
Nikita Popov 5fa87ec004 [AMDGPU] Try to determine sign bit during div/rem expansion
This is preparation for D79294, which removes an expensive
InstSimplify optimization, on the assumption that it will be
picked up by InstCombine instead. Of course, this does not hold
up if a backend performs non-trivial IR expansions without running
a canonicalization pipeline afterwards, which turned up as an
issue in the context of AMDGPU div/rem expansion.

This patch mitigates the issue by explicitly performing a known
bits calculation where it matters. No test changes, as those would
only be visible after the other patch lands.
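
For intuition, a minimal C++ sketch (not the actual LLVM KnownBits API) of
the sign-bit check being performed during the expansion:

```
#include <cstdint>
#include <optional>

// Sketch: given masks of bits known to be zero/one, the sign of a value
// is determined iff its top bit is known. If it is, the div/rem expansion
// can skip the generic sign-handling code.
std::optional<bool> knownNegative(uint32_t KnownZero, uint32_t KnownOne) {
  const uint32_t SignBit = 1u << 31;
  if (KnownOne & SignBit)
    return true;       // known negative
  if (KnownZero & SignBit)
    return false;      // known non-negative
  return std::nullopt; // unknown: must handle both signs
}
```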

Differential Revision: https://reviews.llvm.org/D79596
2020-05-08 10:11:26 +02:00
Simon Pilgrim b8a725274c [X86][AVX] combineSignExtendInReg - promote mask arithmetic before v4i64 canonicalization
We rely on the combine

(sext_in_reg (v4i64 a/sext (v4i32 x)), v4i1) -> (v4i64 sext (v4i32 sext_in_reg (v4i32 x, ExtraVT)))

to avoid complex v4i64 ashr codegen, but doing so prevents v4i64 comparison mask promotion, so ensure we attempt to promote before canonicalizing the (hopefully now redundant sext_in_reg).

Helps with the poor codegen in PR45808.
2020-05-07 13:16:36 +01:00
Anna Welker 1e413a8c36 [ARM][MVE] Add support for incrementing gathers
Enables the MVEGatherScatterLowering pass to build
pre-incrementing gathers. Incrementing writeback gathers
are built when it is possible to replace the loop increment
instruction.

Differential Revision: https://reviews.llvm.org/D76786
2020-05-07 12:33:50 +01:00
Kazushi (Jam) Marukawa 447efdb52b [VE] Minimum MC layer for VE (2/4)
Remove unnecessary EncoderMethod and DecoderMethod entries which cause errors in
supporting the MC layer.

Differential Revision: https://reviews.llvm.org/D79544
2020-05-07 13:21:37 +02:00
Kazushi (Jam) Marukawa 6999ffcc39 [VE] Implements minimum MC layer for VE (1/4)
Summary:
Correct instruction bitfield addresses to generate machine code correctly.  Also
add some variables to represent all instructions correctly and change default
values to use registers by default.

Differential Revision: https://reviews.llvm.org/D79539
2020-05-07 13:10:36 +02:00
Kerry McLaughlin 3bcd3dd473 [CodeGen][SVE] Lowering of shift operations with scalable types
Summary:
Adds AArch64ISD nodes for:
 - SHL_PRED (logical shift left)
 - SHR_PRED (logical shift right)
 - SRA_PRED (arithmetic shift right)

Existing patterns for unpredicated left shift by immediate
have also been moved into the appropriate multiclasses
in SVEInstrFormats.td.

Reviewers: sdesmalen, efriedma, ctetreau, huihuiz, rengolin

Reviewed By: efriedma

Subscribers: huihuiz, tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79478
2020-05-07 11:43:49 +01:00
Carl Ritson e3ffe7269b [AMDGPU] Cluster shader exports
Summary:
Add DAG scheduling mutation to cluster export instructions.
This avoids unnecessary waitcnts being added when computation
ends up interspersed with exports.

Reviewers: foad, arsenm, rampitec, nhaehnle

Reviewed By: foad

Subscribers: kzhuravl, jvesely, wdng, mgorny, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79481
2020-05-07 19:05:38 +09:00
Craig Topper 350645594e [X86] Enable combinePMULH to match multiplies with elements larger than i32.
We're truncating so the extra bits will be discarded.
2020-05-06 23:13:59 -07:00
Eli Friedman 2c8546107a [AArch64][SVE] Implement lowering for SIGN_EXTEND etc. of SVE predicates.
Now using patterns, since there's a single-instruction lowering. (We
could convert to VSELECT and pattern-match that, but there doesn't seem
to be much point.)

I think this might be the first instruction to use nested multiclasses
this way? It seems like a good way to reduce duplication between
different integer widths. Let me know if it seems like an improvement.

Also, while I'm here, fix the return type of SETCC so we don't try to
merge a sign-extend with a SETCC.

Differential Revision: https://reviews.llvm.org/D79193
2020-05-06 17:56:32 -07:00
Craig Topper 16c800b8b7 [X86] Remove support for Y0 constraint as an alias for Yz in inline assembly.
Neither gcc nor icc supports this. Split out from D79472. I want
to remove more, but it looks like icc does support some things
gcc doesn't and I need to double check our internal test suites.
2020-05-06 14:58:53 -07:00
Craig Topper 9bb9ff0957 [X86] Remove incomplete support for 'Y' as an inline assembly constraint by itself.
Y is the start of several 2 letter constraints, but we also had
partial support to recognize it by itself. But it doesn't look
like it can get through clang as a single letter so the backend
support for this was effectively dead.
2020-05-06 14:23:04 -07:00
Ulrich Weigand 947f78ac27 [SystemZ] Fix/optimize vec_load_len and related intrinsics
When using vec_load/store_len_r with an immediate length operand
of 16 or larger, LLVM will currently emit a VLRL/VSTRL instruction
with that immediate.  This creates a valid encoding (which should be
supported by the assembler), but always traps at runtime.  This patch
fixes this by not creating VLRL/VSTRL in those cases.

This would result in loading the length into a register and
calling VLRLR/VSTRLR instead.  However, these operations with
a length of 15 or larger are in fact simply equivalent to a
full vector load or store.  And in fact the same holds true for
vec_load/store_len as well.

Therefore, add a DAGCombine rule to replace those operations with
plain vector loads or stores if the length is known at compile
time and equal to or larger than 15.
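
A minimal sketch of the equivalence being exploited, assuming a length
operand Len loads the first min(Len + 1, 16) bytes of the 16-byte vector:

```
#include <algorithm>
#include <cstring>

// Sketch: once Len >= 15, min(Len + 1, 16) == 16, so the length-gated
// load degenerates to a full-vector load and can be replaced by one.
void loadWithLength(unsigned char Dst[16], const unsigned char *Src,
                    unsigned Len) {
  std::memcpy(Dst, Src, std::min(Len + 1, 16u));
}
```
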
2020-05-06 21:15:58 +02:00
Michael Liao 4ee5a04187 [amdgpu] Fix check of VCC.
Summary: - Need to include checking on the new 16-bit subregs.

Reviewers: rampitec

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79498
2020-05-06 14:16:37 -04:00
Simon Pilgrim 8817334ce3 [X86] getShuffleScalarElt - add CONCAT_VECTORS/INSERT_VECTOR_ELT support.
This helped fix some i686 vXi64 broadcast folds that were becoming v2Xi32 broadcasts because we didn't match the broadcast until after SimplifyDemandedBits worked out we only used the bottom 32-bits in PMUL(U)DQ and type legalization had split the original i64 load.

A couple of regressions occurred which required some fixups - adding concat_vectors(broadcast_load,broadcast_load) splat support and recognising (unnecessary) unary shuffles of already broadcasted vectors.

This came about as part of the work investigating vector load combining from shuffles for PR42550.
2020-05-06 18:13:33 +01:00
Simon Pilgrim 8c71c2291e [X86] getShuffleScalarElt - consistently use SDValue. NFC.
We never need to call this from anything but ISD::SHUFFLE_VECTOR or target shuffles, so we shouldn't need to address SDNode directly.
2020-05-06 18:13:33 +01:00
Stanislav Mekhanoshin 54d6dfe996 [AMDGPU] Drop 16 bit subreg suffixes on print
We do not want to break asm syntax. These suffixes are
quite useful for debugging, so add an option to print
them. Right now it is NFC.

Differential Revision: https://reviews.llvm.org/D79435
2020-05-06 08:14:10 -07:00
Jay Foad 29067aac46 [AMDGPU] Don't implement GCNHazardRecognizer::PreEmitNoops(SUnit *)
When called from the post-RA scheduler, hazards have already been
handled by getHazardType returning NoopHazard, so PreEmitNoops always
returns zero. Remove it. NFC.

Historical note: PreEmitNoops was added to the hazard recognizer
interface as an optional feature to support dispatch group formation on
the POWER target:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20131202/197470.html
So it seems right that we shouldn't need to implement it.

We do still implement the other overload PreEmitNoops(MachineInstr *)
because that is used by the PostRAHazardRecognizer pass.

Differential Revision: https://reviews.llvm.org/D79476
2020-05-06 16:11:19 +01:00
David Green f5f83cf4df [ARM] VMOVhr load -> vldr
Much like the similar combine added recently for VMOVrh load, this
adds a fold for VMOVhr load turning it into a vldr.f16 as opposed to a
vldrh and vmov.f16.

Differential Revision: https://reviews.llvm.org/D78714
2020-05-06 15:45:56 +01:00
Ram Nalamothu f7060f4f88 For PAL, make sure the Scratch Buffer Descriptor does not clobber the GIT pointer
Since SRSRC has alignment requirements, first find non-GIT-pointer-clobbered
registers for SRSRC, and then, if those registers clobber the preloaded Scratch
Wave Offset register, copy the Scratch Wave Offset register to a free SGPR.
2020-05-06 10:31:15 -04:00
Simon Pilgrim f5f7fd990e [X86][SSE] combineX86ShuffleChain - remove unused shuffle(vzext_load(),undef) combine.
This should always be caught by the various VZEXT_MOVL handling in combineTargetShuffle and SimplifyDemandedVectorEltsForTargetNode.
2020-05-06 15:20:29 +01:00
Matt Arsenault 074c371a48 AMDGPU: Insert kernarg code after allocas
This produces more normal looking IR by keeping all the allocas
clustered at the start of the block.
2020-05-06 10:19:56 -04:00
David Green d05f8a38c5 [ARM] VMOVrh of VMOVhr
A VMOVhr of a VMOVrh can be simply folded to the original HPR value.

Differential Revision: https://reviews.llvm.org/D78710
2020-05-06 15:10:01 +01:00
David Green a349949f8a [ARM] Extract from a VDUP
If we get into the situation where we are extracting from a VDUP, the
extracted value is just the original scalar, so long as the types match or we
can bitcast between the two.

Differential Revision: https://reviews.llvm.org/D78708
2020-05-06 14:51:25 +01:00
David Green ed7db68c35 [ARM] Convert a bitcast VDUP to a VDUP
The idea, under MVE, is to introduce more bitcasts around VDUP's in an
attempt to get the type correct across basic block boundaries. In order
to do that without other regressions we need a few fixups, of which this
is the first. If the code is a bitcast of a VDUP, we can convert that
straight into a VDUP of the new type, so long as they have the same
size.

Differential Revision: https://reviews.llvm.org/D78706
2020-05-06 14:14:21 +01:00
Simon Pilgrim 8650b36935 [X86][SSE] Move VZEXT_MOVL removal into SimplifyDemandedVectorEltsForTargetNode
This patch replaces the VZEXT_MOVL removal from combineShuffle with a more general version based in SimplifyDemandedVectorEltsForTargetNode.

By using computeKnownBits we can always remove the VZEXT_MOVL if the upper elements of the source operand are known to be zero.

This requires us to add the conversion ops to computeKnownBitsForTargetNode as well.

Reviewed By: @craig.topper

Differential Revision: https://reviews.llvm.org/D79335
2020-05-06 14:05:07 +01:00
Simon Pilgrim 1c4f118d89 [X86][SSE] getShuffleScalarElt - minor NFC cleanup.
Use SelectionDAG::MaxRecursionDepth instead of (equal) hard coded constant.

clang-format
2020-05-06 14:05:07 +01:00
Dmitry Preobrazhensky 5998baccb9 [AMDGPU][MC][GFX9+] Enabled 21-bit signed offsets for SMEM instructions
Reviewers: arsenm, rampitec

Differential Revision: https://reviews.llvm.org/D79288
2020-05-06 14:13:10 +03:00
Craig Topper b55009df66 [X86] Add v32i16/v64i8 into the handling for 512-bit inline assembly constraints. 2020-05-05 21:41:31 -07:00
Craig Topper 0fac1c1912 [X86] Allow Yz inline assembly constraint to choose ymm0 or zmm0 when avx/avx512 are enabled and type is 256 or 512 bits
gcc supports selecting ymm0/zmm0 for the Yz constraint when used with 256 or 512 bit vector types.

Fixes PR45806

Differential Revision: https://reviews.llvm.org/D79448
2020-05-05 21:12:30 -07:00
Jessica Paquette b1b86d1c28 [AArch64][GlobalISel] Fold shifts into G_ICMP
Since G_ICMP can be selected to a SUBS, we can fold shifts into such compares.

E.g.

```
cmp	w1, w0, lsl #3
cmp	w1, w0, lsr #3
cmp	w1, w0, asr #3
```

This is done the same way as for adds and subtracts, using
`selectShiftedRegister`.

This gives some minor code size savings on CTMark.

https://reviews.llvm.org/D79365
2020-05-05 18:35:39 -07:00
Craig Topper a4286fc952 [X86] Fix usage of Align constructing MachineMemOperands.
Similar to D77687, but for the X86 specific code.

Differential Revision: https://reviews.llvm.org/D79381
2020-05-05 15:25:02 -07:00
Momchil Velikov fb18dffaeb Revert "[ARM] CMSE code generation"
This reverts commit 7cbbf89d23.

The regression tests fail with the expensive checks.
2020-05-05 19:05:40 +01:00
David Blaikie 025cd300cd Collapse variable into assert to remove non-assert unused variable 2020-05-05 11:04:43 -07:00
Christudasan Devadasan 375cec4b6c [AMDGPU] Introduce more scratch registers in the ABI.
The AMDGPU target has a convention that defines all VGPRs
(except the initial 32 argument registers) as callee-saved.
This convention is not always efficient: when the callee
requires more registers, it ends up emitting a large number of
spills, even though its caller requires only a few.

This patch revises the ABI by introducing more scratch registers
that a callee can freely use.
The 256 vgpr registers now become:
  32 argument registers
  112 scratch registers and
  112 callee saved registers.
The scratch registers and the CSRs are intermixed at regular
intervals (a split boundary of 8) to obtain a better occupancy.

Reviewers: arsenm, t-tye, rampitec, b-sumner, mjbedy, tpr

Reviewed By: arsenm, t-tye

Differential Revision: https://reviews.llvm.org/D76356
2020-05-05 23:02:58 +05:30
Momchil Velikov 7cbbf89d23 [ARM] CMSE code generation
This patch implements the final bits of CMSE code generation:

* emit special linker symbols

* restrict parameter passing to not use memory

* emit BXNS and BLXNS instructions for returns from non-secure entry
  functions, and non-secure function calls, respectively

* emit code to save/restore secure floating-point state around calls
  to non-secure functions

* emit code to save/restore non-secure floating-point state upon
  entry to non-secure entry function, and return to non-secure state

* emit code to clobber registers not used for arguments and returns
  when switching to non-secure state

Patch by Momchil Velikov, Bradley Smith, Javed Absar, David Green,
possibly others.

Differential Revision: https://reviews.llvm.org/D76518
2020-05-05 18:23:28 +01:00
Stanislav Mekhanoshin 9ef166e657 [AMDGPU] Fix FoldImmediate for 16 bit operand
Differential Revision: https://reviews.llvm.org/D79362
2020-05-05 10:19:14 -07:00
Simon Pilgrim 4e3c005554 [TTI] getScalarizationOverhead - use explicit VectorType operand
getScalarizationOverhead is only ever called with vectors (and we already had a load of cast<VectorType> calls immediately inside the functions).

Followup to D78357

Reviewed By: @samparker

Differential Revision: https://reviews.llvm.org/D79341
2020-05-05 16:59:23 +01:00
Pengxuan Zheng 85aff8a4e4 [RISCV] Update debug scratch register names
Summary:
The RISC-V debug register was named dscratch in a previous draft of the RISC-V
debug mode spec. The number of registers has been increased to 2 in the latest
ratified version of the debug mode spec and the registers were named dscratch0
and dscratch1. We still support using the old register name "dscratch", but it
would be disassembled as "dscratch0" with this change.

Reviewers: apazos, asb, lenary, luismarques

Reviewed By: asb

Subscribers: hiraditya, rbar, johnrusso, simoncook, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, sameer.abuasal, evandro, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78764
2020-05-05 08:46:07 -07:00
Jay Foad 3d76824b7f [AMDGPU] Better support for VMEM soft clauses in GCNHazardRecognizer
VMEM soft clauses only contain VMEM and FLAT instructions. Teaching
GCNHazardRecognizer::checkSoftClauseHazards that other kinds of
instructions will naturally break the clause means there are far fewer
cases where it has to insert an s_nop instruction to forcibly break the
clause.

Differential Revision: https://reviews.llvm.org/D79353
2020-05-05 15:49:09 +01:00
Sebastian Neubauer 1de4e56933 [AMDGPU] Don't mark the .note section as ALLOC
Marking a section as ALLOC tells the ELF loader to load the section into memory.
As we do not want to load the notes into VRAM, the flag should not be there.

On AMDHSA, .note is still marked as ALLOC; apparently this is currently
needed for OpenCL (see https://reviews.llvm.org/D74995).

Differential Revision: https://reviews.llvm.org/D76278
2020-05-05 14:21:45 +02:00
Sander de Smalen 8cb5663abd [AArch64][SVE] Guard bitcast patterns under IsLE predicate
Reviewed By: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79352
2020-05-05 13:18:35 +01:00
David Green f85acb1915 [ARM] Correct the type on a predicate cast
A PREDICATE_CAST(PREDICATE_CAST(X)) can be converted to a
PREDICATE_CAST(X) as the operation can convert between any forms of
predicates (v4i1/v8i1/v16i1/i32). Unfortunately I got the type wrong on
one of the rarer converts, which would lead to invalid nodes during
isel. This fixes it up to use the correct type.

Differential Revision: https://reviews.llvm.org/D79402
2020-05-05 13:15:10 +01:00
Simon Pilgrim e53d4869a0 [X86][AVX] combineVectorSignBitsTruncation - avoid complex vXi64->vXi32 PACKSS truncations (PR45794)
Unless we're truncating an 'all-bits' result, using PACKSS for vXi64->vXi32 truncation causes problems with later combines as ComputeNumSignBits struggles to see through BITCASTs to smaller types. If we don't use PACKSS in these cases then we fallback to shuffles which are usually just as good.
2020-05-05 11:57:25 +01:00
Sam Parker 40574fefe9 [NFC][CostModel] Add TargetCostKind to relevant APIs
Make the kind of cost explicit throughout the cost model which,
apart from making the cost clear, will allow the generic parts to
calculate better costs. It will also allow some backends to
approximate and correlate the different costs if they wish. Another
benefit is that it will also help simplify the cost model around
immediate and intrinsic costs, where we currently have multiple APIs.

RFC thread:
http://lists.llvm.org/pipermail/llvm-dev/2020-April/141263.html

Differential Revision: https://reviews.llvm.org/D79002
2020-05-05 10:35:54 +01:00
Heejin Ahn 834debfffd [WebAssembly] Fix block marker placing after fixUnwindMismatches
Summary:
This fixes a few things that are connected. It is very hard to provide
an independent test case for each of those fixes, because they are
interconnected and sometimes one masks another. The provided test case
triggers some of those bugs below but not all.

---

1. Background:
`placeBlockMarker` takes a BB, and if the BB is a destination of some
branch, it places `end_block` marker there, and computes the nearest
common dominator of all predecessors (what we call 'header') and places
a `block` marker there.

When we first place markers, we traverse BBs from top to bottom. For
example, when there are 5 BBs A, B, C, D, and E and B, D, and E are
branch destinations, if we mark the BB given to `placeBlockMarker` with `*`
and draw a rectangle representing the border of `block` and `end_block`
markers, the process is going to look like
```
                       -------
           -----       |-----|
 ---       |---|       ||---||
 |A|       ||A||       |||A|||
 ---  -->  |---|  -->  ||---||
 *B        | B |       || B ||
  C        | C |       || C ||
  D        -----       |-----|
  E         *D         |  D  |
             E         -------
                         *E
```
which means when we first place markers, we go from inner to outer
scopes. So when we place a `block` marker, if the header already
contains other `block` or `try` marker, it has to belong to an inner
scope, so the existing `block`/`try` markers should go _after_ the new
marker. This was the assumption we had.

But after placing all markers we run `fixUnwindMismatches` function.
There we do some control flow transformation and create some branches,
and we call `placeBlockMarker` again to place `block`/`end_block`
markers for those newly created branches. We can't assume that we are
traversing branch destination BBs from top to bottom now because we are
basically inserting some new markers in the middle of existing markers.

Fix:
In `placeBlockMarker`, we no longer assume that the BBs are given in
top-to-bottom order, and when placing `block` markers, we calculate
whether existing `block` or `try` markers belong to inner or
outer scopes with respect to the current scope.

---

2. Background:
In `fixUnwindMismatches`, when there is a call whose correct unwind
destination mismatches the current destination after initially placing
`try` markers, we wrap that with a new nested `try`/`catch`/`end` and
jump to the correct handler within the new `catch`. The correct handler
code is split as a separate BB from its original EH pad so it can be
branched to. Here's an example:

- Before
```
mbb:
  call @foo       <- Unwind destination mismatch!
wrong-ehpad:
  catch
  ...
cont:
  end_try
  ...
correct-ehpad:
  catch
  [handler code]
```

- After
```
mbb:
  try                (new)
  call @foo
nested-ehpad:        (new)
  catch              (new)
  local.set n / drop (new)
  br %handleri       (new)
nested-end:          (new)
  end_try            (new)
wrong-ehpad:
  catch
  ...
cont:
  end_try
  ...
correct-ehpad:
  catch
  local.set n / drop (new)
handler:             (new)
  end_try
  [handler code]
```

Note that after this transformation, it is possible there are no calls
to actually unwind to `correct-ehpad` here. `call @foo` now
branches to `handler`, and there can be no other calls to unwind to
`correct-ehpad`. In this case `correct-ehpad` does not have any
predecessors anymore.

This can cause a bug in `placeBlockMarker`, because we may need to place
`end_block` marker in `handler`, and `placeBlockMarker` computes the
nearest common dominator of all predecessors. If one of `handler`'s
predecessors (here `correct-ehpad`) does not have any predecessors, i.e.,
no way of reaching it, we cannot correctly compute the common dominator
of predecessors of `handler`, and end up placing no `block`/`end`
markers. This bug actually sometimes masks the bug 1.

Fix:
When we have an EH pad that does not have any predecessors after this
transformation, delete all its successors, so that its successors don't
have any dangling predecessors.

---

3. Background:
Actually the `handler` BB in the example shown in bug 2 doesn't need
`end_block` marker, despite it being a new branch destination, because
it already has `end_try` marker which can serve the same purpose. I just
put that example there for an illustration purpose. There is a case we
actually need to place an `end_block` marker: when the branch dest is the
appendix BB. The appendix BB is created when a call that is
supposed to unwind to the caller ends up unwinding to a wrong EH pad. In
this case we also wrap the call with a nested `try`/`catch`/`end`,
create an 'appendix' BB at the very end of the function, and branch to
that BB, where we rethrow the exception to the caller.

Fix:
When we don't actually need to place block markers, we don't.

---

4. In case we fall through to the continuation BB after the catch block,
after extracting handler code in `fixUnwindMismatches` (refer to bug 2
for an example), we now have to add a branch to it to bypass the
handler.
- Before
```
try
  ...
  (falls through to 'cont')
catch
  handler body
end
              <-- cont
```

- After
```
try
  ...
  br %cont    (new)
catch
end
handler body
              <-- cont
```

The problem is, we haven't been placing a new `end_block` marker in the
`cont` BB in this case. We should, and this fixes it. But it is hard to
provide a test case that triggers this bug, because the current
compilation pipeline from .ll to .s does not generate this kind of code;
we always have a `br` after `invoke`. But code without `br` is still
valid, and we can have that kind of code if we have some pipeline
changes or optimizations later. Even mir test cases cannot trigger this
part for now, because we don't encode auxiliary EH-related data
structures (such as `WasmEHFuncInfo`) in mir now. Those functionalities
can be added later, but I don't think we should block this fix on that.

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79324
2020-05-05 02:06:47 -07:00
Pierre-vh d5eb7ffa33 [Target][ARM] Fold or(A, B) more aggressively for I1 vectors
This patch makes the folding of or(A, B) into not(and(not(A), not(B)))
more aggressive for I1 vectors. This only affects Thumb2 MVE and improves
codegen, because it removes a lot of msr/mrs instructions on VPR.P0.
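
The underlying identity is just De Morgan's law; a scalar C++ sketch of
what the fold does per i1 lane:

```
#include <cassert>

// De Morgan, per i1 lane: or(A, B) == not(and(not(A), not(B))).
// On MVE the right-hand side maps onto predicate (VPR.P0) operations
// without the extra msr/mrs traffic mentioned above.
bool orViaAndNot(bool A, bool B) { return !(!A && !B); }

int main() {
  for (int A = 0; A <= 1; ++A)
    for (int B = 0; B <= 1; ++B)
      assert(orViaAndNot(A, B) == (A || B));
}
```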

This patch also adds a xor(vcmp) -> !vcmp fold for MVE.

Differential Revision: https://reviews.llvm.org/D77202
2020-05-05 10:03:02 +01:00
Pierre-vh ffdda495f7 [Target][ARM] Add PerformVSELECTCombine for MVE Integer Ops
This patch adds an implementation of PerformVSELECTCombine in the
ARM DAG Combiner that transforms vselect(not(cond), lhs, rhs) into
vselect(cond, rhs, lhs).

Normally, this should be done by the target-independent DAG Combiner,
but it doesn't handle the kind of constants that we generate, so we
have to reimplement it here.

Differential Revision: https://reviews.llvm.org/D77712
2020-05-05 10:03:02 +01:00
Eli Friedman 1eb160fe8d [ARM] Fix tail call validity checking for varargs calls.
If a varargs function is calling a non-varargs function, or vice versa,
make sure we use the correct "varargs" bit for each.

Fixes https://bugs.llvm.org/show_bug.cgi?id=45234

Differential Revision: https://reviews.llvm.org/D79199
2020-05-04 12:34:14 -07:00
David Green de904f5325 [ARM] isHardwareLoopProfitable debug messages. NFC 2020-05-04 19:20:34 +01:00
Stanislav Mekhanoshin c85eda74b8 [AMDGPU] fix copies between 32 and 16 bit
This is a hack to fix illegal 32-to-16-bit copies.
The problem is that when we make 16-bit subregs legal it creates
a huge number of failures, which could only be resolved all at once
without a temporary hack like this.

The next step is to change operands, instruction definitions
and patterns until this hack is not needed.

Differential Revision: https://reviews.llvm.org/D79119
2020-05-04 08:54:22 -07:00
Simon Pilgrim 4b9d75c1ac [X86][SSE] Move some VZEXT_MOVL combines into combineTargetShuffle. NFC.
Minor cleanup of combineShuffle by moving some of the low hanging fruit (load + scalar_to_vector folds).
2020-05-04 15:13:44 +01:00
alex-t 5b898bddff [AMDGPU] Enable carry out ADD/SUB operations divergence driven instruction selection.
Summary: This change enables all kinds of carry-out ISD opcodes to be selected according to the node divergence.

Reviewers: rampitec, arsenm, vpykhtin

Reviewed By: rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78091
2020-05-04 16:42:25 +03:00
Raul Tambre 0863e94ebd [AArch64] Add NVIDIA Carmel support
Summary:
NVIDIA's Carmel ARM64 cores are used in Tegra194 chips found in Jetson AGX Xavier, DRIVE AGX Xavier and DRIVE AGX Pegasus.

References:
* https://devblogs.nvidia.com/nvidia-jetson-agx-xavier-32-teraops-ai-robotics/#h.huq9xtg75a5e
* NVIDIA Xavier Series System-on-Chip Technical Reference Manual 1.3 (https://developer.nvidia.com/embedded/downloads#?search=Xavier%20Series%20SoC%20Technical%20Reference%20Manual)

Reviewers: sdesmalen, paquette

Reviewed By: sdesmalen

Subscribers: llvm-commits, ianshmean, kristof.beyls, hiraditya, jfb, danielkiss, cfe-commits, t.p.northover

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D77940
2020-05-04 13:52:30 +01:00
Kerry McLaughlin 19f5da9c1d [SVE][Codegen] Lower legal min & max operations
Summary:
This patch adds AArch64ISD nodes for [S|U]MIN_PRED
and [S|U]MAX_PRED, and lowers both SVE intrinsics and
IR operations for min and max to these nodes.

There are two forms of these instructions for SVE: a predicated
form and an immediate (unpredicated) form. The patterns
which existed for the latter have been updated to match a
predicated node with an immediate and map this
to the immediate instruction.

Reviewers: sdesmalen, efriedma, dancgr, rengolin

Reviewed By: efriedma

Subscribers: huihuiz, tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79087
2020-05-04 11:19:19 +01:00
Simon Moll 1e89f36c98 [VE][NFC] formatting VEISD enum 2020-05-04 09:50:27 +02:00
Craig Topper 243ffc0e65 [X86] Simplify some code in combineTruncatedArithmetic. NFC
We haven't promoted AND/OR/XOR to vXi64 types for a while. So
there's no reason to use isOperationLegalOrPromote. So we can
just use isOperationLegal by merging with ADD handling.
2020-05-03 23:53:10 -07:00
Craig Topper 8b53fdd3b6 [X86] Custom legalize v16i64->v16i8 truncate with avx512.
Default legalization will create two v8i64 truncs to v8i32, concat
them to v16i32, and then truncate the rest of the way to v16i8.

Instead we can truncate directly from v8i64 to v8i8 in the lower
half of an xmm. Then concat the two halves to use vpunpcklqdq.
This is the same number of uops, but the dependency chain through
the uops is better since the halves are merged at the end.

I had to add SimplifyDemandedBits support for VTRUNC to prevent
a regression on vector-trunc-math.ll. combineTruncatedArithmetic
no longer gets a chance to shrink vXi64 mul so we were producing
the v8i64 multiply sequence using multiple PMULUDQs. With the
demanded bits fix we are able to prune out the extra ops leaving
just two PMULUDQs, one for each v8i64 half. This is twice the
width of the 2 v8i32 PMULLDs we had before, but PMULUDQ is 1
uop and PMULLD is 2. We also save some truncates. It's probably
worth using PMULUDQ even when PMULLQ is available since the latter
is 3 uops, but that will require a different change.

Differential Revision: https://reviews.llvm.org/D79231
2020-05-03 23:26:04 -07:00
Simon Pilgrim 7c203163c7 [X86] Use splitVector helper in truncateVectorWithPACK/splitVectorStore/combineHorizontalMinMaxResult/combineReductionToHorizontal. NFC.
All these locations were performing the same type splitting/extractSubVector calls as the splitVector helper.
2020-05-03 13:40:38 +01:00
Simon Pilgrim e8d9794a23 [X86] Don't limit splitVector helper to simple types.
It can handle EVT just as well (and so can the extractSubVector calls).
2020-05-03 12:27:37 +01:00
Simon Pilgrim 74e9952c8e [X86][SSE] splitAndLowerShuffle - use splitVector helper. NFC.
The splitVector helper uses extractSubVector which splits build vectors like we do here, so avoid reimplementing it.

splitVector could easily be extended to peek through bitcasts as well but I'd prefer to keep this commit NFC.
2020-05-03 11:26:51 +01:00
Simon Pilgrim 4d2b0ebd17 [X86] detectAVGPattern - use matchUnaryPredicate helper. NFC.
Use the ISD::matchUnaryPredicate helper to check for inrange constants.
2020-05-03 11:26:51 +01:00
Sam Elliott fe4245a4c1 [RISCV] Implement convertSelectOfConstantsToMath
Summary:
The current lowering of `select` on RISC-V uses a branch instruction to load a
register with one or other value. This is inefficient, especially in the case of
small constants that can be computed easily.

By implementing the TargetLowering::convertSelectOfConstantsToMath hook, some of
the simpler cases are covered that let us avoid introducing a branch in these
cases.
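
For intuition, a branchless C++ sketch of the general rewrite this hook
enables (the hook itself only fires for the simple constant cases):

```
#include <cassert>
#include <cstdint>

// select(Cond, C1, C2) as pure arithmetic: C2 + Cond * (C1 - C2).
// For adjacent constants this degenerates to an add of the i1 condition.
int64_t selectOfConstants(bool Cond, int64_t C1, int64_t C2) {
  return C2 + static_cast<int64_t>(Cond) * (C1 - C2);
}

int main() {
  assert(selectOfConstants(true, 5, 3) == 5);
  assert(selectOfConstants(false, 5, 3) == 3);
  assert(selectOfConstants(true, 1, 0) == 1); // zext(Cond) case
}
```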

Reviewed By: luismarques

Differential Revision: https://reviews.llvm.org/D79260
2020-05-02 15:05:57 +01:00
Sam Elliott a4a9a1f671 [RISCV] Add patterns for checking isnan
Summary:
This patch addresses some weird assembly sequences we were seeing during
comparing floats. In particular, comparing a float to itself tells you whether
it is NaN or not, which we were doing correctly, but with an extra unneeded
`and` instruction.

This patch specialises the existing patterns to remove the `and` instructions
when both their operands are the same.
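
The source-level pattern in question, as a C++ analogue for illustration:

```
#include <cassert>
#include <limits>

// NaN is the only value that compares unequal to itself, so a single
// self-comparison answers "is X NaN?" -- no extra `and` is needed.
bool isNan(float X) { return X != X; }

int main() {
  assert(isNan(std::numeric_limits<float>::quiet_NaN()));
  assert(!isNan(1.0f));
}
```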

Reviewed By: luismarques, asb

Differential Revision: https://reviews.llvm.org/D78908
2020-05-02 15:01:04 +01:00
Sam McCall d10c995b4d std::isspace -> llvm::isSpace (where locale should be ignored)
I've left out some cases where I wasn't totally sure this was right or
whether the include was ok (compiler-rt) or idiomatic (flang).
2020-05-02 15:36:04 +02:00
Thomas Lively e0f52842c8 [WebAssembly] Renumber SIMD opcodes
Summary:
As described in https://github.com/WebAssembly/simd/pull/209. This is
the final reorganization of the SIMD opcode space before
standardization. It has been landed in concert with corresponding
changes in other projects in the WebAssembly SIMD ecosystem.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79224
2020-05-01 17:20:49 -07:00
Nemanja Ivanovic 8ca2fc9993 [PowerPC] Refactor PPCInstrVSX.td
Over time, we have made many additions to this file and it has frankly become a
bit of a mess. This has led to at least one issue - we have a number of
instructions where the side effects flag should be set to false and we neglected
to do this. This patch suggests a refactoring that should make the file much
more maintainable. The file is split up into major sections and the nesting
level is reduced, predicate blocks merged, etc.

Sections:
  - Custom PPCISD node definitions
  - Predicate definitions
  - Instruction formats
  - Instruction definitions
  - Helper DAG definitions
  - Anonymous patterns
  - Instruction aliases

Differential revision: https://reviews.llvm.org/D78132
2020-05-01 19:17:39 -05:00
Craig Topper b938168aef [X86] Lower the cost of v4i64->v4i32 truncate with avx512.
We use the vpmovqd instruction which is a single uop. So
the cost should be 1.
2020-05-01 11:09:37 -07:00
Simon Pilgrim 8cbd8194c1 [X86] Improving folding of concat_vectors of subvectors from the same broadcast
Handle concat_vectors(extract_subvector(broadcast(x)), extract_subvector(broadcast(x))) -> broadcast(x)

To expose this we also need collectConcatOps to recognise the insert_subvector(x, extract_subvector(x, lo), hi) subvector splat pattern
2020-05-01 11:23:10 +01:00
Jay Foad 5f7ea85e78 [AMDGPU] Remove unnecessary s_waitcnt between VMEM loads
VMEM loads of the same type (sampler vs no sampler) are guaranteed to
write their result registers in order, so there is no need for an
s_waitcnt even if they write to overlapping vgprs.

Differential Revision: https://reviews.llvm.org/D79176
2020-05-01 10:10:23 +01:00
Craig Topper ed7479b635 [X86] Update type actions for ISD::TRUNCATE with avx512f to be Legal when possible. NFCI
The Custom handler wasn't doing anything for these cases anyway.
2020-04-30 23:27:29 -07:00
Suyog Sarda ea093f6481 Handle cases for subregisters.
While restoring latency, check whether any register of the
source instruction is a subregister of the successor instructions,
in addition to being the same register.
2020-04-30 20:32:33 -05:00
Hubert Tong a3515ab8af [MC][Target][XCOFF] Consolidate MCAsmInfo XCOFF defaults; NFC
The setting of `MCAsmInfo` properties for XCOFF got split between
`MCAsmInfoXCOFF` and `PPCXCOFFMCAsmInfo`. Except for the properties that
are dependent on the target information being passed via the
constructor, the properties being set in `PPCXCOFFMCAsmInfo` had no
fundamental reason for being treated as specific for XCOFF on PowerPC.
Indeed, the property that might be considered more specific to PowerPC,
`NeedsFunctionDescriptors`, was set in `MCAsmInfoXCOFF`.

XCOFF being specific to PowerPC anyway, this patch consolidates the
setting of the properties into `MCAsmInfoXCOFF` except for the cases
that are dependent on the information provided via the
`PPCXCOFFMCAsmInfo` constructor.

This patch also reorders the assignments to the fields to match the
declaration order in `MCAsmInfo`.
2020-04-30 20:48:30 -04:00
Craig Topper c5f7c039ef [X86] Add x, t and g modifiers for inline asm
This patch adds the x, t and g modifiers for inline asm from GCC. These will print a vector register as xmm*, ymm* or zmm* respectively.

I also fixed register names with modifiers when using inteldialect so they are no longer printed with a leading %.
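
A hypothetical usage sketch (assumes an x86-64 target compiled with -mavx
and GCC-style extended asm; not taken from the patch itself):

```
// %x0 / %t0 / %g0 print operand 0 as xmm*, ymm* or zmm* respectively
// (zmm requires AVX-512, so only x and t are exercised here).
typedef float V8 __attribute__((vector_size(32)));

V8 demo(V8 A) {
  asm("vmovaps %t0, %t0\n\t" // full 256-bit move, printed as ymm*
      "vmovaps %x0, %x0"     // 128-bit move, printed as xmm*
      : "+x"(A));
  return A;
}
```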

Patch by Amanieu d'Antras

Differential Revision: https://reviews.llvm.org/D78977
2020-04-30 17:45:45 -07:00
Craig Topper 6a1ad76dab [X86] Don't return true from isTruncateFree for vectors
Also fix some cost tables for vXi1 types to match the costs entries for the types they will be promoted to.

Differential Revision: https://reviews.llvm.org/D79045
2020-04-30 16:43:35 -07:00
Craig Topper ff66919020 [X86][CostModel] Bump the cost of vpermw/vpermt2b/vpermt2w
vpermw is 2 uops. vpermt2b/vpermt2w are two shuffle uops and a port 015 uop. Weirdly vpermb is a single uop.

This patch bumps the cost to 2 for these operations. Maybe should go to 3 for the vpermt2*, but I've started conservative.

I've also removed a few entries that were now the same as earlier subtargets or that I didn't think we really did. Like I don't think we extend v32i8 to v32i16, shuffle, and then truncate.

Differential Revision: https://reviews.llvm.org/D79148
2020-04-30 11:32:25 -07:00
Simon Pilgrim bf468f4349 [X86][SSE] Canonicalize UNARYSHUFFLE(XOR(X,-1)) -> XOR(UNARYSHUFFLE(X),-1)
This pushes the NOT pattern up the DAG to help expose it for further combines (AND->ANDN in particular).

The PSHUFD/MOVDDUP 'splat' cases are the only ones I've seen in the wild so far, we can further generalize if/when we need to.
2020-04-30 19:18:51 +01:00
Arthur Eubanks a90948fd6e [NFC] Rename *ByValOrInalloca* to *PassPointeeByValue*
Summary: In preparation for preallocated.

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79152
2020-04-30 09:42:13 -07:00
Simon Pilgrim 30211c4783 [X86] combineANDXORWithAllOnesIntoANDNP - add BROADCAST handling
Fold BROADCAST(NOT(Y)) -> BROADCAST(Y) as part of finding a NOT inversion.
2020-04-30 16:24:17 +01:00
Jay Foad 1bf7ccb706 [AMDGPU] Use int and unsigned instead of other 32-bit integer types. NFC. 2020-04-30 15:21:36 +01:00
diggerlin a2c8cd1812 [AIX] emit .extern and .weak directive linkage
SUMMARY:

emit .extern and .weak directive linkage

Reviewers: hubert.reinterpretcast, Jason Liu
Subscribers: wuzish, nemanjai, hiraditya

Differential Revision: https://reviews.llvm.org/D76932
2020-04-30 09:54:10 -04:00
Simon Pilgrim 96238486ed [DAGCombine] Move the remaining X86 funnel shift patterns to DAGCombine
X86 matches several 'shift+xor' funnel shift patterns:

  fold (or (srl (srl x1, 1), (xor y, 31)), (shl x0, y))  -> (fshl x0, x1, y)
  fold (or (shl (shl x0, 1), (xor y, 31)), (srl x1, y))  -> (fshr x0, x1, y)
  fold (or (shl (add x0, x0), (xor y, 31)), (srl x1, y)) -> (fshr x0, x1, y)

These patterns are also what we end up with the proposed expansion changes in D77301.

This patch moves these to DAGCombine's generic MatchFunnelPosNeg.

All existing X86 test cases still pass, and we just have a small codegen change in pr32282.ll.
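
For reference, a C++ source shape that typically produces the first of
these DAG patterns; the srl-by-1 plus xor-with-31 trick keeps every shift
amount in [0, 31] even when y is 0:

```
#include <cstdint>

// fshl(X0, X1, Y) for 32-bit values: (X1 >> 1) >> (Y ^ 31) equals
// X1 >> (32 - Y) without the undefined Y == 0 case, so the whole
// expression is a funnel shift left.
uint32_t fshl32(uint32_t X0, uint32_t X1, uint32_t Y) {
  Y &= 31;
  return (X0 << Y) | ((X1 >> 1) >> (Y ^ 31));
}
```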

Reviewed By: @spatel

Differential Revision: https://reviews.llvm.org/D78935
2020-04-30 12:57:17 +01:00
Jay Foad 462b960de8 Fix silly mistake in 31c09d03a1 [AMDGPU] Remove WaitcntBrackets::MixedPendingEvents[]. NFC. 2020-04-30 11:41:14 +01:00
Sam Elliott 09f6b9792b [RISCV][NFC] Remove Duplicated F Extension Patterns 2020-04-30 11:35:49 +01:00
Cullen Rhodes 672b62ea21 [AArch64][SVE] Custom lowering of floating-point reductions
Summary:
This patch implements custom floating-point reduction ISD nodes that
have vector results, which are used to lower the following intrinsics:

    * llvm.aarch64.sve.fadda
    * llvm.aarch64.sve.faddv
    * llvm.aarch64.sve.fmaxv
    * llvm.aarch64.sve.fmaxnmv
    * llvm.aarch64.sve.fminv
    * llvm.aarch64.sve.fminnmv

SVE reduction instructions keep their result within a vector register,
with all other bits set to zero.

Changes in this patch were implemented by Paul Walker and Sander de
Smalen.

Reviewers: sdesmalen, efriedma, rengolin

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D78723
2020-04-30 10:18:40 +00:00
David Sherwood 058cd8c5be [CodeGen] Add support for inserting elements into scalable vectors
Summary:
This patch tries to ensure that we do something sensible when
generating code for the ISD::INSERT_VECTOR_ELT DAG node when operating
on scalable vectors. Previously we always returned 'undef' when
inserting an element into an out-of-bounds lane index, whereas now
we only do this for fixed length vectors. For scalable vectors it
is assumed that the backend will do the right thing in the same way
that we have to deal with variable lane indices.

In this patch I have permitted a few basic combinations for scalable
vector types where it makes sense, but in general avoided most cases
for now as they currently require the use of BUILD_VECTOR nodes.

This patch includes tests for all scalable vector types when inserting
into lane 0, but I've only included one or two vector types for other
cases such as variable lane inserts.

Differential Revision: https://reviews.llvm.org/D78992
2020-04-30 11:14:04 +01:00
Jay Foad 86545bf72d [AMDGPU] Simplify loops in SIInsertWaitcnts::generateWaitcntInstBefore
The loops over use operands and def operands were mostly identical.
Combine them, and likewise for load memoperands and store memoperands.
NFC.
2020-04-30 08:53:12 +01:00
Jay Foad 9f59d1931c [AMDGPU] Remove Def argument from WaitcntBrackets::getRegInterval. NFC.
It's cleaner to check this in the callers instead.
2020-04-30 08:53:12 +01:00
Fangrui Song 52eb2f65a7 [MC] Move MCInstrAnalysis::evaluateBranch to X86MCInstrAnalysis::evaluateBranch
The generic implementation is actually specific to x86. It assumes the
offset is relative to the end of the instruction and the immediate is
not scaled (which is false on most RISC).
2020-04-29 23:23:52 -07:00
Craig Topper 9d4bcc3a60 [X86] Merge the last of the useBWIRegs() section into the useAVX512Regs() section of the X86TargetLowering constructor. NFC
This section is the remnant of how this code was structured before
we made v32i16/v64i8 legal types with avx512f when not restricting
to 256 bit vectors. Now that there are just a few items left,
merge them near similar things in the other section.
2020-04-29 14:40:04 -07:00
Craig Topper cff6686532 [X86] Lower the cost of v4i64->v4i32 and v8i64->v8i32 truncate with AVX
We generate much better code these days than we used to. And we use the same sequence for AVX1 and AVX2 for these:

For v4i64->v4i32 we generate:
vextractf128    xmm1, ymm0, 1
vshufps xmm0, xmm0, xmm1, 136   # xmm0 = xmm0[0,2],xmm1[0,2]

And for v8i64->v8i32 we generate:
vperm2f128      ymm2, ymm0, ymm1, 49 # ymm2 = ymm0[2,3],ymm1[2,3]
vinsertf128     ymm0, ymm0, xmm1, 1
vshufps ymm0, ymm0, ymm2, 136   # ymm0 = ymm0[0,2],ymm2[0,2],ymm0[4,6],ymm2[4,6]

Differential Revision: https://reviews.llvm.org/D79109
2020-04-29 13:21:44 -07:00
Jay Foad 31c09d03a1 [AMDGPU] Remove WaitcntBrackets::MixedPendingEvents[]. NFC.
It's trivial to derive this information from other state.
2020-04-29 19:58:19 +01:00
Jay Foad 120572072e [AMDGPU] Initialize gpr upper bounds to -1. NFC.
These upper bounds are inclusive, so -1 (rather than 0) is the natural
way to express an empty range.
2020-04-29 19:58:06 +01:00
Jay Foad 777f91f47e [AMDGPU] Simplify MergeInfo calculations. NFC.
This makes the definition and uses of NewUB more symmetrical, and makes
it clear that ScoreLBs[T] does not change.
2020-04-29 19:58:06 +01:00
Ulrich Weigand e1de2773a5 [SystemZ] Allow specifying plain register numbers in AsmParser
For compatibility with other assemblers on the platform, allow
using just plain integer register numbers in all places where a
register operand is expected.

Bug: llvm.org/PR45582
2020-04-29 20:42:30 +02:00
Ulrich Weigand 6bfde063f0 [SystemZ] Simplify register parsing in AsmParser
Remove redundant Group and Regs arguments from parseRegister
and eliminate one of its overloaded versions.

Remove redundant Regs argument from parseAddress.

NFC intended.
2020-04-29 20:42:30 +02:00
Simon Pilgrim f0903de1aa [x86] Enable bypassing 64-bit division on generic x86-64
This is currently enabled for Intel big cores from Sandy Bridge onward, as well as Atom, Silvermont, and KNL, due to 64-bit division being so slow on these cores. AMD cores can do this in hardware (use 32-bit division based on input operand width), so it's not a win there. But since the majority of x86 CPUs benefit from this optimization, and since the potential upside is significantly greater than the downside, we should enable this for the generic x86-64 target.
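
A source-level sketch of the bypass (the backend emits the equivalent
check and branch automatically; B is assumed non-zero as usual):

```
#include <cstdint>

// If both operands fit in 32 bits, take the much cheaper 32-bit divide;
// otherwise fall back to the full 64-bit division.
uint64_t bypassedDiv(uint64_t A, uint64_t B) {
  if (((A | B) >> 32) == 0)
    return uint32_t(A) / uint32_t(B); // fast path
  return A / B;                       // slow 64-bit path
}
```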

Patch By: @atdt

Reviewed By: @craig.topper, @RKSimon

Differential Revision: https://reviews.llvm.org/D75567
2020-04-29 16:55:48 +01:00
Victor Campos d3dc4c32af [AArch64] Remove inexistent system register ERXTS_EL1
Summary:
AArch64's system register ERXTS_EL1 is present in the backend as a
component of the Arm Reliability, Availability and Serviceability (RAS)
extension. However, it has been removed from the specification before
its final release.

This patch removes the register.

Reviewers: SjoerdMeijer, DavidSpickett

Reviewed By: DavidSpickett

Subscribers: DavidSpickett, kristof.beyls, hiraditya, danielkiss, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79007
2020-04-29 16:43:48 +01:00
Jay Foad 4649da119a [AMDGPU] Use a MapVector instead of a DenseMap and a std::vector. NFC. 2020-04-29 16:02:24 +01:00
Jay Foad 2a10957f62 [AMDGPU] Minor cleanups. NFC. 2020-04-29 16:02:24 +01:00
Simon Pilgrim 090cae8491 [TTI] Add DemandedElts to getScalarizationOverhead
The improvements to the x86 vector insert/extract element costs in D74976 resulted in the estimated costs for vector initialization and scalarization increasing more than expected. This is particularly noticeable on pre-SSE4 targets where the availability of legal INSERT_VECTOR_ELT ops is more limited.

This patch does 2 things:
1 - it implements X86TTIImpl::getScalarizationOverhead to more accurately represent the typical costs of an ISD::BUILD_VECTOR pattern.
2 - it adds a DemandedElts mask to getScalarizationOverhead to permit the SLP's BoUpSLP::getGatherCost to be rewritten to use it directly instead of accumulating raw vector insertion costs.

This fixes PR45418 where a v4i8 (zext'd to v4i32) was no longer vectorizing.

A future patch should extend X86TTIImpl::getScalarizationOverhead to tweak the EXTRACT_VECTOR_ELT scalarization costs as well.

Reviewed By: @craig.topper

Differential Revision: https://reviews.llvm.org/D78216
2020-04-29 12:00:38 +01:00
Jay Foad 3c1f21cdf6 [AMDGPU] Remove some redundant variables. NFC. 2020-04-29 09:24:41 +01:00
Dmitri Gribenko 1a9cc47f94 Fixed a -Wunused-variable warning in no-assert builds 2020-04-29 09:12:47 +02:00
Craig Topper 52a6d47ada [X86] Add initialize function for X86FixupSetCC so that it will show up in print-after-all. 2020-04-28 23:31:34 -07:00
David Blaikie f6d5320ebe WebAssemblyExceptionInfo::Exceptions: Use unique_ptr to simplify memory management 2020-04-28 17:33:46 -07:00
David Blaikie eadb596730 InstrCOPYReplacer::Converters: Use unique_ptr to own values to simplify memory management 2020-04-28 17:33:46 -07:00
Stanislav Mekhanoshin 26777ad7a0 [AMDGPU] Adapt GCNRegBankReassign for 16 bit subregs
It allows the pass to analyze 16-bit subregs without crashing when they
appear in the instructions. At the same time it does not attempt
to reassign these. It can still correctly identify register
banks to let larger registers be reassigned.

More work will be needed here when real instructions will use
these registers and more tests as well.

Differential Revision: https://reviews.llvm.org/D78772
2020-04-28 16:16:04 -07:00
Stanislav Mekhanoshin 8a30460697 [AMDGPU] Define AGPR subregs
These are only needed as VGPR counterpart.

Differential Revision: https://reviews.llvm.org/D78597
2020-04-28 15:30:43 -07:00
Jessica Paquette e0dbeb2173 Fix buildbot after 9f31446c
Add missing ifndef to make release builds happy.

Example failure: http://lab.llvm.org:8011/builders/fuchsia-x86_64-linux/builds/4006/steps/ninja-build/logs/stdio
2020-04-28 15:19:17 -07:00
Craig Topper 446a3be8f1 [X86] Add PACK instructions to hasUndefRegUpdate so the BreakFalseDeps pass will reassign an undef second source to match the first source
We generate PACK instructions with an undef second source when we are truncating from a 128-bit vector to something narrower and we don't care about the upper bits of the vector register. The register allocation process will always assign untied undef uses to xmm0. This creates a false dependency on xmm0.

By adding these instructions to hasUndefRegUpdate, we can get the BreakFalseDeps pass to reassign the source to match the other input. Normally this interface is used for instructions that might need an xor inserted to break the dependency. But the pass also has a heuristic that tries to use the same register as other sources. That should always be possible for these instructions so we'll never trigger the xor dependency break.

Differential Revision: https://reviews.llvm.org/D79032
2020-04-28 15:11:32 -07:00
Stanislav Mekhanoshin 46a75436f8 [AMDGPU] Define special SGPR subregs
These are used in SReg_32 and when we start to use SGPR_LO16
there will be complaints that not all registers in RC support
all subreg indexes. For now it is NFC.

Unused regunits are reserved so that verifier does not complain
about missing phys reg live-ins.

Differential Revision: https://reviews.llvm.org/D78591
2020-04-28 14:57:46 -07:00
Jessica Paquette 9f31446c99 [AArch64][GlobalISel] Generalize logic for promoting copies
Generalize the 16-bit FPR to 32-bit GPR logic to work for all cases where
destination size is bigger than source size.

Also fixed CheckCopy() always returning true instead of the result of
isValidCopy().

Differential Revision: https://reviews.llvm.org/D77530

Patch by tambre (Raul Tambre)
2020-04-28 14:56:08 -07:00
Stanislav Mekhanoshin 395d93358e Revert "[AMDGPU] Define special SGPR subregs"
This reverts commit 1baaa080e0.
2020-04-28 13:53:15 -07:00
Stanislav Mekhanoshin 1baaa080e0 [AMDGPU] Define special SGPR subregs
These are used in SReg_32 and when we start to use SGPR_LO16
there will be complaints that not all registers in RC support
all subreg indexes. For now it is NFC.

Unused regunits are reserved so that verifier does not complain
about missing phys reg live-ins.

Differential Revision: https://reviews.llvm.org/D78591
2020-04-28 13:34:24 -07:00
Sean Fertile 2a3cf5e583 [PowerPC][AIX] Pass ByVal formal args that span registers and stack.
Implement passing of ByVal formal arguments when the argument is passed
partly in the argument registers, with the remainder of the argument
passed on the stack.

Differential Revision: https://reviews.llvm.org/D78515
2020-04-28 14:57:14 -04:00
Craig Topper 59b9e6fe76 [X86] Update costs for truncates from less than 128-bit vectors to vXi1 on pre-avx512 targets
vXi1 types are legalized by promoting, but the narrow vectors
are legalized by widening. This results in some truncates turning
into any_extends.
2020-04-28 11:35:41 -07:00
Jessica Paquette 2af31b3b65 [AArch64][GlobalISel] Select immediate forms of compares by wiggling constants
Similar to code in `getAArch64Cmp` in AArch64ISelLowering.

When we get a compare against a constant, sometimes, that constant isn't valid
for selecting an immediate form.

However, sometimes, you can get a valid constant by adding 1 or subtracting 1,
and updating the condition code.

This implements the following transformations when valid:

- x slt c => x sle c - 1
- x sge c => x sgt c - 1
- x ult c => x ule c - 1
- x uge c => x ugt c - 1

- x sle c => x slt c + 1
- x sgt c => x sge c + 1
- x ule c => x ult c + 1
- x ugt c => x uge c + 1

Valid meaning the constant doesn't wrap around when we fudge it, and the result
gives us a compare which can be selected into an immediate form.
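
A scalar C++ sketch of why the fudging is valid, assuming the constant
does not wrap:

```
#include <cassert>
#include <cstdint>

// x < c and x <= c - 1 are the same predicate whenever c - 1 does not
// wrap, which is exactly the "valid" condition above. Wiggling c this way
// can turn an unencodable compare immediate into an encodable one.
bool sltViaSle(int32_t X, int32_t C) {
  assert(C != INT32_MIN && "C - 1 would wrap; transform invalid");
  return X <= C - 1; // == (X < C)
}
```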

This also moves `getImmedFromMO` higher up in the file so we can use it.

Differential Revision: https://reviews.llvm.org/D78769
2020-04-28 11:35:01 -07:00
Craig Topper 0de7ddbfb0 [X86] Handle more cases in combineAddOrSubToADCOrSBB.
This adds support for

X + SETAE --> sbb X, -1
X - SETAE --> adc X, -1
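
In source form, the first pattern is adding the result of an unsigned >=
comparison; a C++ sketch:

```
#include <cstdint>

// With the carry flag already set by the compare, X + (A >= B) can be
// emitted as `sbb X, -1` instead of a separate setcc + add.
uint64_t addGe(uint64_t X, uint64_t A, uint64_t B) {
  return X + (A >= B); // X + SETAE
}
```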

Fixes PR45700

Differential Revision: https://reviews.llvm.org/D78984
2020-04-28 10:39:39 -07:00
Craig Topper d42192c50f [X86][CostModel] Correct the costs for truncate to a mask register with avx512
I've modified isTruncateFree to get an accurate cost for types that need to be split. I'm planning to look into fixing it for all vectors, but need more cost cleanups first.

Differential Revision: https://reviews.llvm.org/D78973
2020-04-28 10:39:36 -07:00
Francis Visoiu Mistrih e770153865 [AArch64] Add support for -ffixed-x30
Add support for reserving LR in:

* the driver through `-ffixed-x30`
* cc1 through `-target-feature +reserve-x30`
* the backend through `-mattr=+reserve-x30`
* a subtarget feature `reserve-x30`

the same way we're doing for the other registers.
2020-04-28 08:48:28 -07:00
Nick Desaulniers 1b9fdec1f6 [TII] remove overrides of isUnpredicatedTerminator
Summary:
They all match the base implementation in
TargetInstrInfo::isUnpredicatedTerminator.

Follow up to D62749.

Reviewers: echristo, MaskRay, hfinkel

Reviewed By: echristo

Subscribers: wuzish, nemanjai, hiraditya, kbarton, llvm-commits, srhines

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78976
2020-04-28 08:47:28 -07:00
David Green 1084b32339 [ARM] Always replace FP16 bitcasts with VMOVhr or VMOVrh
This changes the logic for lowering fp16 bitcasts to always produce
either a VMOVhr or a VMOVrh, instead of only trying to do it with
certain surrounding nodes. To preserve the same optimisations, demanded-bits
and known-bits information has been added for them.

Differential Revision: https://reviews.llvm.org/D78587
2020-04-28 16:12:53 +01:00
Krzysztof Parzyszek 25a4b1904c Handle part-word LL/SC in atomic expansion pass
Differential Revision: https://reviews.llvm.org/D77213
2020-04-28 10:07:39 -05:00
Ng Zhi An 500b4ad5f4 [PowerPC] Fix downcast from nullptr for target streamer
getTargetStreamer() might return null (e.g. when running the inlined-strings.ll test);
downcasting to a reference would then be wrong. This is detectable with -fsanitize=null.

Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D78686
2020-04-28 09:20:10 +00:00
Sam Parker e9c9329aa4 [TTI] Add TargetCostKind argument to getUserCost
There are several different types of cost that TTI tries to provide
explicit information for: throughput, latency, code size along with
a vague 'intersection of code-size cost and execution cost'.

The vectorizer is a keen user of RecipThroughput and there's at least
'getInstructionThroughput' and 'getArithmeticInstrCost' designed to
help with this cost. The latency cost has a single use and a single
implementation. The intersection cost appears to cover most of the
rest of the API.

getUserCost is explicitly called from within TTI when the user has
been explicit in wanting the code size (also only one use) as well
as a few passes which are concerned with a mixture of size and/or
a relative cost. In many cases these costs are closely related, such
as when multiple instructions are required, but one evident diverging
cost in this function is for div/rem.

This patch adds an argument so that the cost required is explicit,
so that we can make the important distinction when necessary.
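
A hedged sketch of the resulting API shape (names and signatures are
approximate, not the exact LLVM declarations):

```
// Callers now state which cost they want instead of receiving an
// unspecified blend of size and speed.
enum class TargetCostKind {
  RecipThroughput, // reciprocal throughput: what the vectorizer wants
  Latency,         // instruction latency
  CodeSize         // code size: what unrolling/inlining limits want
};

struct Instruction;

struct TTISketch {
  int getUserCost(const Instruction *I, TargetCostKind Kind) const {
    // A real target models each kind separately; div/rem is a case where
    // size and throughput costs genuinely diverge.
    switch (Kind) {
    case TargetCostKind::RecipThroughput: return 1;
    case TargetCostKind::Latency:         return 2;
    case TargetCostKind::CodeSize:        return 1;
    }
    return 1;
  }
};
```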

Differential Revision: https://reviews.llvm.org/D78635
2020-04-28 08:57:45 +01:00
Kazushi (Jam) Marukawa 3c80478d73 [VE] Update branch instructions
Summary:
Change all mnemonics to match the assembly instructions, to simplify the
mnemonic naming rules. This time, update all branch instructions. This also
changes the code to use the %s10 register consistently.

Differential Revision: https://reviews.llvm.org/D78889
2020-04-28 09:41:01 +02:00
Kazushi (Jam) Marukawa 0314e8980f [VE] Support floating point immediate values
Summary:
Add simm7fp/mimmfp to represent floating point immediate values.
Also clean up the multiclasses that define floating point arithmetic
instructions so they handle simm7fp/mimmfp operands, and add several
regression tests for the new operands.

Differential Revision: https://reviews.llvm.org/D78887
2020-04-28 09:36:10 +02:00
Chen Zheng 45d92806ea [PowerPC] use inst-level fast-math-flags to drive MachineCombiner
Currently, the PowerPC target uses the function-scope UnsafeFPMath
option to drive the MachineCombiner pass.

This is inaccurate in two ways:
1: the scope is too broad. The MachineCombiner pass only requires
   instruction-level flags, not a function-scope option.
2: the floating-point flags are too coarse. The MachineCombiner pass
   only requires the flags reassoc and nsz.

Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D78183
2020-04-28 03:31:12 -04:00
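
A source-level sketch of the distinction, assuming a clang recent enough to support block-scoped FP pragmas:

```
// Sketch: grant reassociation on just these operations, rather than
// compiling the whole function with an unsafe-FP-math option.
float sum4(float A, float B, float C, float D) {
#pragma clang fp reassociate(on)
  return (A + B) + (C + D); // the combiner may rebalance this tree
}
```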
Haojian Wu b73290be9f Fix the -Wunused-variable warning. 2020-04-28 08:44:15 +02:00
Craig Topper a58b62b4a2 [IR] Replace all uses of CallBase::getCalledValue() with getCalledOperand().
This method has been commented as deprecated for a while. Remove
it and replace all uses with the equivalent getCalledOperand().

I also made a few cleanups in here. For example, removing uses of
getElementType on a pointer where we could just use getFunctionType
from the call.

Differential Revision: https://reviews.llvm.org/D78882
2020-04-27 22:17:03 -07:00
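
A minimal sketch of the modernized pattern; the helper function is illustrative, while the CallBase accessors are the ones named above:

```
// Sketch: take the callee and its signature from the call itself,
// rather than peeling the pointee type off the callee pointer.
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/InstrTypes.h"
using namespace llvm;

static FunctionType *calleeSignature(CallBase &CB) {
  Value *Callee = CB.getCalledOperand(); // replaces getCalledValue()
  (void)Callee;
  return CB.getFunctionType();           // no getElementType() on a pointer
}
```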
Kang Zhang 4bb0a1cb70 [PowerPC] Fix the liveins for ppc-expand-isel pass
Summary:
In the ppc-expand-isel pass, we use stepForward() to update the
liveins. This function is not recommended because it needs accurate
kill info.

This patch uses the function computeAndAddLiveIns() to update the
liveins; it is the recommended method and fixes the liveins bug in the
ppc-expand-isel pass.

Reviewed By: efriedma, lkail

Differential Revision: https://reviews.llvm.org/D78657
2020-04-28 03:22:48 +00:00
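
A sketch of the recommended approach; not a buildable pass on its own, but computeAndAddLiveIns is the helper from llvm/CodeGen/LivePhysRegs.h:

```
// Sketch: recompute the live-in list for a block a pass has created or
// split, instead of stepping forward over instructions with kill flags.
#include "llvm/CodeGen/LivePhysRegs.h"
using namespace llvm;

static void refreshLiveIns(MachineBasicBlock &MBB) {
  LivePhysRegs LiveRegs;
  computeAndAddLiveIns(LiveRegs, MBB); // no accurate kill info required
}
```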
Nick Desaulniers bc7f3240e6 [X86] remove derived method w/ same impl as base
Summary:
While looking into issues with IfConverter, I noticed that
X86InstrInfo::isUnpredicatedTerminator matched the base implementation
it overrode, TargetInstrInfo::isUnpredicatedTerminator.

Reviewers: craig.topper, hfinkel, MaskRay, echristo

Reviewed By: MaskRay, echristo

Subscribers: hiraditya, llvm-commits, srhines

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D62749
2020-04-27 17:41:00 -07:00
Craig Topper 37ec709233 [X86][CostModel] Update truncate costs for some narrow vector cases to match their wider version.
This updates v4i16->v4i8 with SSE2 to match v8i16->v8i8, and updates
v2i16->v2i8 and v4i16->v4i8 with SSE4.1 to match v8i16->v8i8.
2020-04-27 13:47:48 -07:00
Victor Huang 64d44ae7c2 [PowerPC][Future] Remove "unskipableSimplifyCode()" in PPCMIPeephole.cpp
"unskipableSimplifyCode()" was added to handle unsafe BL8_NOTOC instruction
when TOC was not completely removed. The function is not needed after confirming
TOC pointer is not used in a function that uses PC-Relative addressing.

Differential Revision: https://reviews.llvm.org/D78517
2020-04-27 14:57:02 -05:00
Wei Mi 68d2301e12 Recommit "Generate Callee Saved Register (CSR) related cfi directives
like .cfi_restore"

Insert .cfi_offset/.cfi_register when IncomingCSRSaved of the current block
is larger than OutgoingCSRSaved of its previous block.

Original commit message:
https://reviews.llvm.org/D42848 only handled CFA-related cfi directives but
didn't handle CSR-related cfi. This patch adds the CSR part, reusing the
framework created in D42848. For each basic block, the patch tracks which
CSRs have been saved at its CFG predecessors' exits, and compares that set
with the set at the exit of its previous basic block (the block laid out
before the current block). If the saved CSR set at the previous block's
exit is larger, .cfi_restore directives are inserted.

The patch also generates proper .cfi_restore directives in the epilogue to
make sure the saved CSR set is consistent across the incoming edges of each
block.

Differential Revision: https://reviews.llvm.org/D74303
2020-04-27 12:46:58 -07:00
Craig Topper bdbbed115f [X86][CostModel] Update costs for vector truncate with avx512f/avx512bw.
All avx512 truncate instructions except vXi64->vXi32 are 2 uops on port 5,
so raise their costs to 2, except when we have an earlier, faster sequence
like pshufb for 128-bit input vectors.

Add a lower cost of 3 for v16i16->v16i8 with avx512f, where we can extend to
v16i32 then truncate, and a cost of 2 for avx512bw (with and without
avx512vl), where we can use vpmovwb with either a ymm or zmm input. Both of
these beat masking, splitting, and using packuswb, which is our avx/avx2
codegen.
2020-04-27 12:00:24 -07:00
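
For illustration, the single-instruction form that makes the avx512bw case cheap; a sketch assuming AVX512BW+AVX512VL for the ymm-source variant:

```
// Sketch: a v16i16 -> v16i8 truncate as one vpmovwb with AVX-512BW.
#include <immintrin.h>

__m128i truncate_words_to_bytes(__m256i V) {
  return _mm256_cvtepi16_epi8(V); // vpmovwb ymm -> xmm
}
```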
Stefan Pintilie 1354a03e74 [PowerPC][Future] Implement PC Relative Tail Calls
Tail Calls were initially disabled for PC Relative code because it was not safe
to make certain assumptions about the tail calls (namely that all compiled
functions no longer used the TOC pointer in R2). However, once all of the
TOC pointer references have been removed it is safe to tail call everything
that was tail called prior to the PC relative additions as well as a number of
new cases.
For example, it is now possible to tail call indirect functions, as there is
no need to save and restore the TOC pointer for indirect functions if the
caller is marked as may-clobber-R2 (st_other=1). For the same reason, it is
now also possible to tail call external functions.

Differential Revision: https://reviews.llvm.org/D77788
2020-04-27 12:55:08 -05:00
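
A sketch of the call shapes the change newly permits as tail calls under PC-relative addressing; the function names are illustrative:

```
// Sketch: indirect and external callees in tail position, which
// previously forced a regular call so the TOC pointer in R2 could be
// saved and restored.
extern int external_callee(int);

int call_indirect(int (*Fn)(int), int X) {
  return Fn(X);              // indirect tail call
}

int call_external(int X) {
  return external_callee(X); // external tail call
}
```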
Craig Topper 5eff75d86a [X86][CostModel] Improve costs for fp_to_uint/fp_to_sint for vXi8/vXi16/v2i32 results.
Differential Revision: https://reviews.llvm.org/D78893
2020-04-27 10:35:15 -07:00
Fangrui Song 3c9c9c1768 [llvm-objdump] Print target address with evaluateMemoryOperandAddress()
D63847 added `MCInstrAnalysis::evaluateMemoryOperandAddress()`. This patch
leverages the feature to print the target addresses for evaluable instructions.

```
-400a: movl 4080(%rip), %eax
+400a: movl 4080(%rip), %eax  # 5000 <data1>
```

This patch also deletes `MIA->isCall(Inst) || MIA->isUnconditionalBranch(Inst) || MIA->isConditionalBranch(Inst)`,
which was used to guard `MCInstrAnalysis::evaluateBranch()`.

Reviewed By: jhenderson, skan

Differential Revision: https://reviews.llvm.org/D78776
2020-04-27 09:43:51 -07:00
Jay Foad 498795829b [AMDGPU] Remove odd blank line in debug output. 2020-04-27 17:10:36 +01:00
David Green 61b8af0375 [ARM] Allow fma in tail predicated loops
There are some intrinsics like this that currently block tail predication
but should be fine. This allows fma through, as it is the one I ran into.
There may be others that need the same treatment, but I've only handled
this one here.

Differential Revision: https://reviews.llvm.org/D78385
2020-04-27 15:32:47 +01:00
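
A sketch of the kind of loop this unblocks for MVE tail predication; the function is illustrative:

```
// Sketch: an fma inside a simple vectorizable loop; the intrinsic no
// longer prevents the loop from being tail-predicated.
#include <cmath>

void fma_loop(float *A, const float *B, const float *C, int N) {
  for (int I = 0; I < N; ++I)
    A[I] = std::fmaf(B[I], C[I], A[I]);
}
```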
Simon Pilgrim d9e174dbf7 [X86][SSE] getFauxShuffle - account for PEXTW/PEXTB implicit zero-extension
The insert(truncate/extend(extract(vec0,c0)),vec1,c1) case in rGacbc5ede99 wasn't combining the 'mineltsize' with the src vector elt size, which may be smaller due to implicit extension during extraction.

Reduced from test case provided by @mstorsjo
2020-04-27 12:46:50 +01:00
David Green 7a076418dd [ARM] Replace hasNoSchedulingInfo with UnsupportedFeatures in the A57 schedule
hasNoSchedulingInfo should be used for pseudos and other instructions
that are never expected to be scheduled. This removes the flag from the new
ARM instructions and instead fixes the A57 schedule by marking the related
architecture features as unsupported.
2020-04-27 10:13:29 +01:00
David Green 8807139026 [ARM] Only produce qadd8b under hasV6Ops
When compiling for an arm5te cpu from clang, the +dsp attribute is set.
This meant we could try to generate qadd8 instructions where we would
end up having no pattern. I've changed the condition here to
hasV6Ops && hasDSP, which is what other parts of ARMISelLowering use
for similar instructions.

Fixed PR45677.

Differential Revision: https://reviews.llvm.org/D78877
2020-04-27 10:13:29 +01:00
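
For reference, a scalar sketch of qadd8's per-lane semantics; illustrative only, not what the backend pattern-matches:

```
// Sketch: 4x lanewise saturating i8 adds, which is what a qadd8 computes;
// with +dsp but without v6 ops there is no instruction to select.
#include <algorithm>
#include <cstdint>

void qadd8_like(int8_t *D, const int8_t *A, const int8_t *B) {
  for (int I = 0; I < 4; ++I) {
    int Sum = A[I] + B[I];
    D[I] = static_cast<int8_t>(std::clamp(Sum, -128, 127)); // saturate
  }
}
```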
QingShan Zhang 2957fa0cd1 [NFC][DAGCombine] Adding three helper functions and change the getNegatedExpression to negateExpression
This is an NFC patch for D77319. The idea is to hide getNegatibleCost inside
getNegatedExpression(), having it return null if the cost is expensive, and to
add some helper functions for ease of use. The old getNegatedExpression is
renamed to negateExpression to avoid the semantic conflict.

Reviewed By: RKSimon

Differential revision: https://reviews.llvm.org/D78291
2020-04-27 04:11:42 +00:00
Craig Topper fc02d9f3c6 [X86] Add cost table entry for v2i32->v2f64 fp_to_uint with avx512.
We're currently getting this from the default implementation, but I don't
like how the cost model came to this answer and I might be making some
changes there.
2020-04-26 19:59:01 -07:00
Simon Pilgrim acbc5ede99 [X86][SSE] getFauxShuffle - support insert(truncate/extend(extract(vec0,c0)),vec1,c1) shuffle patterns at the byte level
Follow-up to the PR45604 fix at rGe71dd7c011a3, where we disabled most of these cases.

By creating the shuffle at the byte level we can handle any extension/truncation as long as we track how small the scalar got and assume that the upper bytes will need to be zero.
2020-04-26 15:31:01 +01:00
Simon Pilgrim 33f043cc9f X86ISelDAGToDAG.cpp - remove unnecessary includes. NFC.
The X86-specific headers already include these, so we don't need to duplicate them.
2020-04-26 14:50:53 +01:00