Commit Graph

Jay Foad 6461eadf8f [AMDGPU] Handle multiple base operands in shouldClusterMemOps
Summary:
This is in preparation for getMemOperandsWithOffset returning more base
operands.

Depends on D73454.

Reviewers: arsenm, rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73455
2020-01-27 14:45:21 +00:00
Jay Foad fcf5254fa7 [AMDGPU] Handle frame index base operands in memOpsHaveSameBasePtr
Summary:
This is in preparation for getMemOperandsWithOffset returning more base
operands.

Reviewers: arsenm, rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, arphaman, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73454
2020-01-27 14:45:21 +00:00
vpykhtin 4332f1a4c8 [AMDGPU] Fix GCN regpressure trackers for INLINEASM instructions.
Differential revision: https://reviews.llvm.org/D73338
2020-01-27 17:25:25 +03:00
Matt Arsenault 2a160ba5b0 GlobalISel: Reimplement widenScalar for G_UNMERGE_VALUES results
Only use shifts if the requested type exactly matches the source type,
and create sub-unmerges otherwise.
2020-01-27 06:18:26 -08:00
David Green 8a6b948eb5 [MVE] Fixup order of gather writeback intrinsic outputs
The MVE_VLDRWU32_qi_pre gather loads, like the other _pre/_post mve
loads returns the writeback as result 0, the value as result 1. The llvm
ir intrinsic seems to have this the other way around though, and so when
lowering from one to the other we need to switch the first two outputs.

I've also fixed up the types of _pre/_post on normal MVE loads. There we
were already getting the values the right way around, just not for the
types. I don't believe this was causing anything to go wrong, but it was
very confusing to read in the debug output.

Differential Revision: https://reviews.llvm.org/D73370
2020-01-27 14:08:06 +00:00
Sjoerd Meijer b567ff2fa0 [ARM][MVE] Tail-predication: support constant trip count
We had support for runtime trip count values, but not constants; this adds
support for those.

Also added a minor optimisation while I was at it: don't invoke Cleanup when
there's nothing to clean up.

Differential Revision: https://reviews.llvm.org/D73198
2020-01-27 11:05:26 +00:00
Sam Parker 6c2df5d14f [ARM][LowOverheadLoops] Don't ignore VCTP
When expanding the LoopStart, we try to remove the iteration count
calculation. However, if part of the calculation was also used to
calculate the number of elements we could end up deleting
instructions that were required to feed DLSTP/WLSTP.

Differential Revision: https://reviews.llvm.org/D73275
2020-01-27 10:59:12 +00:00
Petar Avramovic cbf03aee6d [MIPS GlobalISel] Select population count (popcount)
G_CTPOP is generated from llvm.ctpop.<type> intrinsics, clang generates
these intrinsics from __builtin_popcount and __builtin_popcountll.
Add lower and narrow scalar for G_CTPOP.
Lower G_CTPOP for MIPS32.
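
For illustration, a minimal C++ sketch of the narrow-scalar expansion described
above (hypothetical helper names, not the actual GlobalISel legalizer code):

```
#include <cstdint>

// A 32-bit count, standing in for the narrowed G_CTPOP pieces.
static int popcount32(uint32_t X) { return __builtin_popcount(X); }

// Narrow scalar for a 64-bit G_CTPOP: count each 32-bit half and add.
int popcount64(uint64_t X) {
  return popcount32(static_cast<uint32_t>(X)) +
         popcount32(static_cast<uint32_t>(X >> 32));
}
```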

Differential Revision: https://reviews.llvm.org/D73216
2020-01-27 09:59:50 +01:00
Petar Avramovic 8bc7ba5b9e [MIPS GlobalISel] Select count trailing zeros
The llvm.cttz.<type> intrinsic has an additional i1 argument, is_zero_undef,
which tells whether a zero first argument produces a defined result.
G_CTTZ is generated from llvm.cttz.<type> (<type> <src>, i1 false)
intrinsics, clang generates these intrinsics from __builtin_ctz and
__builtin_ctzll.
G_CTTZ_ZERO_UNDEF comes from llvm.cttz.<type> (<type> <src>, i1 true).
Clang generates such intrinsics as part of the expansion of __builtin_ffs
and __builtin_ffsll. It is also traditionally part of many algorithms that
are predicated on avoiding zero-value inputs.

Add narrow scalar (algorithm uses G_CTTZ_ZERO_UNDEF) for G_CTTZ.
Lower G_CTTZ and G_CTTZ_ZERO_UNDEF for MIPS32.
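
A minimal C++ sketch of that narrow-scalar algorithm, assuming 64-to-32-bit
narrowing (hypothetical helpers, not the legalizer code):

```
#include <cstdint>

// Zero-undef 32-bit count, standing in for G_CTTZ_ZERO_UNDEF.
static int cttz32_zero_undef(uint32_t X) { return __builtin_ctz(X); }

// Narrowed, zero-defined 64-bit G_CTTZ: scan the low half first and only
// fall back to the defined result when the whole value is zero.
int cttz64(uint64_t X) {
  uint32_t Lo = static_cast<uint32_t>(X);
  uint32_t Hi = static_cast<uint32_t>(X >> 32);
  if (Lo != 0)
    return cttz32_zero_undef(Lo);
  if (Hi != 0)
    return 32 + cttz32_zero_undef(Hi);
  return 64; // is_zero_undef == false: zero input has a defined result
}
```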

Differential Revision: https://reviews.llvm.org/D73215
2020-01-27 09:51:06 +01:00
Petar Avramovic 2b66d32f3f [MIPS GlobalISel] Select count leading zeros
The llvm.ctlz.<type> intrinsic has an additional i1 argument, is_zero_undef,
which tells whether a zero first argument produces a defined result.
MIPS clz instruction returns 32 for zero input.
G_CTLZ is generated from llvm.ctlz.<type> (<type> <src>, i1 false)
intrinsics, clang generates these intrinsics from __builtin_clz and
__builtin_clzll.
G_CTLZ_ZERO_UNDEF can also be generated from llvm.ctlz with true as the
second argument. It is also traditionally part of many algorithms that are
predicated on avoiding zero-value inputs.

Add narrow scalar for G_CTLZ (algorithm uses G_CTLZ_ZERO_UNDEF).
Lower G_CTLZ_ZERO_UNDEF and select G_CTLZ for MIPS32.
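
As a rough C++ sketch of the 64-to-32-bit narrowing (hypothetical helpers;
the high half is examined first, the mirror image of cttz):

```
#include <cstdint>

// MIPS-like clz: returns 32 for zero input, matching zero-defined G_CTLZ.
static int clz32(uint32_t X) { return X ? __builtin_clz(X) : 32; }

// Narrowed 64-bit G_CTLZ built from the 32-bit form.
int clz64(uint64_t X) {
  uint32_t Hi = static_cast<uint32_t>(X >> 32);
  uint32_t Lo = static_cast<uint32_t>(X);
  return Hi ? clz32(Hi) : 32 + clz32(Lo);
}
```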

Differential Revision: https://reviews.llvm.org/D73214
2020-01-27 09:43:38 +01:00
Roman Lebedev 76fcf900d5
[X86][BdVer2] Polish LEA instruction scheduling info
Based on exhaustive llvm-exegesis measurements.
There may still be some imperfections for LEA16r/LEA32r.

Much like was observed in D68646, i'm also measuring some outliers
with some specific registers.
2020-01-26 22:17:27 +03:00
Simon Pilgrim fa19d67a2a [X86][AVX] Extend combineCommutableSHUFP to handle v8f32 and v16f32 commutable shufps patterns 2020-01-26 19:04:12 +00:00
Simon Pilgrim 1a81b296cd [X86][SSE] combineCommutableSHUFP - permilps(shufps(load(),x)) --> permilps(shufps(x,load()))
Pull out combineTargetShuffle code added in rG3fd5d1c6e7db into a helper function and extend it to handle shufps(shufps(load(),x),y) and shufps(y,shufps(load(),x)) cases as well.
2020-01-26 14:36:23 +00:00
Maheaha Shivamallappa 66f93071cd AMDGPU/GlobalISel: Clean-up code around ISel for Intrinsics.
Summary:
A minor code clean-up around ISel for intrinsic llvm.amdgcn.end.cf()

Reviewers: arsenm, mshivama

Reviewed By: arsenm

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73358
2020-01-26 14:09:31 +05:30
Craig Topper 3fdd435a4b [X86] Use a macro to convert X86ISD names to strings in getTargetNodeName.
Every case in the switch had a string version of itself. Two
of them had a typo that used : instead of ::.

By using a macro we can automate the string creation and avoid
the possibility of typos like this.

This is similar to what is done on the AMDGPU target.
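
A self-contained C++ sketch of the stringification pattern (the enumerators
here are stand-ins; the real switch covers every X86ISD opcode):

```
namespace X86ISD {
enum NodeType { FMAX, FMIN }; // stand-in enumerators
}

// The preprocessor's # operator turns the enumerator into its string, so the
// name and the string can never drift apart (no more ':' vs '::' typos).
#define NODE_NAME_CASE(NODE)                                                   \
  case X86ISD::NODE:                                                           \
    return "X86ISD::" #NODE;

const char *getTargetNodeName(unsigned Opcode) {
  switch (Opcode) {
    NODE_NAME_CASE(FMAX)
    NODE_NAME_CASE(FMIN)
  }
  return nullptr; // unknown opcode
}
#undef NODE_NAME_CASE
```
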
2020-01-25 18:27:29 -08:00
Tom Stellard cb297050bb AMDGPU/SILoadStoreOptimizer: Fix uninitialized variable error
This was introduced by 86c944d790 and
caught by the sanitizer-x86_64-linux-fast bot.
2020-01-24 21:53:05 -08:00
Tom Stellard 86c944d790 AMDGPU/SILoadStoreOptimizer: Improve merging of out of order offsets
Summary:
This improves merging of sequences like:

store a, ptr + 4
store b, ptr + 8
store c, ptr + 12
store d, ptr + 16
store e, ptr + 20
store f, ptr

Prior to this patch the basic block was scanned in order to find instructions
to merge and the above sequence would be transformed to:

store4 <a, b, c, d>, ptr + 4
store e, ptr + 20
store f, ptr

With this change, we now sort all the candidate merge instructions by their offset,
so instructions are visited in offset order rather than in the order they appear
in the basic block.  We now transform this sequence into:

store4 <f, a, b, c>, ptr
store2 <d, e>, ptr + 16

Another benefit of this change is that since we have sorted the mergeable lists
by offset, we can easily check if an instruction is mergeable by checking the
offset of the instruction that comes before or after it in the sorted list.
Once we determine an instruction is not mergeable we can remove it from the list
and avoid having to do the more expensive mergeability checks.
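
A standalone C++ sketch of the sort-then-merge idea (hypothetical types; the
real pass tracks registers, widths and aliasing, not just offsets):

```
#include <algorithm>
#include <cstdio>
#include <vector>

struct MemOp { int Offset; }; // stand-in for a 4-byte store candidate

void mergeByOffset(std::vector<MemOp> Ops) {
  // Sort candidates so contiguous accesses become neighbors in the list.
  std::sort(Ops.begin(), Ops.end(),
            [](const MemOp &A, const MemOp &B) { return A.Offset < B.Offset; });
  for (size_t I = 0; I < Ops.size();) {
    // Greedily extend a run of contiguous dwords, up to a store4.
    size_t Run = 1;
    while (I + Run < Ops.size() && Run < 4 &&
           Ops[I + Run].Offset == Ops[I].Offset + (int)(Run * 4))
      ++Run;
    std::printf("store%zu at ptr + %d\n", Run, Ops[I].Offset);
    I += Run;
  }
}

int main() {
  // The example above: offsets 4, 8, 12, 16, 20, 0.
  // Prints: store4 at ptr + 0, then store2 at ptr + 16.
  mergeByOffset({{4}, {8}, {12}, {16}, {20}, {0}});
}
```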

Reviewers: arsenm, pendingchaos, rampitec, nhaehnle, vpykhtin

Reviewed By: arsenm, nhaehnle

Subscribers: kerbowa, merge_guards_bot, kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65966
2020-01-24 19:45:56 -08:00
@justice_adams (Justice Adams) daee63f974 [SelectionDag] Updated FoldConstantArithmetic method signature in preparation for merge with FoldConstantVectorArithmetic
Updated FoldConstantArithmetic method signature to match that of
FoldConstantVectorArithmetic in preparation for merging the two
functions together

https://bugs.llvm.org/show_bug.cgi?id=36544

This is the first step in combining the various
FoldConstantArithmetic and FoldConstantVectorArithmetic
functions into one FoldConstantArithmetic function.

Differential Revision: https://reviews.llvm.org/D72870
2020-01-24 18:00:58 -05:00
Craig Topper 2c1decc040 [X86] Break the loop in LowerReturn into 2 loops. NFCI
I believe for STRICT_FP I need to use a STRICT_FP_EXTEND for the extension to f80 when returning f32/f64 in 32-bit mode with SSE enabled. The STRICT_FP_EXTEND node requires a Chain. I need to get that node onto the chain before any CopyToRegs are emitted. This is because all the CopyToRegs are glued and chained together. So I can't put a STRICT_FP_EXTEND on the chain between the glued nodes without also gluing the STRICT_FP_EXTEND.

This patch moves all the extend creation to a first pass and then creates the copytoregs and fills out RetOps in a second pass.

Differential Revision: https://reviews.llvm.org/D72665
2020-01-24 14:44:38 -08:00
Roman Lebedev 70cbf8c71c
[X86] Make `llc --help` output readable again
A long `cl::value_desc()` is added right after the flag name,
before the `cl::desc()` column. Thus the `cl::desc()` column,
for all flags, is padded to the right,
which makes the output unreadable.
2020-01-25 01:43:52 +03:00
Heejin Ahn 65eb11306e [WebAssembly] Update bleeding-edge CPU features
Summary:
This adds bulk memory and tail call to the "bleeding-edge" CPU, since their
implementation in LLVM/clang seems mostly complete.

Reviewers: tlively

Subscribers: dschuff, sbc100, jgravelle-google, sunfish, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D73322
2020-01-24 14:27:35 -08:00
Heejin Ahn 764f4089e8 [WebAssembly] Add reference types target feature
Summary:
This adds the reference types target feature. This does not enable any
more functionality in LLVM/clang for now, but this is necessary to embed
the info in the target features section, which is used by Binaryen and
Emscripten. It turned out that after D69832 `-fwasm-exceptions` crashed
because we didn't have the reference types target feature.

Reviewers: tlively

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D73320
2020-01-24 14:26:27 -08:00
Matt Arsenault 3b93945587 AMDGPU/GlobalISel: Select wqm, softwqm and wwm intrinsics 2020-01-24 13:06:44 -08:00
Matt Arsenault 87c46a3129 AMDGPU: Don't error on ds.ordered intrinsic in function
These should be assumed to be called from a compute context. Also
don't use a 2 entry switch over constants.
2020-01-24 13:06:44 -08:00
Stanislav Mekhanoshin be8e38cbd9 Correct NumLoads in clustering
The scheduler passes the NumLoads argument to shouldClusterMemOps()
as one less than the actual cluster length, so for 2 instructions
it will pass just 1. Correct this number.

This is NFC for in tree targets.

Differential Revision: https://reviews.llvm.org/D73292
2020-01-24 12:45:28 -08:00
Matt Arsenault 84e035d8f1 AMDGPU: Don't check constant address space for atomic stores
We define a separate list for storable address spaces. This saves an
entry in the matcher table address space list.
2020-01-24 12:15:09 -08:00
Stanislav Mekhanoshin 555d8f4ef5 [AMDGPU] Bundle loads before post-RA scheduler
We are relying on artificial DAG edges inserted by the
MemOpClusterMutation to keep loads and stores together in the
post-RA scheduler. This does not always work, since it
allows a completely independent instruction to be scheduled in the
middle of the cluster.

Removed the DAG mutation and added a pass to bundle already-clustered
instructions. These bundles are unpacked before the memory legalizer,
both because it does not work with bundles and because this allows
waitcounts to be inserted in the middle of a store cluster.

Removing artificial edges also allows a more relaxed scheduling.

Differential Revision: https://reviews.llvm.org/D72737
2020-01-24 11:33:38 -08:00
Stanislav Mekhanoshin 44b865fa7f [AMDGPU] Allow narrowing multi-dword loads
Currently the backend allows only a little load narrowing, for fear
it will produce sub-dword ext loads. However, we can always allow
narrowing if we are shrinking one multi-dword load to another
multi-dword load.

In particular, we were unable to reduce s_load_dwordx8 into
s_load_dwordx4 if an identity shuffle was used to extract the
low 4 dwords.

Differential Revision: https://reviews.llvm.org/D73133
2020-01-24 11:03:41 -08:00
Austin Kerbow c226646337 Resubmit: [DA][TTI][AMDGPU] Add option to select GPUDA with TTI
Summary:
Enable the new divergence analysis by default for AMDGPU.

Resubmit with test updates since GPUDA was causing failures on Windows.

Reviewers: rampitec, nhaehnle, arsenm, thakis

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73315
2020-01-24 10:39:40 -08:00
Yuta Saito c5bd3d0726 Support Swift calling convention for WebAssembly targets
This adds basic support for the Swift calling convention with WebAssembly
targets.

Reviewed By: dschuff

Differential Revision: https://reviews.llvm.org/D71823
2020-01-24 10:30:46 -08:00
David Green b535aa405a [ARM] Use reduction intrinsics for larger than legal reductions
The codegen for splitting an llvm.vector.reduction intrinsic into parts
will be better than the codegen for the generic reductions. This will
only have a direct effect when vectorization factors are specified by
the user.

Also added tests to make sure the codegen for larger reductions is OK.

Differential Revision: https://reviews.llvm.org/D72257
2020-01-24 17:07:24 +00:00
Kazushi (Jam) Marukawa 0fca35c652 [VE] global variable isel patterns
Summary: Asm expr fixups, isel patterns and tests for global variable addresses.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D73355
2020-01-24 17:35:14 +01:00
Simon Pilgrim 3fd5d1c6e7 [X86][SSE] combineTargetShuffle - permilps(shufps(load(),x)) --> permilps(shufps(x,load()))
Moves lowerShuffleWithSHUFPS commutation code from rG30fcd29fe479 to catch cases during combine
2020-01-24 15:23:20 +00:00
Kazushi (Jam) Marukawa 08ebd8c79e [VE] aligned load/store isel patterns
Summary:
Aligned load/store isel patterns and tests for
i1/i8/16/32/64 (including extension and truncation) and fp32/64.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D73276
2020-01-24 15:16:54 +01:00
Victor Huang 5cee34013c [PowerPC][Future] Add prefixed instruction paddi to future CPU
Future CPU will include support for prefixed instructions.
These prefixed instructions are formed by a 4 byte prefix
immediately followed by a 4 byte instruction effectively
making an 8 byte instruction. The new instruction paddi
is a prefixed form of addi.

This patch adds paddi and all of the support required
for that instruction. The majority of the patch deals with
supporting the new prefixed instructions. The addition of
paddi is mainly to allow for testing.

Differential Revision: https://reviews.llvm.org/D72569
2020-01-24 07:27:25 -06:00
Simon Pilgrim 30fcd29fe4 [X86][SSE] lowerShuffleWithSHUFPS - commute '2*V1+2*V2 elements' mask if it allows a loaded fold
As mentioned on D73023.
2020-01-24 12:04:10 +00:00
Guillaume Chatelet 805c157e8a [Alignment][NFC] Deprecate Align::None()
Summary:
This is a follow up on https://reviews.llvm.org/D71473#inline-647262.
There's a caveat here that `Align(1)` relies on the compiler understanding of `Log2_64` implementation to produce good code. One could use `Align()` as a replacement but I believe it is less clear that the alignment is one in that case.
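
A minimal sketch of the underlying representation, assuming an Align-like type
that stores the log2 of the alignment (simplified from llvm::Align):

```
#include <cassert>
#include <cstdint>

struct Align {
  uint8_t ShiftValue = 0; // log2 of the alignment; 0 means alignment of 1
  Align() = default;
  explicit Align(uint64_t V) {
    assert(V > 0 && (V & (V - 1)) == 0 && "alignment must be a power of 2");
    while (V > 1) { V >>= 1; ++ShiftValue; } // stand-in for Log2_64
  }
  uint64_t value() const { return uint64_t(1) << ShiftValue; }
};
// Align(1) and Align() both denote alignment 1, but Align(1) relies on the
// compiler folding the log2 computation to produce equally good code.
```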

Reviewers: xbolva00, courbet, bollu

Subscribers: arsenm, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, jrtc27, atanasyan, jsji, Jim, kerbowa, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D73099
2020-01-24 12:53:58 +01:00
Kerry McLaughlin 4c4861b577 [AArch64][SVE] Add intrinsics for FFR manipulation
Summary:
Implements the following intrinsics:
  - llvm.aarch64.sve.setffr
  - llvm.aarch64.sve.rdffr
  - llvm.aarch64.sve.rdffr.z
  - llvm.aarch64.sve.wrffr

Reviewers: sdesmalen, efriedma, dancgr, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cameron.mcinally, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73097
2020-01-24 10:58:12 +00:00
Sam Parker ddbc077895 [NFC][ARM] Make some params members instead.
Add MachineLoopInfo and ReachingDefAnalysis as members of
LowOverheadLoop instead of passing them several times to different
methods.
2020-01-24 10:19:17 +00:00
Fangrui Song 253379a56f [PowerPC] Delete IsDarwin from AsmPrinter functions 2020-01-24 00:22:24 -08:00
Fangrui Song a50567a31c [PowerPC][MC] Delete PPCMCExpr::IsDarwin 2020-01-23 22:30:08 -08:00
Heejin Ahn 580d7838dd [WebAssembly] Fix resume-only case in Emscripten EH
Summary:
D72308 incorrectly assumed `resume` cannot exist without a `landingpad`,
which is not true. This sets `Changed` to true whenever we make changes
to a function, including creating a call to `__resumeException` within a
function without a landing pad.

Reviewers: tlively

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73308
2020-01-23 18:13:52 -08:00
Kai Wang 838a28e234 [RISCV] Scheduler description for the Rocket core
Pipeline scheduler model for the RISC-V Rocket micro-architecture using the
MIScheduler interface.  Support for both 32- and 64-bit Rocket cores is
implemented.

Differential revision: https://reviews.llvm.org/D68685
2020-01-23 19:36:47 -06:00
Fangrui Song 22467e2595 Add function attribute "patchable-function-prefix" to support -fpatchable-function-entry=N,M where M>0
Similar to the function attribute `prefix` (prefix data),
"patchable-function-prefix" inserts data (M NOPs) before the function
entry label.

-fpatchable-function-entry=2,1 (1 NOP before entry, 1 NOP after entry)
will look like:

```
  .type	foo,@function
.Ltmp0:               # @foo
  nop
foo:
.Lfunc_begin0:
  # optional `bti c` (AArch64 Branch Target Identification) or
  # `endbr64` (Intel Indirect Branch Tracking)
  nop

  .section  __patchable_function_entries,"awo",@progbits,get,unique,0
  .p2align  3
  .quad .Ltmp0
```

-fpatchable-function-entry=N,0 + -mbranch-protection=bti/-fcf-protection=branch has two reasonable
placements (https://gcc.gnu.org/ml/gcc-patches/2020-01/msg01185.html):

```
(a)         (b)

func:       func:
.Ltmp0:     bti c
  bti c     .Ltmp0:
  nop       nop
```

(a) needs no additional code. If the consensus is to go for (b), we will
need more code in AArch64BranchTargets.cpp / X86IndirectBranchTracking.cpp.

Differential Revision: https://reviews.llvm.org/D73070
2020-01-23 17:02:27 -08:00
Changpeng Fang 2531535984 AMDGPU: Implement FDIV optimizations in AMDGPUCodeGenPrepare
Summary:
RCP has an accuracy limit. If the FDIV's !fpmath requires high accuracy,
rcp may not meet the requirement. However, in DAG lowering the fpmath
information gets lost, and thus we may generate either inaccurate
rcp-related computation or slow code for fdiv.

This patch implements fdiv optimizations in AMDGPUCodeGenPrepare, which
knows the !fpmath exactly.

FastUnsafeRcpLegal: we determine whether it is legal to use rcp based on
unsafe-fp-math, fast math flags, denormals and the fpmath accuracy request.

RCP optimizations:
  1/x -> rcp(x) when fast unsafe rcp is legal or fpmath >= 2.5ULP with
                denormals flushed.
  a/b -> a*rcp(b) when fast unsafe rcp is legal.

Use fdiv.fast:
  a/b -> fdiv.fast(a, b) when the RCP optimization is not performed and
                         fpmath >= 2.5ULP with denormals flushed.
  1/x -> fdiv.fast(1, x) when the RCP optimization is not performed and
                         fpmath >= 2.5ULP with denormals.
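
A hedged C++ sketch of that decision table (hypothetical parameter names;
this is one reading of the rules above, not the pass's actual code):

```
enum class FDivLowering { Rcp, MulByRcp, FDivFast, Accurate };

FDivLowering chooseFDivLowering(bool FastUnsafeRcpLegal, bool NumeratorIsOne,
                                float FpMathULP, bool DenormalsFlushed) {
  if (NumeratorIsOne &&
      (FastUnsafeRcpLegal || (FpMathULP >= 2.5f && DenormalsFlushed)))
    return FDivLowering::Rcp;      // 1/x -> rcp(x)
  if (FastUnsafeRcpLegal)
    return FDivLowering::MulByRcp; // a/b -> a * rcp(b)
  if (FpMathULP >= 2.5f && (DenormalsFlushed || NumeratorIsOne))
    return FDivLowering::FDivFast; // a/b -> fdiv.fast(a, b)
  return FDivLowering::Accurate;   // keep the slow, exact expansion
}
```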

Reviewers: arsenm

Differential Revision: https://reviews.llvm.org/D71293
2020-01-23 16:57:43 -08:00
Amara Emerson 44b496758f [AArch64][GlobalISel] Remove duplicate attribute lookup code that was supposed to be cached. NFC.
When I cached this a long time ago it seems I forgot to remove the locally
declared variable of the same name in select(), so the caching wasn't having
any compile time benefit. Doh.
2020-01-23 15:50:08 -08:00
Amara Emerson 765b37abdf [AArch64][GlobalISel] Fallback if the +strict-align target feature is given.
Works around PR44246.
2020-01-23 14:45:29 -08:00
Matt Arsenault 86e5b56a7c AMDGPU/GlobalISel: Fix RegBankSelect for llvm.amdgcn.exp.compr
This wasn't updated for the immarg handling change. We really need a
verifier for this.
2020-01-23 13:30:46 -08:00
Matt Arsenault fac9941e57 AMDGPU: Fix ubsan error
Since register classes go up to 1024 bits (32 elements), all mask bits
are needed, and a 32-bit shift by 32 is illegal. We didn't have any
instructions theoretically using a 32-element VGPR before
d1dbb5e471
2020-01-23 15:05:47 -05:00
Danilo Carvalho Grael 58ceb81d31 [SVE] Add SVE2 patterns for unpredicated multiply instructions
Summary:
Add patterns for SVE2 unpredicated multiply instructions:
- mul, smulh, umulh, pmul, sqdmulh, sqrdmulh

Reviewers: sdesmalen, huntergr, efriedma, c-rhodes, kmclaughlin, rengolin

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72799
2020-01-23 13:20:53 -05:00
Simon Pilgrim 0ec25a0316 [X86] LowerRotate - early out for vector rotates by zero 2020-01-23 17:48:09 +00:00
Matt Arsenault 618fa77ae4 AMDGPU/GlobalISel: Select V_ADD3_U32/V_XOR3_B32
The other 3-op patterns should also be theoretically handled, but
currently there's a bug in the inferred pattern complexity.

I'm not sure what the error handling strategy should be for potential
constant bus violations. I think the correct strategy is to never
produce mixed SGPR and VGPR operands in a typical VOP instruction,
which will trivially avoid them. However, it's possible to still have
hand written MIR (or erroneously transformed code) with these
operands. When these fold, the restriction will be violated. We
currently don't have any verifiers for reg bank legality. For now,
just ignore the restriction.

It might be worth triggering a DAG fallback on verifier error.
2020-01-23 12:04:20 -05:00
Guillaume Chatelet 59f95222d4 [Alignment][NFC] Use Align with CreateAlignedStore
Summary:
This is patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet, bollu

Subscribers: arsenm, jvesely, nhaehnle, hiraditya, kerbowa, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D73274
2020-01-23 17:34:32 +01:00
Matt Arsenault dfec702290 AMDGPU: Check for other uses when looking through casted select
Fixes mesa regression on ext_transform_feedback-max-varyings
2020-01-23 11:31:24 -05:00
Krzysztof Parzyszek cc4b716a37 [Hexagon] Remove unused operand definitions: s10_0Imm and s10_6Imm 2020-01-23 09:38:54 -06:00
Kazushi (Jam) Marukawa 784204fd7e [VE] add, sub, left/right shift isel patterns
Summary: Add, sub, left/right shift isel patterns and tests for i32/i64 and fp32/fp64.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D73207
2020-01-23 16:00:37 +01:00
Simon Moll 9187073f3e [VE][NFC] re-write RR* isel class using null_frag
Summary: Re-write RR* using null_frag to avoid duplication in upcoming patches.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D73259
2020-01-23 15:17:45 +01:00
Jay Foad b482e1bfe2 [CodeGen] Make use of MachineInstrBuilder::getReg
Reviewers: arsenm

Subscribers: wdng, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73262
2020-01-23 13:38:13 +00:00
Guillaume Chatelet 279fa8e006 [Alignement][NFC] Deprecate untyped CreateAlignedLoad
Summary:
This is patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet

Subscribers: arsenm, jvesely, nhaehnle, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73260
2020-01-23 13:34:32 +01:00
Kerry McLaughlin aa0f37e14a [AArch64][SVE] Add first-faulting load intrinsic
Summary:
Implements the llvm.aarch64.sve.ldff1 intrinsic and DAG
combine rules for first-faulting loads with sign & zero extends

Reviewers: sdesmalen, efriedma, andwar, dancgr, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cameron.mcinally, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73025
2020-01-23 11:57:16 +00:00
Simon Tatham 4321c6af28 [ARM,MVE] Support immediate vbicq,vorrq,vmvnq intrinsics.
Summary:
Immediate vmvnq is code-generated as a simple vector constant in IR,
and left to the backend to recognize that it can be created with an
MVE VMVN instruction. The predicated version is represented as a
select between the input and the same constant, and I've added a
Tablegen isel rule to turn that into a predicated VMVN. (That should
be better than the previous VMVN + VPSEL: it's the same number of
instructions but now it can fold into an adjacent VPT block.)

The unpredicated forms of VBIC and VORR are done by enabling the same
isel lowering as for NEON, recognizing appropriate immediates and
rewriting them as ARMISD::VBICIMM / ARMISD::VORRIMM SDNodes, which I
then instruction-select into the right MVE instructions (now that I've
also reworked those instructions to use the same MC operand encoding).
In order to do that, I had to promote the Tablegen SDNode instance
`NEONvorrImm` to a general `ARMvorrImm` available in MVE as well, and
similarly for `NEONvbicImm`.

The predicated forms of VBIC and VORR are represented as a vector
select between the original input vector and the output of the
unpredicated operation. The main convenience of this is that it still
lets me use the existing isel lowering for VBICIMM/VORRIMM, and not
have to write another copy of the operand encoding translation code.

This intrinsic family is the first to use the `imm_simd` system I put
into the MveEmitter tablegen backend. So, naturally, it showed up a
bug or two (emitting bogus range checks and the like). Fixed those,
and added a full set of tests for the permissible immediates in the
existing Sema test.

Also adjusted the isel pattern for `vmovlb.u8`, which stopped matching
because lowering started turning its input into a VBICIMM. Now it
recognizes the VBICIMM instead.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72934
2020-01-23 11:53:52 +00:00
Simon Tatham 772e493193 [ARM,MVE] Revise immediate VBIC/VORR to look more like NEON.
Summary:
In NEON, the immediate forms of VBIC and VORR are each represented as
a single MC instruction, which takes its immediate operand already
encoded in a NEON-friendly format: 8 data bits, plus some control bits
indicating how to expand them into a full vector.

In MVE, we represented immediate VBIC and VORR as four separate MC
instructions each, for an 8-bit immediate shifted left by 0, 8, 16 or
24 bits. For each one, the value of the immediate operand is in the
'natural' form, i.e. the numerical value that would actually be BICed
or ORRed into each vector lane (and also the same value shown in
assembly). For example, MVE_VBICIZ16v4i32 takes an operand such as
0xab0000, which NEON would represent as 0xab | (control bits << 8).

The MVE approach is superficially nice (it makes assembly input and
output easy, and it's also nice if you're manually constructing
immediate VBICs). But it turns out that it's better for isel if we
make the NEON and MVE instructions work the same, because the
ARMISD::VBICIMM and VORRIMM node types already encode their immediate
into the NEON format, so it's easier if we can just use it.

Also, this commit reduces the total amount of code rather than
increasing it, which is surely an indication that it really is simpler
to do it this way!
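
A small C++ sketch of that immediate split (the control-bit values here are
purely illustrative, not the real cmode encoding):

```
#include <cstdint>
#include <optional>

// Split a 32-bit splat immediate into 8 data bits plus control bits naming
// the byte lane, NEON-style: e.g. 0xab0000 -> 0xab | (control << 8).
std::optional<uint32_t> encodeShiftedImm8(uint32_t Imm) {
  for (uint32_t Shift = 0; Shift < 32; Shift += 8)
    if ((Imm & ~(0xffu << Shift)) == 0)
      return (Imm >> Shift) | ((Shift / 8) << 8); // illustrative control bits
  return std::nullopt; // not an 8-bit value shifted by 0/8/16/24
}
```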

Reviewers: dmgreen, ostannard, miyuki, MarkMurrayARM

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73205
2020-01-23 11:53:52 +00:00
Matt Arsenault 4d14772f5c AMDGPU/GlobalISel: Remove redundant or patterns
These ended up with higher priority than or3 patterns in a future
patch. This also fixes the patterns using VOP2 forms.
2020-01-22 21:45:51 -05:00
James Clarke 3f5976c97d [RISCV] Fix evaluating %pcrel_lo against global and weak symbols
Summary:
Previously, we would erroneously turn %pcrel_lo(label), where label has
a %pcrel_hi against a weak symbol, into %pcrel_lo(label + offset), as
evaluatePCRelLo would believe the target independent logic was going to
fold it. Moreover, even if that were fixed, shouldForceRelocation lacks
an MCAsmLayout and thus cannot evaluate the %pcrel_hi fixup to a value
and check the symbol, so we would then erroneously constant-fold the
%pcrel_lo whilst leaving the %pcrel_hi intact. After D72197, this same
sequence also occurs for symbols with global binding, which is triggered
in real-world code.

Instead, as discussed in D71978, we introduce a new FKF_IsTarget flag to
avoid these kinds of issues. All the resolution logic happens in one
place, with no coordination required between RISCVAsmBackend and
RISCVMCExpr to keep two implementations of the same logic in sync. Although the
implementation of %pcrel_hi can be left as target independent, we make
it target dependent to ensure that they are handled identically to
%pcrel_lo, otherwise we risk one of them being constant folded but the
other being preserved. This also allows us to properly support fixup
pairs where the instructions are in different fragments.
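
For context, a C++ sketch of the standard %pcrel_hi/%pcrel_lo arithmetic;
both halves must resolve against the auipc's PC, which is why folding one
but not the other miscompiles:

```
#include <cstdint>

struct PCRelPair { int32_t Hi20, Lo12; };

// Split target - auipc_pc so that (Hi20 << 12) + sign-extended Lo12
// reproduces the offset exactly; the +0x800 rounds the high part.
PCRelPair splitPCRel(uint64_t Target, uint64_t AuipcPC) {
  int64_t Off = static_cast<int64_t>(Target - AuipcPC);
  int32_t Hi = static_cast<int32_t>((Off + 0x800) >> 12);
  int32_t Lo = static_cast<int32_t>(Off - (static_cast<int64_t>(Hi) << 12));
  return {Hi, Lo};
}
```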

Reviewers: asb, lenary, efriedma

Reviewed By: efriedma

Subscribers: arichardson, hiraditya, rbar, johnrusso, simoncook, sabuasal, niosHD, kito-cheng, shiva0217, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73211
2020-01-23 02:05:48 +00:00
Florian Hahn 39ae86ab72 [AArch64TTI] AArch64 supports NT vector stores through STNP.
This patch adds a custom implementation of isLegalNTStore to AArch64TTI
that supports vector types that can be directly stored by STNP. Note
that the implementation may not catch all valid cases (e.g. because the
vector is a multiple of 256 and could be broken down to multiple valid 256 bit
stores), but it is good enough for LV to vectorize loops with NT stores,
as LV only passes in a vector with 2 elements to check. LV seems to also
be the only user of isLegalNTStore.

We should also do the same for NT loads, but before that we need to
ensure that we properly lower LDNP of vectors, similar to D72919.

Reviewers: dmgreen, samparker, t.p.northover, ab

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D73158
2020-01-22 16:45:24 -08:00
Sean Fertile 9aa816a816 [PowerPC] Collect some CallLowering arguments into a struct. [NFC]
Collect the calling convention and a number of boolean arguments into a
structure to slightly reduce the number of arguments passed around between
LowerCall_<Subtarget>, FinishCall and a few of the helpers. Also
calculates if a call is indirect once, using the existing helper, and caches the
result, replacing several instances where we duplicated the logic determining if
a call is indirect.
2020-01-22 16:55:27 -05:00
Sanjay Patel 363d27c871 [x86] fold vperm2x128 to concat of 128-bit high half vectors
vperm (ins ?, X, C), (ins ?, Y, C), 0x31 --> concat X, Y

This is another shuffle problem seen with PR42024:
https://bugs.llvm.org/show_bug.cgi?id=42024

We have this small crack in legalization/lowering/combining/demanded
that allows forming a vperm2f128 of high halves with AVX1 when we
could do better by peeking through the insert_subvector nodes.
AFAICT, it requires IR as shown in the diffs - much larger than legal
vectors - to avoid all of the usual folds.

Another option would prevent forming the 256-bit vperm in lowering.

Differential Revision: https://reviews.llvm.org/D73197
2020-01-22 15:35:50 -05:00
Jan Vesely 1b8eab179d AMDGPU/R600: Emit rodata in text segment
R600 relies on this behaviour.
Fixes: 6e18266aa4 ('Partially revert D61491 "AMDGPU: Be explicit about whether the high-word in SI_PC_ADD_REL_OFFSET is 0"')
Fixes ~100 piglit regressions since 6e18266

Differential Revision: https://reviews.llvm.org/D72991
2020-01-22 14:31:51 -05:00
Simon Pilgrim 5340434c94 [X86][SSE] combineExtractWithShuffle - extract(bitcast(broadcast(x))) --> x
Removes some unnecessary gpr<-->fpu traffic
2020-01-22 18:02:58 +00:00
David Green 58991ba773 [ARM] Mark MVE loads/store as not having side effects
The hasSideEffect parameter is usually automatically inferred from
instruction patterns. For some of our MVE instructions, we do not have
patterns though, such as for the pre/post inc loads and stores. This
instead specifies the flag manually on the base MVE_VLDRSTR_base
tablegen class, making sure we get this correct.

This can help with scheduling multiple loads more optimally. Here I've
added a unittest as a more direct form of testing.

Differential Revision: https://reviews.llvm.org/D73117
2020-01-22 17:56:55 +00:00
Nico Weber cd470717d1 Revert "[DA][TTI][AMDGPU] Add option to select GPUDA with TTI"
This reverts commit a90a6502ab.
Broke tests on Windows: http://lab.llvm.org:8011/builders/clang-x64-windows-msvc/builds/13808
2020-01-22 12:56:19 -05:00
Florian Hahn 300997c41a [AArch64] Don't rename registers with pseudo defs in Ld/St opt.
If the root def for renaming is a no-op pseudo instruction like a kill,
we would end up without a correct def for the renamed register, causing
miscompiles.

This patch conservatively bails out on any pseudo instruction.

This fixes https://bugs.chromium.org/p/chromium/issues/detail?id=1037912#c70
2020-01-22 09:26:25 -08:00
Matt Arsenault 1192d7b254 AMDGPU/GlobalISel: Handle 16-bank LDS llvm.amdgcn.interp.p1.f16
The pattern is also mishandled by the generated matcher, so workaround
this as in the DAG path.

The existing DAG tests aren't particularly targeted to just this one
intrinsic. These also end up differing in scheduling from SGPR->VGPR
operand constraint copies.
2020-01-22 12:10:59 -05:00
David Tenty 45a4aaea7f [NFC][XCOFF] Refactor Csect creation into TargetLoweringObjectFile
Summary:
We create a number of standard types of control sections in multiple places for
things like the function descriptors, external references and the TOC anchor
among others, so it is possible for their properties to be defined
inconsistently in different places. This refactor moves their creation and
properties into functions in the TargetLoweringObjectFile class hierarchy, where
functions for retrieving various special types of sections typically seem
to reside.

Note: There is one case in PPCISelLowering which is specific to function entry
points which we don't address since we don't have access to the TLOF there.

Reviewers: DiggerLin, jasonliu, hubert.reinterpretcast

Reviewed By: jasonliu, hubert.reinterpretcast

Subscribers: wuzish, nemanjai, hiraditya, kbarton, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72347
2020-01-22 12:09:11 -05:00
Matt Arsenault c05f23e409 AMDGPU/GlobalISel: Select llvm.amdgcn.mov.dpp
This is deprecated, but easy to support.
2020-01-22 11:43:53 -05:00
Matt Arsenault dd09ec1208 AMDGPU/GlobalISel: Select llvm.amdgcn.mov.dpp8 2020-01-22 11:43:40 -05:00
Matt Arsenault 0bf434ccd5 AMDGPU: Fix element size assertion
The GlobalISel usage called this with bits, but the DAG usage was
incorrectly using bytes.
2020-01-22 11:18:45 -05:00
Matt Arsenault bb562d1af0 AMDGPU/GlobalISel: Keep G_BITCAST out of waterfall loop
The waterfall utility function blindly inserts a phi for every def in
the loop. We don't need this one to be preserved for every
iteration. Saves an extra phi and copy inside the loop body.
2020-01-22 11:16:19 -05:00
Zakk Chen 0cb274de39 [RISCV] Support ABI checking with per function target-features
1. If users don't specify -mattr, the default target features come
from the IR attribute.
2. Fixed a bug and re-landed this patch.

Reviewers: lenary, asb

Reviewed By: lenary

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70837
2020-01-22 08:12:28 -08:00
Simon Pilgrim a14aa7dabd [X86][SSE] combineExtractWithShuffle - extract(bitcast(scalar_to_vector(x))) --> x
Removes some unnecessary gpr<-->fpu traffic
2020-01-22 16:11:08 +00:00
Matt Arsenault 52ec7379ad AMDGPU/GlobalISel: Fold add of constant into G_INSERT_VECTOR_ELT
Move the subregister base like in the extract case.
2020-01-22 11:09:15 -05:00
Matt Arsenault d1dbb5e471 AMDGPU/GlobalISel: Select G_INSERT_VECTOR_ELT 2020-01-22 11:00:49 -05:00
Matt Arsenault 3524d4412c AMDGPU/GlobalISel: Fix RegBankSelect for G_INSERT_VECTOR_ELT
The result and source vector are going to be tied, so these need to be
the same bank.

The inserted value also needs to be broken down based on the result
bank, not the inserted value itself.
2020-01-22 10:57:50 -05:00
Matt Arsenault e3d352c541 AMDGPU/GlobalISel: Fold constant offset vector extract indexes
Handle dynamic vector extracts that use an index that's an add of a
constant offset into moving the base subregister of the indexing
operation.

Force the add into the loop in regbankselect, which will be recognized
when selected.
2020-01-22 10:50:59 -05:00
Kazushi (Jam) Marukawa 83b67526d5 [VE] select and selectcc patterns
Summary: select and selectcc isel patterns and tests for i32/i64 and fp32/fp64.
Includes optimized selectcc patterns for fmin/fmax/maxs/mins.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D73195
2020-01-22 16:30:38 +01:00
Matt Arsenault e93e1b621c AMDGPU: Fix typo 2020-01-22 10:17:46 -05:00
Matt Arsenault 2fe500ab5b AMDGPU: Look through casted selects to constant fold bin ops
The promotion of the uniform select to i32 interfered with this fold.
2020-01-22 10:16:39 -05:00
Matt Arsenault bcd91778fe AMDGPU: Do binop of select of constant fold in AMDGPUCodeGenPrepare
DAGCombiner does this, but divisions expanded here miss this
optimization. Since 67aa18f165,
divisions have been expanded here and missed out on this
optimization. Avoids test regressions in a future patch.
2020-01-22 10:16:39 -05:00
Matt Arsenault a174f0da62 AMDGPU/GlobalISel: Add pre-legalize combiner pass
Just copy the AArch64 pass as-is for now, except for removing the
memcpy handling.
2020-01-22 10:16:39 -05:00
Kazushi (Jam) Marukawa dc69265eea [VE] setcc isel patterns
Summary: SETCC isel patterns and tests for i32/64 and fp32/64 comparison

Reviewers: arsenm, rengolin, craig.topper, k-ishizaka

Reviewed By: arsenm

Subscribers: merge_guards_bot, wdng, hiraditya, llvm-commits

Tags: #ve, #llvm

Differential Revision: https://reviews.llvm.org/D73171
2020-01-22 15:45:57 +01:00
David Green e9c198278e [ARM] Basic gather scatter cost model
This is a very basic MVE gather/scatter cost model, based roughly on the
code that we will currently produce. It does not handle truncating
scatters or extending gathers correctly yet, as it is difficult to tell
that they are going to be correctly extended/truncated from the limited
information in the cost function.

This can be improved as we extend support for these in the future.

Based on code originally written by David Sherwood.

Differential Revision: https://reviews.llvm.org/D73021
2020-01-22 14:41:38 +00:00
Sander de Smalen 4cf16efe49 [AArch64][SVE] Add patterns for unpredicated load/store to frame-indices.
This patch also fixes up a number of cases in DAGCombine and
SelectionDAGBuilder where the size of a scalable vector is used in a
fixed-width context (thus triggering an assertion failure).

Reviewers: efriedma, c-rhodes, rovka, cameron.mcinally

Reviewed By: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71215
2020-01-22 14:32:27 +00:00
Jay Foad e0f0d0e55c [MachineScheduler] Allow clustering mem ops with complex addresses
The generic BaseMemOpClusterMutation calls into TargetInstrInfo to
analyze the address of each load/store instruction, and again to decide
whether two instructions should be clustered. Previously this had to
represent each address as a single base operand plus a constant byte
offset. This patch extends it to support any number of base operands.

The old target hook getMemOperandWithOffset is now a convenience
function for callers that are only prepared to handle a single base
operand. It calls the new more general target hook
getMemOperandsWithOffset.

The only requirements for the base operands returned by
getMemOperandsWithOffset are:
- they can be sorted by MemOpInfo::Compare, such that clusterable ops
  get sorted next to each other, and
- shouldClusterMemOps knows what they mean.

One simple follow-on is to enable clustering of AMDGPU FLAT instructions
with both vaddr and saddr (base register + offset register). I've left
a FIXME in the code for this case.
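
A shape-only C++ sketch of the wrapper relationship (stand-in types; the real
hooks live on TargetInstrInfo and take MachineInstr operands):

```
#include <vector>

struct MachineInstr {};
struct MachineOperand {};

// The new, general hook: an address may decompose into several base operands.
bool getMemOperandsWithOffset(const MachineInstr &MI,
                              std::vector<const MachineOperand *> &BaseOps,
                              long &Offset) {
  // Target-specific address analysis elided in this sketch.
  (void)MI; (void)BaseOps; (void)Offset;
  return false;
}

// The old hook, reduced to a convenience wrapper that only succeeds when
// there is exactly one base operand.
bool getMemOperandWithOffset(const MachineInstr &MI,
                             const MachineOperand *&BaseOp, long &Offset) {
  std::vector<const MachineOperand *> BaseOps;
  if (!getMemOperandsWithOffset(MI, BaseOps, Offset) || BaseOps.size() != 1)
    return false;
  BaseOp = BaseOps.front();
  return true;
}
```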

Differential Revision: https://reviews.llvm.org/D71655
2020-01-22 14:28:24 +00:00
Matt Arsenault 70096ca111 AMDGPU/GlobalISel: Fix RegbankSelect for llvm.amdgcn.fmul.legacy 2020-01-22 09:26:17 -05:00
Matt Arsenault a722cbf77c AMDGPU/GlobalISel: Handle atomic_inc/atomic_dec
The intermediate instruction drops the extra volatile argument. We are
missing an atomic ordering on these.
2020-01-22 09:26:17 -05:00
Matt Arsenault 9c928649a0 AMDGPU: Fix interaction of tfe and d16
This was using the wrong result register, and dropping the result entirely
for v2f16. This would fail to select on the scalar case. I believe it
was also mishandling packed/unpacked subtargets.
2020-01-22 09:26:17 -05:00
Matt Arsenault b94d3b9b77 AMDGPU/GlobalISel: RegBankSelect interp intrinsics
Note this assumes the future use of immediates for immarg, not the
current G_CONSTANT which will be emitted.
2020-01-22 09:01:34 -05:00
Matt Arsenault 64e9528201 AMDGPU: Fix missing immarg on llvm.amdgcn.interp.mov
The first operand maps to an immediate field, so this should be
immarg.
2020-01-22 09:01:34 -05:00
Simon Pilgrim c784e5451b Use SelectionDAG::getShiftAmountConstant(). NFCI. 2020-01-22 13:52:43 +00:00
Simon Pilgrim 963f268186 [X86][SSE] combineExtractWithShuffle - pull out repeated extract index code. NFCI. 2020-01-22 12:08:58 +00:00
Kerry McLaughlin cdcc4f2a44 [AArch64][SVE] Add intrinsic for non-faulting loads
Summary:
This patch adds the llvm.aarch64.sve.ldnf1 intrinsic, plus
DAG combine rules for non-faulting loads and sign/zero extends

Reviewers: sdesmalen, efriedma, andwar, dancgr, mgudim, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cameron.mcinally, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71698
2020-01-22 11:15:20 +00:00
Sander de Smalen 67d4c9924c Add support for (expressing) vscale.
In LLVM IR, vscale can be represented with an intrinsic. For some targets,
this is equivalent to the constexpr:

  getelementptr <vscale x 1 x i8>, <vscale x 1 x i8>* null, i32 1

This can be used to propagate the value in CodeGenPrepare.

In ISel we add a node that can be legalized to one or more
instructions to materialize the runtime vector length.

This patch also adds SVE CodeGen support for VSCALE, which maps this
node to RDVL instructions (for scaled multiples of 16 bytes) or CNT[HSD]
instructions (scaled multiples of 2, 4, or 8 bytes, respectively).

Reviewers: rengolin, cameron.mcinally, hfinkel, sebpop, SjoerdMeijer, efriedma, lattner

Reviewed by: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68203
2020-01-22 10:09:27 +00:00
Sam Parker c04b9ba595 [ARM][MVE] Clear MaskedInsts vector
In MVETailPredication, clear the vector before running on a new loop.

Differential Revision: https://reviews.llvm.org/D73048
2020-01-22 04:27:36 -05:00
Kazushi (Jam) Marukawa 3a906a9f4e [VE] i<N> and fp32/64 arguments, return values and constants
Summary:
Support for i<N> and fp32/64 arguments (in register), return values
and constants along with tests.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D73092
2020-01-22 09:17:44 +01:00
Amara Emerson 2e25d75aaa [AArch64][GlobalISel] Fix llvm.returnaddress(0) selection when LR is clobbered.
The code was originally ported from SelectionDAG, which does CSE behind the scenes
automatically. When copying the return address from LR live into the function, we
need to make sure to use the single copy on function entry. Any later copy from LR
could be using clobbered junk.

Implement this by caching the copy in the per-MF state in the selector.

Should hopefully fix the AArch64 sanitiser buildbot failure.
2020-01-21 22:53:32 -08:00
Austin Kerbow a90a6502ab [DA][TTI][AMDGPU] Add option to select GPUDA with TTI
Summary: Enable the new divergence analysis by default for AMDGPU.

Reviewers: rampitec, nhaehnle, arsenm

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73049
2020-01-21 21:13:20 -08:00
Carl Ritson 6b4b3e2856 [AMDGPU] SIRemoveShortExecBranches should not remove branches exiting loops
Summary:
Check that an s_cbranch_execz is not a loop exit before removing it,
as the pass was generating infinite loops.

Reviewers: cdevadas, arsenm, nhaehnle

Reviewed By: nhaehnle

Subscribers: kzhuravl, jvesely, wdng, yaxunl, tpr, t-tye, hiraditya, kerbowa, llvm-commits, dstuttard, foad

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72997
2020-01-22 13:18:40 +09:00
cdevadas e53a9d96e6 Resubmit: [AMDGPU] Invert the handling of skip insertion.
The current implementation of skip insertion (SIInsertSkip) makes it a
mandatory pass required for correctness. Initially, the idea was to
have an optional pass. This patch inserts the s_cbranch_execz upfront
during SILowerControlFlow to skip over the sections of code when no
lanes are active. Later, SIRemoveShortExecBranches removes the skips
for short branches, unless there is a sideeffect and the skip branch is
really necessary.

This new pass will replace the handling of skip insertion in the
existing SIInsertSkip Pass.

Differential revision: https://reviews.llvm.org/D68092
2020-01-22 13:18:32 +09:00
Amara Emerson 67a8775322 [AArch64] Don't generate gpr CSEL instructions in early-ifcvt if regclasses aren't compatible.
In GlobalISel we may in some unfortunate circumstances generate PHIs with
operands that are on separate banks. If-conversion doesn't currently check for
that case and ends up generating a CSEL on AArch64 with incorrect register
operands.

Differential Revision: https://reviews.llvm.org/D72961
2020-01-21 16:51:31 -08:00
Florian Hahn 535ed62c5f [AArch64] Add custom store lowering for 256 bit non-temporal stores.
Currently we fail to lower non-temporal stores for 256+ bit vectors
to STNPQ, because type legalization will split them up into 128 bit stores
and, because there are no single non-temporal stores, creating STNPQ
in the Load/Store optimizer would be quite tricky.

This patch adds custom lowering for 256 bit non-temporal vector stores
to improve the generated code.

Reviewers: dmgreen, samparker, t.p.northover, ab

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D72919
2020-01-21 14:53:40 -08:00
Matt Arsenault e47965bf64 AMDGPU/GlobalISel: Merge trivial legalize rules
Also move constant-like rules together
2020-01-21 17:37:19 -05:00
Matt Arsenault 9a5a6e9465 AMDGPU/GlobalISel: Merge G_PTR_ADD/G_PTR_MASK rules 2020-01-21 16:57:01 -05:00
Matt Arsenault fd109308a7 AMDGPU/GlobalISel: Legalize G_PTR_ADD for arbitrary pointers
Pointers of unrecognized address spaces should be treated as
global-like pointers. Even if loads and stores of them aren't handled,
dumb operations that just operate on the bits should work.
2020-01-21 16:35:36 -05:00
Thomas Lively 28857d14a8 [WebAssembly] Split and recombine multivalue calls for ISel
Summary:
Multivalue calls both take and return an arbitrary number of
arguments, but ISel only supports one or the other in a single
instruction. To get around this, calls are modeled as two pseudo
instructions during ISel. These pseudo instructions, CALL_PARAMS and
CALL_RESULTS, are recombined into a single CALL MachineInstr in a
custom emit hook.

RegStackification and the MC layer will additionally need to be made
aware of multivalue calls before the tests will produce correct
output.

Reviewers: aheejin, dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71496
2020-01-21 11:31:33 -08:00
Thomas Lively 3ef169e586 [WebAssembly][InstrEmitter] Foundation for multivalue call lowering
Summary:
WebAssembly is unique among upstream targets in that it does not at
any point use physical registers to store values. Instead, it uses
virtual registers to model positions in its value stack. This means
that some target-independent lowering activities that would use
physical registers need to use virtual registers instead for
WebAssembly and similar downstream targets. This CL generalizes the
existing `usesPhysRegsForPEI` lowering hook to
`usesPhysRegsForValues` in preparation for using it in more places.

One such place is in InstrEmitter for instructions that have variadic
defs. On register machines, it only makes sense for these defs to be
physical registers, but for WebAssembly they must be virtual registers
like any other values. This CL changes InstrEmitter to check the new
target lowering hook to determine whether variadic defs should be
physical or virtual registers.

These changes are necessary to support a generalized CALL instruction
for WebAssembly that is capable of returning an arbitrary number of
arguments. Fully implementing that instruction will require additional
changes that are described in comments here but left for a follow up
commit.

Reviewers: aheejin, dschuff, qcolombet

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71484
2020-01-21 11:13:46 -08:00
Fangrui Song 8e1f0974c2 [PowerPC] Delete PPCSubtarget::isDarwin and isDarwinABI
Per http://lists.llvm.org/pipermail/llvm-dev/2018-August/125614.html, developers have agreed to remove Darwin support from the POWER backends.

Reviewed By: sfertile

Differential Revision: https://reviews.llvm.org/D72067
2020-01-21 09:54:44 -08:00
Jonas Devlieghere 72b8bad150 [lldb/Hexagon] Include <mutex>
Fixes compiler error on macOS: error: no type named 'mutex' in namespace
'std'.
2020-01-21 09:51:30 -08:00
Krzysztof Parzyszek 305bf5b21d [Hexagon] Add support for Hexagon v67t microarchitecture (tiny core) 2020-01-21 11:35:10 -06:00
Krzysztof Parzyszek 020041d99b Update spelling of {analyze,insert,remove}Branch in strings and comments
These names have been changed from CamelCase to camelCase, but there were
many places (comments mostly) that still used the old names.

This change is NFC.
2020-01-21 10:15:38 -06:00
Zakk Chen 1256d68093 [RISCV] Check the target-abi module flag matches the option
Reviewers: lenary, asb

Reviewed By: lenary

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72768
2020-01-21 07:32:12 -08:00
Jinsong Ji d7032bc3c0 [PowerPC][NFC] Reclaim TSFlags bit 6
We removed UseVSXReg flag in https://reviews.llvm.org/D58685
But we did not reclaim bit 6, which it was assigned; this will become
confusing and leave a hole later.
We should reclaim it as early as possible, before new bits are added.

Reviewed By: sfertile

Differential Revision: https://reviews.llvm.org/D72649
2020-01-21 15:04:05 +00:00
Simon Pilgrim b065902ed4 [X86] combineBT - use SimplifyDemandedBits instead of GetDemandedBits
Another step towards removing SelectionDAG::GetDemandedBits entirely
2020-01-21 14:24:46 +00:00
Anna Welker ff9877ce34 [ARM][MVE] Enable masked scatter
Extends the gather/scatter pass in MVEGatherScatterLowering.cpp to
enable the transformation of masked scatters into calls to MVE's masked
scatter intrinsic.

Differential Revision: https://reviews.llvm.org/D72856
2020-01-21 09:46:26 +00:00
Nicolai Hähnle a80291ce10 Revert "[AMDGPU] Invert the handling of skip insertion."
This reverts commit 0dc6c249bf.

The commit is reported to cause a regression in piglit/bin/glsl-vs-loop for
Mesa.
2020-01-21 09:17:25 +01:00
Fangrui Song 5721483b64 [AMDGPU] Fix -Wunused-variable after e5823bf806 2020-01-20 22:41:13 -08:00
Matt Arsenault c72aa27f91 AMDDGPU/GlobalISel: Fix RegBankSelect for llvm.amdgcn.ps.live 2020-01-20 23:21:53 -05:00
Matt Arsenault e5823bf806 AMDGPU: Don't create weird sized integers
There's no reason to introduce a new, unnaturally sized value
here. This has a chance to produce worse code with
legalization. Avoids regression in a future patch.
2020-01-20 20:02:54 -05:00
Matt Arsenault 9b13b4a0e3 AMDGPU: Prepare to use scalar register indexing
Define pseudos mirroring the VGPR indexing ones, and adjust the
operands in the s_movrel* instructions to avoid the result def.
2020-01-20 17:19:16 -05:00
Matt Arsenault 8615eeb455 AMDGPU: Partially merge indirect register write handling
a785209bc2 switched to using pseudos instead of manually tying
operands on the regular instruction. The VGPR indexing mode path
should have the same problems that change attempted to avoid, so these
should use the same strategy.

Use a single pseudo for the VGPR indexing mode and movreld paths, and
expand it based on the subtarget later. These have essentially the
same constraints, reading the index from m0.

Switch from using an offset to the subregister index directly, instead
of computing an offset and re-adding it back. Also add missing pseudos
for existing register class sizes.
2020-01-20 17:19:16 -05:00
Krzysztof Parzyszek c12a5917d2 [Hexagon] Add support for Hexagon/HVX v67 ISA 2020-01-20 16:16:49 -06:00
Matt Arsenault f6418d72f5 AMDGPU/GlobalISel: Add documentation for RegisterBankInfo
Document some high level strategies that should be used for register
bank selection. The constant bus restriction section hasn't actually
been implemented yet.
2020-01-20 15:41:25 -05:00
Sid Manning 7fee4fed4c Add support for Linux/Musl ABI
Differential revision: https://reviews.llvm.org/D72701

The patch adds a new ABI option for Hexagon. It primarily deals with
the way variable arguments are passed and is used in the Hexagon Linux Musl
environment.

If a callee function has a variable argument list, it must perform the
following operations to set up its function prologue:

  1. Determine the number of registers which could have been used for passing
     unnamed arguments. This can be calculated by counting the number of
     registers used for passing named arguments. For example, if the callee
     function is as follows:

         int foo(int a, ...){ ... }

     ... then register R0 is used to access the argument 'a'. The registers
     available for passing unnamed arguments are R1, R2, R3, R4, and R5.

  2. Determine the number and size of the named arguments on the stack.

  3. If the callee has named arguments on the stack, it should copy all of these
     arguments to a location below the current position on the stack, and the
     difference should be the size of the register-saved area plus padding
     (if any is necessary).

     The register-saved area constitutes all the registers that could have
     been used to pass unnamed arguments. If the number of registers forming
     the register-saved area is odd, it requires 4 bytes of padding; if the
     number is even, no padding is required. This is done to ensure an 8-byte
     alignment on the stack.  For example, if the callee is as follows:

       int foo(int a, ...){ ... }

     ... then the named arguments should be copied to the following location:

       current_position - 5 (for R1-R5) * 4 (bytes) - 4 (bytes of padding)

     If the callee is as follows:

        int foo(int a, int b, ...){ ... }

     ... then the named arguments should be copied to the following location:

        current_position - 4 (for R2-R5) * 4 (bytes) - 0 (bytes of padding)

  4. After any named arguments have been copied, copy all the registers that
     could have been used to pass unnamed arguments on the stack. If the number
     of registers is odd, leave 4 bytes of padding and then start copying them
     on the stack; if the number is even, no padding is required. This
     constitutes the register-saved area. If padding is required, ensure
     that the start location of padding is 8-byte aligned.  If no padding is
     required, ensure that the start location of the on-stack copy of the
     first register which might have a variable argument is 8-byte aligned.

  5. Decrement the stack pointer by the size of register saved area plus the
     padding.  For example, if the callee is as follows:

        int foo(int a, ...){ ... }

     ... then the decrement value should be the following:

        5 (for R1-R5) * 4 (bytes) + 4 (bytes of padding) = 24 bytes

     The decrement should be performed before the allocframe instruction.
     Increment the stack-pointer back by the same amount before returning
     from the function.
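
As a worked illustration of steps 1-5, here is a minimal, self-contained
C++ sketch of the size computation; the six-register limit and the 8-byte
alignment rule come from the description above, while the helper name is
hypothetical:

  #include <cstdio>

  // Given the number of named register arguments (R0-R5 hold the first
  // six words, so NamedRegArgs is assumed to be at most 6), compute the
  // size of the register-saved area plus the padding needed to keep the
  // stack 8-byte aligned.
  unsigned regSaveAreaSize(unsigned NamedRegArgs) {
    unsigned UnnamedRegs = 6 - NamedRegArgs;      // registers left for unnamed args
    unsigned Padding = (UnnamedRegs % 2) ? 4 : 0; // odd count needs 4 bytes
    return UnnamedRegs * 4 + Padding;
  }

  int main() {
    printf("foo(int, ...): %u bytes\n", regSaveAreaSize(1));      // 24
    printf("foo(int, int, ...): %u bytes\n", regSaveAreaSize(2)); // 16
  }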
2020-01-20 09:59:56 -06:00
Mark Murray b10a0eb04a [ARM][MVE][Intrinsics] Take abs() of VMINNMAQ, VMAXNMAQ intrinsics' first arguments.
Summary: Fix VMINNMAQ, VMAXNMAQ intrinsics; BOTH arguments have the absolute values taken.

Reviewers: dmgreen, simon_tatham

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72830
2020-01-20 14:33:26 +00:00
Simon Tatham f3e73e88fd [ARM,MVE] Fix confusing MC names for MVE VMINA/VMAXA insns.
Summary:
A recent commit accidentally defined names like `MVE_VMAXAs8` as
instances of the multiclass `MVE_VMINA`, and vice versa. This has no
effect on the test suite, because nothing directly refers to those
instruction names (the isel patterns are generated in Tablegen using
`!cast<Instruction>(NAME)` inside a lower-level multiclass). But it
means that `llvm-mc -show-inst` was listing VMAXA as VMINA, and it
would also affect any further draft code gen patches that use those
instruction ids.

Reviewers: MarkMurrayARM, dmgreen, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73034
2020-01-20 13:25:52 +00:00
Andrzej Warzynski 7e717b3990 [AArch64][SVE] Extend int_aarch64_sve_ld1_gather_imm
The ACLE distinguishes between the following addressing modes for gather
loads:
  * "scalar base, vector offset", and
  * "vector base, scalar offset".
For the "vector base, scalar offset" case, the
`int_aarch64_sve_ld1_gather_imm` intrinsic was added in 79f2422d.
Currently, that intrinsic assumes that the scalar offset is passed as an
immediate.  As a result, it does not cater for cases where the scalar
offset is stored in a register.
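
For reference, a scalar C++ model of the two ACLE addressing modes (an
illustration of the gather semantics only, not the SVE intrinsics
themselves; the element count, types and function names are stand-ins):

  #include <cstdint>
  #include <cstddef>

  constexpr size_t N = 8; // stand-in vector length

  // "scalar base, vector offset": one base pointer, per-lane offsets.
  void gatherBaseOffsets(int32_t *Dst, const bool *Pred,
                         const int32_t *Base, const int64_t *Offsets) {
    for (size_t I = 0; I < N; ++I)
      Dst[I] = Pred[I] ? Base[Offsets[I]] : 0;
  }

  // "vector base, scalar offset": per-lane bases, one shared offset,
  // which after this patch may come from a register, not just an immediate.
  void gatherBasesOffset(int32_t *Dst, const bool *Pred,
                         const int32_t *const *Bases, int64_t Offset) {
    for (size_t I = 0; I < N; ++I)
      Dst[I] = Pred[I] ? Bases[I][Offset] : 0;
  }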

In this patch `int_aarch64_sve_ld1_gather_imm` is extended so that all
cases are covered:
* `int_aarch64_sve_ld1_gather_imm` is renamed as
  `int_aarch64_sve_ld1_gather_scalar_offset`
* new DAG combine rules are added for GLD1_IMM for scenarios where the
  offset is a non-immediate scalar or an out-of-range immediate
* sve-intrinsics-gather-loads-vector-base.ll is renamed as
  sve-intrinsics-gather-loads-vector-base-imm-offset.ll
* sve-intrinsics-gather-loads-vector-base-scalar-offset.ll is added as a
  test file for non-immediate offsets

Similar changes are made for scatter store intrinsics.

Reviewed By: sdesmalen, efriedma

Differential Revision: https://reviews.llvm.org/D71773
2020-01-20 12:19:18 +00:00
Simon Pilgrim eaa4548459 [X86][SSE] Add PACKSS SimplifyMultipleUseDemandedBits 'sign bit' handling.
Attempt to use SimplifyMultipleUseDemandedBits to simplify PACKSS if we're only after the sign bit.
2020-01-20 10:48:54 +00:00
Sjoerd Meijer 8cba99e2aa [ARM][MVE] Tail-Predication: rematerialise iteration count in exit blocks
This patch uses helper function rewriteLoopExitValues that is refactored in
D72602 to rematerialise the iteration count in exit blocks, so that we can
clean-up loop update expressions inside the hardware-loops later in
ARMLowOverheadLoops, which is necessary to get actual performance gains for
tail-predicated loops.

Differential Revision: https://reviews.llvm.org/D72714
2020-01-20 10:26:36 +00:00
Georgii Rymar 11e8e32444 [llvm-mc] - Produce R_X86_64_PLT32 relocation for branches with JCC opcodes too.
The idea is to produce R_X86_64_PLT32 instead of
R_X86_64_PC32 for branches.

It fixes https://bugs.llvm.org/show_bug.cgi?id=44397.

This patch teaches MC to do that for JCC (jump if condition is met)
instructions. The new behavior matches modern GNU as.
It is similar to D43383, which did the same for "call/jmp foo",
but missed JCC cases.

Differential revision: https://reviews.llvm.org/D72831
2020-01-20 11:42:19 +03:00
David Green ff2e67a4f7 [ARM] MVE VLDn postinc
This adds Post inc variants of the VLD2/4 and VST2/4 instructions in
MVE. It uses the same mechanism/nodes as Neon, transforming the
intrinsic+add pair into a ARMISD::VLD2_UPD, which gets selected to a
post-inc instruction. The code to do that is mostly taken from the
existing Neon code, but simplified as less variants are needed.

It also fills in some getTgtMemIntrinsic for the arm.mve.vld2/4
intrinsics, which allows the nodes to have MMOs, calculated as the full
length of the memory being loaded/stored.

Differential Revision: https://reviews.llvm.org/D71194
2020-01-20 06:57:07 +00:00
David Green 5e51f75542 [ARM] Favour post inc for MVE loops
We were previously not necessarily favouring postinc for the MVE loads
and stores, leading to extra code prior to the loop to set up the
preinc. MVE in general can benefit from postinc (as we don't have
unrolled loops), and for certain instructions, like VLD2, only post-inc
versions are available.

Differential Revision: https://reviews.llvm.org/D70790
2020-01-20 06:57:07 +00:00
Michael Liao 819421745c Reorder targets in alphabetical order. NFC. 2020-01-19 21:11:54 -05:00
Florian Hahn 0ee1db2d1d [X86] Try to avoid casts around logical vector ops recursively.
Currently PromoteMaskArithmetic only looks at a single operation to
skip casts. This means we miss cases where we combine multiple masks.

This patch updates PromoteMaskArithmetic to try to recursively promote
AND/XOR/AND nodes that terminate in truncates of the right size or
constant vectors.

Reviewers: craig.topper, RKSimon, spatel

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D72524
2020-01-19 17:22:43 -08:00
Fangrui Song 8e8a75ad50 [TargetRegisterInfo] Default trackLivenessAfterRegAlloc() to true
Except AMDGPU/R600RegisterInfo (a bunch of MIR tests seem to have
problems), every target overrides it with true. PostMachineScheduler
requires livein information. Not providing it can cause assertion
failures in ScheduleDAGInstrs::addSchedBarrierDeps().
2020-01-19 14:20:37 -08:00
Craig Topper 5fa2022ec0 [X86] Remove X86ISD::FILD_FLAG and stop gluing nodes together.
Summary:
I think whatever problem the gluing was fixing has long since been fixed. We don't have any of the restrictions on FP stack stuff that existed back when this was first added.

I had to change which type we use for FILD in BuildFILD when X86 was enabled because most of the isel patterns block f32/f64 instructions when SSE1/SSE2 are enabled. So I needed to use the f80 pattern, but this shouldn't have an effect on the generated code since there is only one FILD instruction anyway. We already use f80 explicitly in other places.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: andrew.w.kaylor, scanon, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72805
2020-01-18 23:44:05 -06:00
Fangrui Song 0cb415c189 [X86][BranchAlign] Suppress branch alignment for {,_}__tls_get_addr
The x86-64 General Dynamic TLS code sequence uses prefixes to allow
linker relaxation.  Adding a segment override prefix or NOPs can break
linker relaxation (ld -pie/-no-pie).

i386 General Dynamic and x86-64 Local Dynamic do not use prefixes, but
for simplicity, just disable auto padding consistently.

Reviewed By: skan, LuoYuanke

Differential Revision: https://reviews.llvm.org/D72878
2020-01-18 18:14:51 -08:00
Simon Pilgrim 69bc450882 [X86] Rename lowerShuffleAsRotate -> lowerShuffleAsVALIGN
Since it can only ever create VALIGN nodes.
2020-01-18 11:29:14 +00:00
Michael Liao 6d0d86a64d [DAG] Add helper for creating constant vector index with correct type. NFC. 2020-01-18 01:23:36 -05:00
Derek Schuff ff171acf84 [WebAssembly] Track frame registers through VReg and local allocation
This change has 2 components:

Target-independent: add a method getDwarfFrameBase to TargetFrameLowering. It
describes how the Dwarf frame base will be encoded.  That can be a register (the
default), the CFA (which replaces NVPTX-specific logic in DwarfCompileUnit), or
a DW_OP_WASM_location descriptor.

WebAssembly: Allow WebAssemblyFunctionInfo::getFrameRegister to return the
correct virtual register instead of FP32/SP32 after WebAssemblyReplacePhysRegs
has run.  Make WebAssemblyExplicitLocals store the local it allocates for the
frame register. Use this local information to implement getDwarfFrameBase

The result is that the DW_AT_frame_base attribute is correctly encoded for each
subprogram, and each param and local variable has a correct DW_AT_location that
uses DW_OP_fbreg to refer to the frame base.

This is a reland of rG3a05c3969c18 with fixes for the expensive-checks
and Windows builds

Differential Revision: https://reviews.llvm.org/D71681
2020-01-17 17:23:56 -08:00
Matt Arsenault a4451d88ee Consolidate internal denormal flushing controls
Currently there are 4 different mechanisms for controlling denormal
flushing behavior, and about as many equivalent frontend controls.

- AMDGPU uses the fp32-denormals and fp64-f16-denormals subtarget features
- NVPTX uses the nvptx-f32ftz attribute
- ARM directly uses the denormal-fp-math attribute
- Other targets indirectly use denormal-fp-math in one DAGCombine
- cl-denorms-are-zero has a corresponding denorms-are-zero attribute

AMDGPU wants a distinct control for f32 flushing from f16/f64, and as
far as I can tell the same is true for NVPTX (based on the attribute
name).

Work on consolidating these into the denormal-fp-math attribute, and a
new type specific denormal-fp-math-f32 variant. Only ARM seems to
support the two different flush modes, so this is overkill for the
other use cases. Ideally we would error on the unsupported
positive-zero mode on other targets from somewhere.

Move the logic for selecting the flush mode into the compiler driver,
instead of handling it in cc1. denormal-fp-math/denormal-fp-math-f32
are now both cc1 flags, but denormal-fp-math-f32 is not yet exposed as
a user flag.

-cl-denorms-are-zero, -fcuda-flush-denormals-to-zero and
-fno-cuda-flush-denormals-to-zero will be mapped to
-fp-denormal-math-f32=ieee or preserve-sign rather than the old
attributes.

Stop emitting the denorms-are-zero attribute for the OpenCL flag. It
has no in-tree users. The meaning would also be target dependent, such
as the AMDGPU choice to treat this as only meaning allow flushing of
f32 and not f16 or f64. The naming is also potentially confusing,
since DAZ in other contexts refers to instructions implicitly treating
input denormals as zero, not necessarily flushing output denormals to
zero.

This also does not attempt to change the behavior for the current
attribute. The LangRef now states that the default is ieee behavior,
but this is inaccurate for the current implementation. The clang
handling is slightly hacky to avoid touching the existing
denormal-fp-math uses. Fixing this will be left for a future patch.

AMDGPU is still using the subtarget feature to control the denormal
mode, but the new attributes are now emitted. A future change will
switch this and remove the subtarget features.
2020-01-17 20:09:53 -05:00
Matt Arsenault 592de0009f AMDGPU/GlobalISel: Select llvm.amdgcn.update.dpp
The existing test is overly reliant on -mattr=-flat-for-global, and
some missing optimizations to re-use.
2020-01-17 20:09:53 -05:00
Matt Arsenault ec9628318d AMDGPU/GlobalISel: Select DS append/consume 2020-01-17 20:09:53 -05:00
Evgenii Stepanov d081962dea Merge memtag instructions with adjacent stack slots.
Summary:
Detect a run of memory tagging instructions for adjacent stack frame slots,
and replace them with a shorter instruction sequence (see the sketch below)
* replace STG + STG with ST2G
* replace STGloop + STGloop with STGloop
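
A minimal sketch of the pairing rule above, assuming 16-byte MTE granules
(STG tags one granule, ST2G two adjacent ones) and pre-sorted slot
offsets; the function name and printed mnemonics are illustrative only:

  #include <cstdio>
  #include <vector>

  // Fuse pairs of tag stores whose offsets differ by exactly one
  // 16-byte granule into a single ST2G.
  void emitTagStores(const std::vector<long> &Offsets) {
    for (size_t I = 0; I < Offsets.size();) {
      if (I + 1 < Offsets.size() && Offsets[I + 1] == Offsets[I] + 16) {
        printf("ST2G at offset %ld\n", Offsets[I]);
        I += 2; // two adjacent granules covered by one instruction
      } else {
        printf("STG at offset %ld\n", Offsets[I]);
        I += 1;
      }
    }
  }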

This code needs to run when stack slot offsets are already known, but before
FrameIndex operands in STG instructions are eliminated; that's the
reason for the new hook in PrologueEpilogue.

This change modifies STGloop and STZGloop pseudos to take the size as an
immediate integer operand, and adds _untied variants of those pseudos
that are allowed to take the base address as a FI operand. This is needed to
simplify recognizing an STGloop instruction as operating on a stack slot
post-regalloc.

This improves memtag code size by ~0.25%, and it looks like an additional ~0.1%
is possible by rearranging the stack frame such that consecutive STG
instructions reference adjacent slots (patch pending).

Reviewers: pcc, ostannard

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70286
2020-01-17 15:19:29 -08:00
Mike Lambert fe085be125 [Hexagon] Use itinerary for assembler HVX resource checking 2020-01-17 13:14:04 -06:00
Stanislav Mekhanoshin eebdd85e7d [AMDGPU] allow multi-dword flat scratch access since GFX9
This is supported starting with GFX9.

Differential Revision: https://reviews.llvm.org/D72865
2020-01-17 10:47:03 -08:00
Brian Cain c1873631d0 [Hexagon] Refactor HexagonShuffle
The check() in HexagonShuffle has been decomposed into smaller steps.
No functionality change is intended with this commit.
2020-01-17 12:22:07 -06:00
David Spickett 398dc06ad0 [AArch64] Make AArch64 specific assembly directives case insensitive
Differential Revision: https://reviews.llvm.org/D72923
2020-01-17 16:16:18 +00:00
Matt Arsenault 886f9071c6 AMDGPU: Don't assert on a16 images on targets without FeatureR128A16
Currently the lowering for i16 image coordinates asserts on gfx10. I'm
somewhat confused by this though. The feature is missing from the
gfx10 feature lists, but the a16 bit appears to be present in the
manual for MIMG instructions.
2020-01-17 11:07:00 -05:00
Sanjay Patel 43f60e614a [x86] try harder to form 256-bit unpck*
This is another part of a problem noted in PR42024:
https://bugs.llvm.org/show_bug.cgi?id=42024

The AVX2 code may use awkward 256-bit shuffles vs. the AVX code that gets split
into the expected 128-bit unpack instructions. We have to be selective in
matching the types where we try to do this though. Otherwise, we can end up
with more instructions (in the case of v8x32/v4x64).

Differential Revision: https://reviews.llvm.org/D72575
2020-01-17 10:42:39 -05:00
Krzysztof Parzyszek 2d5bfc6eb1 [Hexagon] Improve HVX version checks 2020-01-17 09:40:26 -06:00
Krzysztof Parzyszek 60aed6a4e5 [Hexagon] Add prev65 subtarget feature
There was a change to trap1 instruction between v62 and v65. This
feature will allow the assembler/disassembler to handle different
variants depending on the CPU version.
2020-01-17 09:27:27 -06:00
Simon Pilgrim 8eb4d25a09 [X86] Split X87/SSE compare classes into WriteFCom + WriteFComX
Most X87 compare instructions write to the X87 status word, while the SSE (U)COMI compares write to rFLAGS. These are often handled very differently on CPUs (e.g. rFLAGS outputs typically involve an fpu2gpr transfer), and we shouldn't be grouping all these instructions behind a single class - so this patch splits off the SSE compares into a new WriteFComX class (and currently keeps the same behaviours). If there's a need to distinguish between X87 instructions more closely we can investigate that in the future, but as we don't handle any of the X87 side effects at the moment it's unlikely to have any notable effect.
2020-01-17 13:53:58 +00:00
Sam Parker 42350cd893 [ARM][MVE] Tail Predicate IsSafeToRemove
Introduce a method to walk through use-def chains to decide whether
it's possible to remove a given instruction and its users. These
instructions are then stored in a set until the end of the transform
when they're erased. This is now used to perform checks on the
iteration count (LoopDec chain), element count (VCTP chain) and the
possibly redundant iteration count.

As well as being able to remove chains of instructions, we now also
check that the sub feeding the vctp is producing the expected value.

Differential Revision: https://reviews.llvm.org/D71837
2020-01-17 13:19:14 +00:00
Cullen Rhodes 49edf9a509 [AArch64][SVE] Add break intrinsics
Summary:
Implements the following intrinsics:

    * @llvm.aarch64.sve.brka
    * @llvm.aarch64.sve.brka.z
    * @llvm.aarch64.sve.brkb
    * @llvm.aarch64.sve.brkb.z
    * @llvm.aarch64.sve.brkn.z
    * @llvm.aarch64.sve.brkpa.z
    * @llvm.aarch64.sve.brkpb.z

Reviewers: sdesmalen, efriedma, dancgr, mgudim, cameron.mcinally, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72393
2020-01-17 11:47:08 +00:00
Kerry McLaughlin fe3bb8ec96 [AArch64][SVE] Add ImmArg property to intrinsics with immediates
Summary:
Several SVE intrinsics with immediate arguments (including those
added by D70253 & D70437) do not use the ImmArg property.
This patch adds ImmArg<Op> where required and changes
the appropriate patterns which match the immediates.

Reviewers: efriedma, sdesmalen, andwar, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72612
2020-01-17 10:47:55 +00:00
Craig Topper caee96031d [Transforms][RISCV] Remove a "using namespace llvm" from an include file. Fix a place that became dependent on it.
This include file was created in October and has a "using namespace llvm". This seems to get exposed to other include files and finally onto cpp files. While this is somewhat okay for llvm itself, it's bad for other projects that use llvm as a library and include a header file that picks this up. This was found by ISPC, which has some class names at global scope with the same names as LLVM.

It looks like RISCV accidentally became dependent on this. I fixed it by reordering some includes in the RISCV code, but maybe we want to change the TableGenEmitter to put "namespace llvm {" in the generated file instead? But we probably want to do the simplest thing first so we can merge it to 10.0.

Differential Revision: https://reviews.llvm.org/D72895
2020-01-16 20:50:41 -08:00
Matt Arsenault 117d4f1900 AMDGPU: Add register classes to MUBUF load patterns 2020-01-16 22:00:44 -05:00
Zakk Chen cef838e65f Revert "[RISCV] Support ABI checking with per function target-features"
This reverts commit 7bc58a779a.
It breaks EXPENSIVE_CHECKS on Windows
2020-01-16 18:01:07 -08:00
Jessica Paquette b82d18e1e8 [AArch64][GlobalISel] Change G_FCONSTANTs feeding into stores into G_CONSTANTS
Given the following situation:

x = G_FCONSTANT (something that can't be materialized)
G_STORE x, some_addr

We know that x must be materialized as at least a single mov. However, at the
time of selection, the G_STORE will have been regbankselected to a FPR store.

So, as a result, you'll get an unnecessary fmov into the G_STORE.

Storing a constant value in a GPR and a constant value in a FPR are the same.
So, whenever you see a G_FCONSTANT that feeds into only G_STORES, so might as
well make it a G_CONSTANT.

This adds a target-specific combine which changes G_FCONSTANTs feeding into
G_STOREs into G_CONSTANTs.

Differential Revision: https://reviews.llvm.org/D72814
2020-01-16 15:18:44 -08:00
Derek Schuff 80906d9d16 Revert "[WebAssembly] Track frame registers through VReg and local allocation"
This reverts commit 3a05c3969c.
It breaks under expensive-checks and on Windows
2020-01-16 14:38:00 -08:00
Matt Arsenault 91e758b732 AMDGPU: Move permlane discard vdst_in optimization
This case can be handled as a regular selection pattern, so move it
out of the weird post-isel folding code which doesn't have an exactly
equivalent place in GlobalISel.

I think it doesn't make much sense to do this optimization here
though, and it would be more useful in instcombine. There's not really
any new information that will be gained during lowering since these
inputs were known from the beginning.
2020-01-16 17:27:53 -05:00
Derek Schuff 3a05c3969c [WebAssembly] Track frame registers through VReg and local allocation
This change has 2 components:

Target-independent: add a method getDwarfFrameBase to TargetFrameLowering. It
describes how the Dwarf frame base will be encoded.  That can be a register (the
default), the CFA (which replaces NVPTX-specific logic in DwarfCompileUnit), or
a DW_OP_WASM_location descriptor.

WebAssembly: Allow WebAssemblyFunctionInfo::getFrameRegister to return the
correct virtual register instead of FP32/SP32 after WebAssemblyReplacePhysRegs
has run.  Make WebAssemblyExplicitLocals store the local it allocates for the
frame register. Use this local information to implement getDwarfFrameBase

The result is that the DW_AT_frame_base attribute is correctly encoded for each
subprogram, and each param and local variable has a correct DW_AT_location that
uses DW_OP_fbreg to refer to the frame base.

Differential Revision: https://reviews.llvm.org/D71681
2020-01-16 13:51:17 -08:00
Matt Arsenault f5d98543b8 AMDGPU: Remove outdated comment 2020-01-16 14:54:27 -05:00
Matt Arsenault e12b840abf AMDGPU/GlobalISel: Improve lowering of G_SEXT_INREG
Clamping the scalar is much better than lowering with superwide shifts
for types > s64.
2020-01-16 14:29:37 -05:00
Krzysztof Parzyszek 5f65065437 [Hexagon] Update autogeneated intrinsic information in LLVM 2020-01-16 13:11:18 -06:00
Matt Arsenault 0d0fce42b0 GlobalISel: Preserve load/store metadata in IRTranslator
This was dropping the invariant metadata on dead argument loads, so
they weren't deleted.

Atomics still need to be fixed the same way. Also, apparently store
was never preserving dereferencable which should also be fixed.
2020-01-16 13:49:43 -05:00
Krzysztof Parzyszek 8ee2d16896 [Hexagon] Add a target feature to disable compound instructions
This affects the following instructions:
Tag: M4_mpyrr_addr     Syntax: Ry32 = add(Ru32,mpyi(Ry32,Rs32))
Tag: M4_mpyri_addr_u2  Syntax: Rd32 = add(Ru32,mpyi(#u6:2,Rs32))
Tag: M4_mpyri_addr     Syntax: Rd32 = add(Ru32,mpyi(Rs32,#u6))
Tag: M4_mpyri_addi     Syntax: Rd32 = add(#u6,mpyi(Rs32,#U6))
Tag: M4_mpyrr_addi     Syntax: Rd32 = add(#u6,mpyi(Rs32,Rt32))
Tag: S4_addaddi        Syntax: Rd32 = add(Rs32,add(Ru32,#s6))
Tag: S4_subaddi        Syntax: Rd32 = add(Rs32,sub(#s6,Ru32))
Tag: S4_or_andix       Syntax: Rx32 = or(Ru32,and(Rx32,#s10))
Tag: S4_andi_asl_ri    Syntax: Rx32 = and(#u8,asl(Rx32,#U5))
Tag: S4_ori_asl_ri     Syntax: Rx32 = or(#u8,asl(Rx32,#U5))
Tag: S4_addi_asl_ri    Syntax: Rx32 = add(#u8,asl(Rx32,#U5))
Tag: S4_subi_asl_ri    Syntax: Rx32 = sub(#u8,asl(Rx32,#U5))
Tag: S4_andi_lsr_ri    Syntax: Rx32 = and(#u8,lsr(Rx32,#U5))
Tag: S4_ori_lsr_ri     Syntax: Rx32 = or(#u8,lsr(Rx32,#U5))
Tag: S4_addi_lsr_ri    Syntax: Rx32 = add(#u8,lsr(Rx32,#U5))
Tag: S4_subi_lsr_ri    Syntax: Rx32 = sub(#u8,lsr(Rx32,#U5))
2020-01-16 12:37:30 -06:00
stevewan bed7626f04 [PowerPC][AIX] Make PIC the default relocation model for AIX
Summary:
The `llc` tool currently defaults to Static relocation model and generates non-relocatable code for 32-bit Power.
This is not desirable on AIX where we always generate Position Independent Code (PIC). This patch makes PIC the default relocation model for AIX.

Reviewers: daltenty, hubert.reinterpretcast, DiggerLin, Xiangling_L, sfertile

Reviewed By: hubert.reinterpretcast

Subscribers: mgorny, wuzish, nemanjai, hiraditya, kbarton, jsji, shchenz, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72479
2020-01-16 13:07:36 -05:00
Matt Arsenault 4ca1ad85b7 AMDGPU/GlobalISel: Don't handle legacy buffer intrinsic 2020-01-16 11:31:12 -05:00
Matt Arsenault 9b2f3532c7 AMDGPU/GlobalISel: Select DS GWS intrinsics 2020-01-16 11:25:10 -05:00
Jay Foad 63f73545dd [GlobalISel] Pass MachineOperands into MachineIRBuilder helper methods
Reviewers: arsenm, aditya_nandakumar, aemerson

Subscribers: wdng, rovka, hiraditya, volkan, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72849
2020-01-16 16:04:21 +00:00
Sam Parker 760b175109 [ARM][LowOverheadLoops] Update liveness info
Recommitting e93e0d413f after reverting due to test failures, which
will hopefully now be fixed. Original commit message:

After expanding the pseudo instructions, update the liveness info.
We do this in a post-order traversal of the loop, including its
exit blocks and preheader(s).

Differential Revision: https://reviews.llvm.org/D72131
2020-01-16 15:44:25 +00:00
Anna Welker c24cf97960 [ARM][MVE] Enable extending gathers
Enables the masked gather pass to
create extending masked gathers.

Differential Revision: https://reviews.llvm.org/D72451
2020-01-16 15:24:54 +00:00
Kazushi (Jam) Marukawa 773ae62ff8 [VE] i64 arguments, return values and constants
Summary: Support for i64 arguments (in register), return values and constants along with tests.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D72776
2020-01-16 10:09:50 +01:00
Liu, Chen3 8fdafb7dce Insert a wait instruction after X87 instructions which could raise a
floating-point exception.

This patch also modifies some mayRaiseFPException flags which were set in D68854.

Differential Revision: https://reviews.llvm.org/D72750
2020-01-16 12:12:51 +08:00
Matt Arsenault c378e52cb9 Set some fast math attributes in setFunctionAttributes
This will provide a more consistent view to codegen for these
attributes. The current system is somewhat awkward, and the fields in
TargetOptions are reset based on the command line flag if the
attribute isn't set. By forcing these attributes with the flag, there
can never be an inconsistency in the behavior if code directly
inspects the attribute on the function without considering the command
line flags.
2020-01-15 22:23:18 -05:00
Craig Topper e445447921 [X86] When handling i64->f32 sint_to_fp on 32-bit targets only bitcast to f64 if sse2 is enabled.
The code is trying to copy the i64 value to an xmm register to
use a 64-bit store so that the 64-bit fild can benefit from
store forwarding.

But this trick only works if the f64 is going to be stored in an
XMM register. If we only have SSE1 then only float lives in an XMM
register. So this trick just causes two i32 stores, an f64
load into the x87, an f64 store from the x87, and a 64-bit fild. We end
up with an extra stack temporary and still didn't get store forwarding.

We might be able to use v2f32 here instead, but I didn't check. I
just wanted the code to make sense.

Found by inspection as I continue to stare too hard at our
int_to_fp conversions.
2020-01-15 18:26:28 -08:00
Matt Arsenault 711a17afaf AMDGPU/GlobalISel: Select exp with patterns
This does produce slightly different code. Now a unique IMPLICIT_DEF
is emitted for each of the implicit_def operands, rather than reusing
the same one.
2020-01-15 18:33:15 -05:00
Matt Arsenault eef92f25cc AMDGPU: Remove custom node for exports
I'm mildly worried about potentially reordering exp/exp_done with
IntrWriteMem on the intrinsic.

Requires hacking out the illegal type on SI, so manually select that
case during lowering.
2020-01-15 18:33:15 -05:00
Mircea Trofin 5466597fee [NFC] Refactor InlineResult for readability
Summary:
InlineResult is used both in APIs assessing whether a call site is
inlinable (e.g. llvm::isInlineViable) as well as in the function
inlining utility (llvm::InlineFunction). It means slightly different
things (can/should inlining happen, vs did it happen), and the
implicit casting may introduce ambiguity (casting from 'false' in
InlineFunction will default a message about hight costs,
which is incorrect here).

The change renames the type to a more generic name, and disables
implicit constructors.

Reviewers: eraman, davidxl

Reviewed By: davidxl

Subscribers: kerbowa, arsenm, jvesely, nhaehnle, eraman, hiraditya, haicheng, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72744
2020-01-15 13:34:20 -08:00
Amara Emerson 2e39ea726e Revert "Revert rG6078f2fedcac5797ac39ee5ef3fd7a35ef1202d5 - "[AArch64][GlobalISel]: Support @llvm.{return,frame}address selection.""
The original change wasn't constraining the operand regclasses which broke EXPENSIVE_CHECKS.
2020-01-15 10:13:11 -08:00
Mark Murray da9d57d2c2 [ARM][MVE][Intrinsics] Add VMINAQ, VMINNMAQ, VMAXAQ, VMAXNMAQ intrinsics.
Summary: Add VMINAQ, VMINNMAQ, VMAXAQ, VMAXNMAQ intrinsics and unit tests.

Reviewers: simon_tatham, miyuki, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72761
2020-01-15 17:20:15 +00:00
Matt Arsenault 936483fb7d GlobalISel: Implement lower for G_BITCAST
Bitcast only really applies between scalars and vectors. Implement as
an unmerge and remerge. The test needs to tolerate failure since one
of the unmerges currently fails to legalize.
2020-01-15 08:58:58 -05:00
Matt Arsenault bd7658a212 AMDGPU: Partially directly select llvm.amdgcn.interp.p1.f16
The 16 bank LDS case is complicated due to using multiple
instructions. If I attempt to write a pattern for it, the generated
selector incorrectly places the copy to m0 after the first
instruction, so that needs to be separately addressed.

Also fix not gluing the copy to m0 to the second operation in the
second half of the 16 bank lowering.
2020-01-15 08:58:58 -05:00
Nemanja Ivanovic 9c64f04df8 [PowerPC] Legalize saturating vector add/sub
These intrinsics and the corresponding ISD nodes were recently added. PPC has
instructions that do this for vectors. Legalize them and add patterns to emit
the satuarting instructions.

Differential revision: https://reviews.llvm.org/D71940
2020-01-15 07:00:38 -06:00
Simon Pilgrim e26a78e708 Revert rG6078f2fedcac5797ac39ee5ef3fd7a35ef1202d5 - "[AArch64][GlobalISel]: Support @llvm.{return,frame}address selection."
These intrinsics expand to a variable number of instructions so just like in
ISelLowering.cpp we use custom code to deal with them.

Committing Tim's original patch.

Differential Revision: https://reviews.llvm.org/D65656
----
Breaks EXPENSIVE_CHECKS builds.
2020-01-15 12:37:37 +00:00
Zakk Chen 7bc58a779a [RISCV] Support ABI checking with per function target-features
If users don't specify -mattr, the default target features come
from the IR attribute.

Reviewers: lenary, asb

Reviewed By: lenary, asb

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70837
2020-01-15 04:35:01 -08:00
Zakk Chen 3bc2860e92 Revert "[RISCV] Support ABI checking with per function target-features"
This reverts commit 109e4d12ed.
2020-01-15 04:32:57 -08:00
Simon Pilgrim 7b15865225 Fix "pointer is null" static analyzer warning. NFCI.
Use cast<> instead of dyn_cast<> since the pointer is always dereferenced and cast<> will perform the null assertion for us.
2020-01-15 12:18:11 +00:00
Benjamin Kramer 06cfcdcca7 [AArch64][SVE] Fold variable into assert to silence unused variable warnings in Release builds 2020-01-15 12:50:27 +01:00
Cullen Rhodes 93a4dede3a [AArch64][SVE] Add ptest intrinsics
Summary:
Implements the following intrinsics:

    * @llvm.aarch64.sve.ptest.any
    * @llvm.aarch64.sve.ptest.first
    * @llvm.aarch64.sve.ptest.last

Reviewers: sdesmalen, efriedma, dancgr, mgudim, cameron.mcinally, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72398
2020-01-15 11:15:01 +00:00
Zakk Chen 109e4d12ed [RISCV] Support ABI checking with per function target-features
If users don't specify -mattr, the default target features come
from the IR attribute.
2020-01-15 02:30:43 -08:00
cdevadas 0dc6c249bf [AMDGPU] Invert the handling of skip insertion.
The current implementation of skip insertion (SIInsertSkip) makes it a
mandatory pass required for correctness. Initially, the idea was to
have an optional pass. This patch inserts the s_cbranch_execz upfront
during SILowerControlFlow to skip over the sections of code when no
lanes are active. Later, SIRemoveShortExecBranches removes the skips
for short branches, unless there is a sideeffect and the skip branch is
really necessary.

This new pass will replace the handling of skip insertion in the
existing SIInsertSkip Pass.

Differential revision: https://reviews.llvm.org/D68092
2020-01-15 15:18:16 +05:30
Kazushi (Jam) Marukawa 064859bde7 [VE] Minimal codegen for empty functions
Summary:
This patch implements minimal VE code generation for empty function bodies (no args, no value return).

Contents

* empty function code generation test.
* Minimal function prologue & epilogue emission
* Instruction formats and instruction definitions as far as required for the empty function prologue & epilogue.
* I64 register class definitions.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D72598
2020-01-15 09:55:16 +01:00
Craig Topper be8f217b18 [X86] Don't call LowerUINT_TO_FP_i32 for i32->f80 on 32-bit targets with sse2.
We were performing an emulated i32->f64 in the SSE registers, then
storing that value to memory and doing an extload into the X87
domain.

After this patch we'll now just store the i32 to memory along
with an i32 0. Then do a 64-bit FILD to f80 completely in the X87
unit. This matches what we do without SSE.
2020-01-15 00:43:07 -08:00
Justin Hibbits 36eedfcb3c [PowerPC] Fix powerpcspe subtarget enablement in llvm backend
Summary:
As currently written, -target powerpcspe will enable SPE even if the
feature is disabled later on the command line.  Instead, change this to
just set the default CPU to 'e500' instead of a generic CPU.

As part of this, add FeatureSPE to the e500 definition.

Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D72673
2020-01-14 22:07:03 -06:00
Tom Stellard 0dbcb36394 CMake: Make most target symbols hidden by default
Summary:
For builds with LLVM_BUILD_LLVM_DYLIB=ON and BUILD_SHARED_LIBS=OFF
this change makes all symbols in the target specific libraries hidden
by default.

A new macro called LLVM_EXTERNAL_VISIBILITY has been added to mark symbols in these
libraries public, which is mainly needed for the definitions of the
LLVMInitialize* functions.
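
A plausible sketch of how such a macro can work (the actual definition in
LLVM's headers may differ; LLVMInitializeX86Target is just one example of
a public entry point):

  #if defined(__GNUC__) && !defined(_WIN32)
  #define LLVM_EXTERNAL_VISIBILITY __attribute__((visibility("default")))
  #else
  #define LLVM_EXTERNAL_VISIBILITY
  #endif

  // Target libraries are built with hidden visibility by default; only
  // the intended entry points opt back in:
  extern "C" LLVM_EXTERNAL_VISIBILITY void LLVMInitializeX86Target();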

This patch reduces the number of public symbols in libLLVM.so by about
25%.  This should improve load times for the dynamic library and also
make ABI checker tools, like abidiff, require less memory when analyzing
libLLVM.so.

One side-effect of this change is that for builds with
LLVM_BUILD_LLVM_DYLIB=ON and LLVM_LINK_LLVM_DYLIB=ON some unittests that
access symbols that are no longer public will need to be statically linked.

Before and after public symbol counts (using gcc 8.2.1, ld.bfd 2.31.1):
nm before/libLLVM-9svn.so | grep ' [A-Zuvw] ' | wc -l
36221
nm after/libLLVM-9svn.so | grep ' [A-Zuvw] ' | wc -l
26278

Reviewers: chandlerc, beanz, mgorny, rnk, hans

Reviewed By: rnk, hans

Subscribers: merge_guards_bot, luismarques, smeenai, ldionne, lenary, s.egerton, pzheng, sameer.abuasal, MaskRay, wuzish, echristo, Jim, hiraditya, michaelplatings, chapuni, jholewinski, arsenm, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, javed.absar, sbc100, jgravelle-google, aheejin, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, zzheng, edward-jones, mgrang, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, kristina, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D54439
2020-01-14 19:46:52 -08:00
Philip Reames 1a7398eca2 [BranchAlign] Add master --x86-branches-within-32B-boundaries flag
This flag was originally part of D70157, but was removed as we carved away pieces of the review. Since we have the nop support checked in, and it appears mature(*), I think it's time to add the master flag. For now, it will default to nop padding, but once the prefix padding support lands, we'll update the defaults.

(*) I can now confirm that downstream testing of the changes which have landed to date - nop padding and compiler support for suppressions - is passing all of the functional testing we've thrown at it. There might still be something lurking, but we've gotten enough coverage to be confident of the basic approach.

Note that the new flag can be used either when assembling an .s file, or when using the integrated assembler directly from the compiler. The latter will use all of the suppression mechanisms and should always generate correct code. We don't yet have assembly syntax for the suppressions, so passing this directly to the assembler w/a raw .s file may result in broken code. Use at your own risk.

Also note that this isn't the wiring for the clang option. I think the most recent review for that is D72227, but I've lost track, so that might be off.

Differential Revision: https://reviews.llvm.org/D72738
2020-01-14 18:17:53 -08:00
Reid Kleckner 40cd26c700 [Win64] Handle FP arguments more gracefully under -mno-sse
Pass small FP values in GPRs or stack memory according to the normal
convention. This is what gcc -mno-sse does on Win64.

I adjusted the conditions under which we emit an error to check if the
argument or return value would be passed in an XMM register when SSE is
disabled. This has a side effect of no longer emitting an error for FP
arguments marked 'inreg' when targetting x86 with SSE disabled. Our
calling convention logic was already assigning it to FP0/FP1, and then
we emitted this error. That seems unnecessary, we can ignore 'inreg' and
compile it without SSE.

Reviewers: jyknight, aemerson

Differential Revision: https://reviews.llvm.org/D70465
2020-01-14 17:19:35 -08:00
Craig Topper 76291e1158 [X86] Drop an unneeded FIXME. NFC
The extload on X87 is free.
2020-01-14 17:05:46 -08:00
Craig Topper 57eb56b839 [X86] Swap the 0 and the fudge factor in the constant pool for the 32-bit mode i64->f32/f64/f80 uint_to_fp algorithm.
This allows us to generate better code for selecting the fixup
to load.

Previously when the sign was set we had to load offset 0. And
when it was clear we had to load offset 4. This required a testl,
setns, zero extend, and finally a mul by 4. By switching the offsets
we can just shift the sign bit into the lsb and multiply it by 4.
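
As a standalone C++ sketch of the new offset computation (a scalar model
of the addressing trick only; the helper name is illustrative, and after
the swap the sign-set case reads offset 4, the sign-clear case offset 0):

  #include <cstdint>
  #include <cstdio>

  // Constant-pool offset selection: shift the sign bit of the i64 input
  // into the lsb and scale it by the 4-byte entry size.
  uint32_t fudgeOffset(int64_t X) {
    return (uint32_t)(((uint64_t)X >> 63) * 4);
  }

  int main() {
    printf("%u %u\n", fudgeOffset(-1), fudgeOffset(1)); // prints "4 0"
  }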
2020-01-14 17:05:23 -08:00
Michael Liao 01a4b83154 [codegen,amdgpu] Enhance MIR DIE and re-arrange it for AMDGPU.
Summary:
- `dead-mi-elimination` assumes MIR in the SSA form and cannot be
  arranged after phi elimination or DeSSA. It's enhanced to handle the
  dead register definition by skipping the use check on it. Once a register
  def is `dead`, all its uses, if any, should be `undef`.
- Re-arrange the DIE in RA phase for AMDGPU by placing it directly after
  `detect-dead-lanes`.
- Many relevant tests are refined due to different register assignment.

Reviewers: rampitec, qcolombet, sunfish

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72709
2020-01-14 19:26:15 -05:00
Amara Emerson 6078f2fedc [AArch64][GlobalISel]: Support @llvm.{return,frame}address selection.
These intrinsics expand to a variable number of instructions so just like in
ISelLowering.cpp we use custom code to deal with them.

Committing Tim's original patch.

Differential Revision: https://reviews.llvm.org/D65656
2020-01-14 13:41:21 -08:00
Danilo Carvalho Grael 26d96126a0 [SVE] Add patterns for MUL immediate instruction.
Summary: Add the missing MUL pattern for integer immediate instructions.

Reviewers: sdesmalen, huntergr, efriedma, c-rhodes, kmclaughlin

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72654
2020-01-14 15:26:19 -05:00
lewis-revill cd800f3b22 [RISCV] Allow shrink wrapping for RISC-V
Enabling shrink wrapping requires ensuring the insertion point of the
epilogue is correct for MBBs without a terminator, in which case the
instruction to adjust the stack pointer is the last instruction in the
block.

Differential Revision: https://reviews.llvm.org/D62190
2020-01-14 18:59:11 +00:00
Craig Topper 98c54fb1fe [X86] Directly emit a BROADCAST_LOAD from constant pool in lowerUINT_TO_FP_vXi32 to avoid double loads seen in D71971
By directly emitting the constants as a constant pool load we seem to avoid the build_vector/extract_subvector combines that resulted in the duplicate loads we had before.

Differential Revision: https://reviews.llvm.org/D72307
2020-01-14 10:50:39 -08:00
diggerlin eb23cc136b [AIX][XCOFF] Supporting the ReadOnlyWithRel SectionKnd
SUMMARY:
In this patch we put a global variable whose Csect's SectionKind is "ReadOnlyWithRel" into the Data Section.

Reviewers: hubert.reinterpretcast,jasonliu,Xiangling_L
Subscribers: wuzish, nemanjai, hiraditya

Differential Revision: https://reviews.llvm.org/D72461
2020-01-14 13:21:49 -05:00
Sjoerd Meijer a08c0adee0 [ARM][MVE] VTP Block Pass fix
Fix a missing and broken test: 2 VPT blocks predicated on the same VCMP
instruction that can be folded. The problem was that for each VPT block, we
record the predicate statements with a list, but the same instruction was added
twice. Thus, we were running into an assert trying to remove the same instruction
twice. To avoid this the instructions are now recorded with a set.

Differential Revision: https://reviews.llvm.org/D72699
2020-01-14 16:10:55 +00:00
Sanne Wouda 1cc8fff420 [AArch64] Fix save register pairing for Windows AAPCS
Summary:
On Windows, when a function does not have an unwind table (for example, EH
filtering funclets), we don't correctly pair FP and LR to form the frame record
in all circumstances.

Fix this by invalidating a pair when the second register is FP when compiling
for Windows, even when CFI is not needed.

Fixes PR44271 introduced by D65653.

Reviewers: efriedma, sdesmalen, rovka, rengolin, t.p.northover, thegameg, greened

Reviewed By: rengolin

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71754
2020-01-14 15:08:27 +00:00
Xiangling Liao 25a8aec7f3 [AIX] ExternalSymbolSDNode lowering
For memcpy/memset/memmove etc., replace ExternalSymbolSDNode with an
MCSymbolSDNode, which has a dot prefix before the function name as the
entry point symbol.

Differential Revision: https://reviews.llvm.org/D70718
2020-01-14 09:39:02 -05:00
Benjamin Kramer df186507e1 Make helper functions static or move them into anonymous namespaces. NFC. 2020-01-14 14:06:37 +01:00
Simon Tatham 71d5454b37 [ARM,MVE] Use the new Tablegen `defvar` and `if` statements.
Summary:
This cleans up a lot of ugly `foreach` bodges that I've been using to
work around the lack of those two language features. Now they both
exist, I can make then all into something more legible!

In particular, in the common pattern in `ARMInstrMVE.td` where a
multiclass defines an `Instruction` instance plus one or more `Pat` that
select it, I've used a `defvar` to wrap `!cast<Instruction>(NAME)` so
that the patterns themselves become a little more legible.

Replacing a `foreach` with a `defvar` removes a level of block
structure, so several pieces of code have their indentation changed by
this patch. Best viewed with whitespace ignored.

NFC: the output of `llvm-tblgen -print-records` on the two affected
Tablegen sources is exactly identical before and after this change, so
there should be no effect at all on any of the other generated files.

Reviewers: MarkMurrayARM, miyuki

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, dmgreen, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72690
2020-01-14 12:08:03 +00:00
Sam Parker e27632c302 [ARM][LowOverheadLoops] Allow all MVE instrs.
We have a whitelist of instructions that we allow when tail
predicating, since these are trivial ones that we've deemed need no
special handling. Now change ARMLowOverheadLoops to allow the
non-trivial instructions if they're contained within a valid VPT
block. Since a valid block is one that is predicated upon the VCTP so
we know that these non-trivial instructions will still behave as
expected once the implicit predication is used instead.

This also fixes a previous test failure.

Differential Revision: https://reviews.llvm.org/D72509
2020-01-14 12:03:58 +00:00
Sam Parker bad6032bc1 [ARM][LowOverheadLoops] Change predicate inspection
Use the already provided helper function to get the operand type so
that we can detect whether the vpr is being used as a predicate or
not. Also use existing helpers to get the predicate indices when
converting the vpt blocks. This enables us to support both types of
vpr predicate operand.

Differential Revision: https://reviews.llvm.org/D72504
2020-01-14 11:47:34 +00:00
Diogo Sampaio d94d079a6a [ARM][Thumb2] Fix ADD/SUB invalid writes to SP
Summary:
This patch fixes pr23772  [ARM] r226200 can emit illegal thumb2 instruction: "sub sp, r12, #80".
The violation was that SUB and ADD (reg, immediate) instructions can only write to SP if the source register is also SP. So the above instruction was unpredictable.
To enforce that the instruction t2(ADD|SUB)ri does not write to SP, we now require the destination register to be rGPR (that excludes PC and SP).
Unlike the ARM specification, which defines one instruction that can read from SP and one that can't, here we inserted one that can't write to SP, and another that can only write to SP, so as to reuse most of the hard-coded size optimizations.
This change uncovered that emitting Thumb2 Reg plus Immediate could not previously emit all variants of ADD SP, SP, #imm instructions, so it was refactored to be able to (see test/CodeGen/Thumb2/mve-stacksplot.mir where we use a subw sp, sp, Imm12 variant).
It also uncovered a disassembly issue of adr.w instructions, which were only written as SUBW instructions (see llvm/test/MC/Disassembler/ARM/thumb2.txt).
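
The underlying architectural rule can be written as a one-line predicate;
a C++ sketch with illustrative names, not the backend's actual check:

  // A Thumb2 ADD/SUB (register, immediate) may write SP only when it
  // also reads SP; anything else is architecturally unpredictable.
  bool isValidT2AddSubRI(unsigned DstReg, unsigned SrcReg, unsigned SP) {
    return DstReg != SP || SrcReg == SP;
  }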

Reviewers: eli.friedman, dmgreen, carwil, olista01, efriedma, andreadb

Reviewed By: efriedma

Subscribers: gbedwell, john.brawn, efriedma, ostannard, kristof.beyls, hiraditya, dmgreen, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70680
2020-01-14 11:47:19 +00:00
Sam Parker e73b20c57d [ARM][MVE] Disallow VPSEL for tail predication
Due to the current way that we collect predicated instructions, we
can't easily handle vpsel in tail predicated loops. There are a
couple of issues:
1) It will use the VPR as a predicate operand, but doesn't have to be
   inside a VPT block, which means we can assert while building up
   the VPT block because we don't find another VPST to begin a new
   one.
2) VPSEL still requires a VPR operand even after tail predicating,
   which means we can't remove it unless there is another
   instruction, such as vcmp, that can provide the VPR def.

The first issue should be a relatively simple fix in the logic of the
LowOverheadLoops pass, whereas the second will require us to
represent the 'implicit' tail predication with an explicit value.

Differential Revision: https://reviews.llvm.org/D72629
2020-01-14 11:41:17 +00:00
Anna Welker 72ca86fd34 [ARM][MVE] Masked gathers from base + vector of offsets
Enables the masked gather pass to create a masked
gather loading from a base and vector of offsets.
This also enables v8i16 and v16i8 gather loads.

Differential Revision: https://reviews.llvm.org/D72330
2020-01-14 10:33:52 +00:00
Stanislav Mekhanoshin ad741853c3 [AMDGPU] Model distance to instruction in bundle
This change allows modelling the height of an instruction
within a bundle for latency adjustment purposes.

Differential Revision: https://reviews.llvm.org/D72669
2020-01-14 01:18:59 -08:00
Stanislav Mekhanoshin eca4474587 [AMDGPU] Fix getInstrLatency() always returning 1
We do not have InstrItinerary so the generic getInstrLatency() was always
defaulting to return 1 cycle. We need to use TargetSchedModel instead
to compute an instruction's latency.

Differential Revision: https://reviews.llvm.org/D72655
2020-01-14 01:08:30 -08:00
Craig Topper b1dcd84c7e [X86] Copy the nofpexcept flag when folding a load into an instruction using the load folding tables. 2020-01-13 22:02:45 -08:00
Eli Friedman e68e4cbcc5 [GlobalISel] Change representation of shuffle masks in MachineOperand.
We're planning to remove the shufflemask operand from ShuffleVectorInst
(D72467); fix GlobalISel so it doesn't depend on that Constant.

The change to prelegalizercombiner-shuffle-vector.mir happens because
the input contains a literal "-1" in the mask (so the parser/verifier
weren't really handling it properly). We now treat it as equivalent to
"undef" in all contexts.

Differential Revision: https://reviews.llvm.org/D72663
2020-01-13 16:55:41 -08:00
Fangrui Song 64a93afc3c [X86][Disassembler] Fix a bug when disassembling an empty string
readPrefixes() assumes insn->bytes is non-empty. The code path is not
exercised in llvm-mc because llvm-mc does not feed empty input to
MCDisassembler::getInstruction().

This bug was uncovered by a5994c789a.
An empty string did not crash before because the deleted regionReader()
allowed UINT64_C(-1) as insn->readerCursor.

  Bytes.size() <= Address - R->Base
  0 <= UINT64_C(-1) - UINT32_C(-1)
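
A small C++ sketch of such a reader-style bounds check (illustrative
names, not the disassembler's actual code), showing why the old sentinel
cursor made the check trip for empty input: the unsigned subtraction
wraps to a huge value, so Size <= huge rejected the read by accident.

  #include <cstdint>
  #include <cstddef>

  // Serve the byte at `Address` from a region starting at `Base`.
  bool readByte(const uint8_t *Bytes, size_t Size, uint64_t Address,
                uint64_t Base, uint8_t &Out) {
    if (Size <= Address - Base) // wraps when Address < Base
      return false;
    Out = Bytes[Address - Base];
    return true;
  }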
2020-01-13 10:42:21 -08:00
Matt Arsenault 203801425d AMDGPU/GlobalISel: Select llvm.amdgcn.ds.ordered.{add|swap} 2020-01-13 13:09:38 -05:00
Matt Arsenault 3d8f1b2d22 AMDGPU/GlobalISel: Set insert point after waterfall loop
The current users of the waterfall loop utility functions do not make
use of the restored original insert point. The insertion is either
done, or they set the insert point somewhere else. A future change
will want to insert instructions after the waterfall loop, but
figuring out the point after the loop is more difficult than ensuring
the insert point is there after the loop.
2020-01-13 12:51:05 -05:00
Matt Arsenault ca19d7a399 AMDGPU/GlobalISel: Fix branch targets when emitting SI_IF
The branch target needs to be changed depending on whether there is an
unconditional branch or not.

Loops also need to be similarly fixed, but compiling a simple testcase
end to end requires another set of patches that aren't upstream yet.
2020-01-13 12:51:05 -05:00
Matt Arsenault 7d9b0a61c3 AMDGPU/GlobalISel: Simplify assert 2020-01-13 12:51:05 -05:00
David Green 90555d9253 [Scheduler] Remove superfluous casts. NFC 2020-01-13 16:34:13 +00:00
Danilo Carvalho Grael 2d7e757a83 [AArch64][SVE] Add patterns for some arith SVE instructions.
Summary: Add patterns for the following instructions:
- smax, smin, umax, umin

Reviewers: sdesmalen, huntergr, rengolin, efriedma, c-rhodes, mgudim, kmclaughlin

Subscribers: amehsan

Differential Revision: https://reviews.llvm.org/D71779
2020-01-13 11:39:42 -05:00
Luís Marques 043c5eafa8 [RISCV] Handle globals and block addresses in asm operands
Summary: These seem to be the machine operand types currently needed by the
RISC-V target.

Reviewers: asb, lenary
Reviewed By: lenary
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72275
2020-01-13 15:34:56 +00:00
Pablo Barrio da33762de8 [AArch64] Emit HINT instead of PAC insns in Armv8.2-A or below
Summary:
The Pointer Authentication Extension (PAC) was added in Armv8.3-A. Some
instructions are implemented in the HINT space to allow compiling code
common to CPUs regardless of whether they feature PAC or not, and still
benefit from PAC protection in the PAC-enabled CPUs.

The 8.3-specific mnemonics were enabled for any architecture, and
LLVM was emitting them in assembly files when PAC code generation was
enabled. This was ok for compilations where both LLVM codegen and the
integrated assembler were used. However, the LLVM codegen was not
compatible with other assemblers (e.g. GAS). Given the fact that the
approach from these assemblers (i.e. to disallow Armv8.3-A mnemonics if
compiling for Armv8.2-A or lower) is entirely reasonable, this patch makes
LLVM emit HINT when building for Armv8.2-A and below, instead of
PACIASP, AUTIASP and friends. Then, LLVM assembly should be compatible
with other assemblers.

Reviewers: samparker, chill, LukeCheeseman

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71658
2020-01-13 14:14:48 +00:00
Alex Richardson 8e8ccf4712 [MIPS] Don't emit R_(MICRO)MIPS_JALR relocations against data symbols
The R_(MICRO)MIPS_JALR optimization only works when used against functions.
Using the relocation against a data symbol (e.g. function pointer) will
cause some linkers that don't ignore the hint in this case (e.g. LLD prior
to commit 5bab291b7b) to generate a relative branch to the data symbol
which crashes at run time. Before this patch, LLVM was erroneously emitting
these relocations against local-dynamic TLS function pointers and global
function pointers with internal visibility.

Reviewers: atanasyan, jrtc27, vstefanovic
Reviewed By: atanasyan
Differential Revision: https://reviews.llvm.org/D72571
2020-01-13 14:14:03 +00:00
Simon Pilgrim 7f1cf7d5f6 [X86] Fix MSVC "truncation from 'int' to 'bool'" warning. NFCI. 2020-01-13 11:08:12 +00:00
Sjoerd Meijer add04b9653 ARMLowOverheadLoops: return earlier to avoid printing irrelevant dbg msg. NFC 2020-01-13 10:24:10 +00:00
KAWASHIMA Takahiro 10c11e4e2d This option allows selecting the TLS size in the local exec TLS model,
which is the default TLS model for non-PIC objects. This allows large/
many thread-local variables or compact/fast code in an executable.

The specification is the same as that of GCC. For example, the code model
option precedes the TLS size option.

TLS access models other than local-exec are not changed. This means
support for the large code model is only available with the local-exec
TLS model.

Patch By KAWASHIMA Takahiro (kawashima-fj <t-kawashima@fujitsu.com>)
Reviewers: dmgreen, mstorsjo, t.p.northover, peter.smith, ostannard
Reviewed By: peter.smith
Committed by: peter.smith

Differential Revision: https://reviews.llvm.org/D71688
2020-01-13 10:16:53 +00:00
Sam Elliott c9babcbda7 [RISCV] Collect Statistics on Compressed Instructions
Summary:
It is useful to keep statistics on how many instructions we have
compressed, so we can see if future changes are increasing or decreasing this
number.

Reviewers: asb, luismarques

Reviewed By: asb, luismarques

Subscribers: xbolva00, sameer.abuasal, hiraditya, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, pzheng, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67495
2020-01-13 10:04:05 +00:00
Craig Topper 52aaf4a275 [X86] Use SDNPOptInGlue instead of SDNPInGlue on a couple SDNodes.
At least one of these is used without a Glue. This doesn't seem
to change the X86GenDAGISel.inc output so maybe it doesn't matter?
2020-01-12 21:11:18 -08:00
Matt Arsenault 555e7ee04c AMDGPU/GlobalISel: Don't use XEXEC class for SGPRs
We don't use the xexec register classes for arbitrary values
anymore. Avoids a test variance between GlobalISel and SelectionDAG.
2020-01-12 22:44:51 -05:00
Matt Arsenault a10527cd37 AMDGPU/GlobalISel: Copy type when inserting readfirstlane
getDefIgnoringCopies will fail to find any def if no type is set when we
try to use it on the use's operand, so propagate the type.
2020-01-12 22:44:51 -05:00
James Clarke 0113cf193f [RISCV] Check register class for AMO memory operands
Summary:
AMO memory operands use a custom parser in order to accept both (reg)
and 0(reg). However, the validation predicate used for these operands
was only checking that they were registers, and not the register class,
so non-GPRs (such as FPRs) were also accepted. Thus, fix this by making
the predicate check that they are GPRs.

Reviewers: asb, lenary

Reviewed By: asb, lenary

Subscribers: hiraditya, rbar, johnrusso, simoncook, sabuasal, niosHD, kito-cheng, shiva0217, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72471
2020-01-13 00:50:37 +00:00
Fangrui Song ebd26cc8c4 [PowerPC] Delete PPCDarwinAsmPrinter and PPCMCAsmInfoDarwin
Darwin support has been removed.

Reviewed By: nemanjai

Differential Revision: https://reviews.llvm.org/D72063
2020-01-12 11:02:02 -08:00
Simon Pilgrim 66e39067ed [X86][AVX] Use lowerShuffleAsLanePermuteAndSHUFP to lower binary v4f64 shuffles.
Only perform this if we are shuffling lower and upper lane elements across the lanes (otherwise splitting to lower xmm shuffles would be better).

This is a regression if we shuffle build_vectors due to getVectorShuffle canonicalizing 'blend of splat' build vectors; for now I've set this not to shuffle build_vector nodes at all to avoid this.
2020-01-12 12:29:41 +00:00