Commit Graph

50854 Commits

Petr Hosek fcbec02ea6 [AArch64] Support reserving arbitrary general purpose registers
This is a follow-up to D48580 and D48581 that allows reserving arbitrary
general purpose registers, with the exception of registers that have a
special purpose (X8, X16-X18, X29, X30) and registers used by LLVM
(X0, X19). This change also generalizes some of the existing logic to
rely entirely on values generated from tablegen.

Differential Revision: https://reviews.llvm.org/D56305

llvm-svn: 353957
2019-02-13 17:28:47 +00:00
Diana Picus aa4118a873 [ARM GlobalISel] Support G_SELECT for Thumb2
Same as ARM mode, but with slightly different opcodes.

llvm-svn: 353938
2019-02-13 11:25:32 +00:00
Anton Afanasyev ca9aff9353 [X86][SLP] Enable SLP vectorization for 128-bit horizontal X86 instructions (add, sub)
Try to use 64-bit SLP vectorization. In addition to horizontal instructions,
this change triggers optimizations for partial vector operations (for instance,
using the low halves of the 128-bit registers xmm0 and xmm1 to multiply
<2 x float> by <2 x float>).

Fixes llvm.org/PR32433

llvm-svn: 353923
2019-02-13 08:26:43 +00:00
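As a rough illustration (not taken from the patch or its tests), this is the kind of scalar code the change targets: two independent multiplies on adjacent floats that SLP can now turn into a single <2 x float> operation in the low half of an XMM register.

    // Two independent multiplies on adjacent floats; with this change the SLP
    // vectorizer may emit one <2 x float> multiply using the low halves of
    // xmm registers instead of two scalar mulss instructions.
    void mul2(const float *a, const float *b, float *out) {
      out[0] = a[0] * b[0];
      out[1] = a[1] * b[1];
    }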
Craig Topper 9b61f48e4b [X86] Use default expansion for (i64 fp_to_uint f80) when avx512 is enabled on 64-bit targets to match what happens without avx512.
In 64-bit mode prior to avx512 we use Expand, but with avx512 we need to make f32/f64 conversions Legal, so we use Custom and then do our own expansion for f80. However, that custom expansion produced codegen differences relative to avx2. This patch corrects that.

llvm-svn: 353921
2019-02-13 07:42:34 +00:00
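For context, a minimal sketch (assumed, not from the commit) of source that produces the (i64 fp_to_uint f80) node on an x86-64 target, where long double is the 80-bit x87 type:

    // This cast lowers to (i64 fp_to_uint f80); with AVX-512 enabled it now
    // goes through the same default expansion as the non-AVX-512 path, so the
    // generated code matches the avx2 baseline.
    unsigned long long toU64(long double x) {
      return (unsigned long long)x;
    }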
Craig Topper 3099e442a6 [X86] Refactor the FP_TO_INTHelper interface. NFCI
- Pull the final stack load creation from the two callers into the helper.
- Return a single SDValue instead of a std::pair.
- Remove the Replace flag, which isn't really needed.

llvm-svn: 353920
2019-02-13 07:42:31 +00:00
Matt Arsenault 4cd9509e1d AMDGPU: Try to use function specific ST
Subtargets are a function-level property, so ideally we would eliminate
every place that needs to check the global one. Rename the function to
try to avoid confusion.

llvm-svn: 353900
2019-02-12 23:44:13 +00:00
Matt Arsenault d24296e282 AMDGPU: Ignore CodeObjectV3 when inlining
This was inhibiting inlining of library functions when clang was
invoking the inliner directly. This is papering over a bit of a mess in
subtarget feature handling, and this shouldn't be a subtarget feature.
The behavior differs depending on whether you are using a -mattr flag
in clang, llc, or opt.

llvm-svn: 353899
2019-02-12 23:30:11 +00:00
Jonas Paulsson 749dc51e45 [SystemZ] Remember to cast value to void to disable warning.
Hopefully fixes buildbot problems.

llvm-svn: 353898
2019-02-12 23:13:18 +00:00
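The idiom in question, sketched generically rather than copied from the SystemZ change:

    // Casting an otherwise-unused value to void tells the compiler the lack
    // of further uses is intentional, silencing unused-variable/unused-result
    // style warnings.
    int computeValue();

    void example() {
      int V = computeValue();
      (void)V;
    }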
Konstantin Zhuravlyov 6220d62e5c AMDGPU/NFC: Remove SubtargetFeatureISAVersion since it is not used anywhere
llvm-svn: 353892
2019-02-12 22:49:49 +00:00
Konstantin Zhuravlyov acb231c8d8 AMDGPU: Remove duplicate processor (gfx900)
llvm-svn: 353889
2019-02-12 22:29:25 +00:00
Sean Fertile 9850a48275 Fix undefined behaviour in PPCInstPrinter::printBranchOperand.
Fix the undefined behaviour introduced by my previous patch r353865 (left
shifting a potentially negative value), which was caught by the bots that run
UBSan.

llvm-svn: 353874
2019-02-12 20:03:04 +00:00
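A hedged sketch of the usual fix for this class of UB (the names here are illustrative, not the actual PPCInstPrinter code): do the shift on the unsigned representation, then convert back.

    #include <cstdint>

    // Left-shifting a negative signed value is undefined behaviour; shifting
    // the unsigned representation and converting back is the standard
    // UBSan-clean equivalent.
    int64_t scaleOffset(int64_t Imm) {
      return static_cast<int64_t>(static_cast<uint64_t>(Imm) << 2);
    }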
Nikita Popov a3be17ea1c [AArch64] Expand v8i8 cttz (PR39729)
Fix for https://bugs.llvm.org/show_bug.cgi?id=39729.

Rather than adding just a case for v8i8 I'm setting cttz to expand
for all vector types.

Differential Revision: https://reviews.llvm.org/D58008

llvm-svn: 353872
2019-02-12 18:55:53 +00:00
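Source along these lines (an assumption, not the PR's exact reproducer) can be vectorized into a v8i8 cttz node, which now expands instead of hitting an unhandled case during selection:

    // Per-byte count-trailing-zeros over eight bytes; the loop vectorizer can
    // form a v8i8 cttz here, which AArch64 now expands like other vector
    // types rather than failing to select it.
    void cttz8(const unsigned char *in, unsigned char *out) {
      for (int i = 0; i < 8; ++i)
        out[i] = in[i] ? (unsigned char)__builtin_ctz(in[i]) : 8;
    }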
Jonas Paulsson 34bead750c [SystemZ] Use VGM whenever possible to load FP immediates.
isFPImmLegal() has been extended to recognize certain FP immediates that can
be built with VGM (Vector Generate Mask).

These scalar FP immediates (that were previously loaded from the constant
pool) are now selected as VGMF/VGMG in Select().

Review: Ulrich Weigand
https://reviews.llvm.org/D58003

llvm-svn: 353867
2019-02-12 18:06:06 +00:00
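As a rough illustration of the class of immediates involved (the exact set VGM covers is taken on trust from the patch): an FP constant whose bit pattern is a single contiguous run of ones no longer needs a constant-pool load.

    // -2.0 has the IEEE-754 double pattern 0xC000000000000000, i.e. a
    // contiguous run of two set bits, so it can be generated in a vector
    // register with VGMG instead of being loaded from the constant pool.
    double minusTwo() { return -2.0; }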
Sean Fertile c069452027 [PowerPC] Fix printing of negative offsets in call instruction disassembly.
llvm-svn: 353865
2019-02-12 17:48:22 +00:00
Jessica Paquette 0e71e73faa [GlobalISel][AArch64] Select llvm.bswap* for non-vector types
This teaches the IRTranslator to emit G_BSWAP when it runs into
Intrinsic::bswap. This allows us to select G_BSWAP for non-vector types in
AArch64.

Add a select-bswap.mir test, and add global isel checks to a couple of
existing tests in test/CodeGen/AArch64.

This doesn't handle every bswap case, since some of those rely on
known-bits analysis. This just lets us handle the naive case.

Differential Revision: https://reviews.llvm.org/D58081

llvm-svn: 353861
2019-02-12 17:28:17 +00:00
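A minimal source-level trigger (assumed, not from the added tests): the builtin lowers to llvm.bswap, which the IRTranslator now turns into G_BSWAP for GlobalISel to select.

    #include <cstdint>

    // __builtin_bswap64 becomes llvm.bswap.i64, now translated to G_BSWAP and
    // selected directly on AArch64 (a REV instruction).
    uint64_t byteSwap(uint64_t x) {
      return __builtin_bswap64(x);
    }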
Simon Pilgrim 5338f41ced [X86][AVX] Enable shuffle combining support for zero_extend
A more limited version of rL352997 that had to be disabled in rL353198 - allow extension of any 128/256/512-bit vector that at least uses byte-sized scalars.

llvm-svn: 353860
2019-02-12 17:22:35 +00:00
Matt Arsenault 00ccd13c73 AMDGPU/GlobalISel: Only make f16 constants legal on f16 targets
We could deal with it, but there's no real point.

llvm-svn: 353845
2019-02-12 14:54:55 +00:00
Craig Topper 7670ede434 [X86] Collapse FP_TO_INT16_IN_MEM/FP_TO_INT32_IN_MEM/FP_TO_INT64_IN_MEM into a single opcode using memory VT to distinguish. NFC
llvm-svn: 353798
2019-02-12 06:14:18 +00:00
Craig Topper d7303ecd0b [X86] Remove the value type operand from the floating point load/store MemIntrinsicSDNodes. Use the MemoryVT instead. NFCI
We already have the memory VT, so we can just match from that during isel.

llvm-svn: 353797
2019-02-12 06:14:16 +00:00
Matt Arsenault 18ec382698 GlobalISel: Implement moreElementsVector for implicit_def
llvm-svn: 353754
2019-02-11 22:00:39 +00:00
Craig Topper 75eb0af874 [X86] Correct the memory operand for the FLD emitted in FP_TO_INTHelper for 32-bit SSE targets.
We were using DstTy, but that represents the integer type we are converting
to, which is i64 in this case. The FLD is part of an intermediate step to get
from the SSE registers to the x87 registers. If the floating point type is
f32, the memory operand should reflect a 4-byte access, not an 8-byte access.
The store we use to get from SSE to the stack already uses the correct size.

While there, consistently use TheVT in place of Op.getOperand(0).getValueType()
throughout the function.

llvm-svn: 353745
2019-02-11 20:38:10 +00:00
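For orientation, a sketch of the conversion being lowered (on a 32-bit SSE target; the source is assumed, not from the commit):

    // f32 -> i64 on 32-bit SSE: the float is stored from an XMM register to a
    // 4-byte stack slot and reloaded with FLD for the x87 FIST sequence. The
    // bug was that the FLD's memory operand claimed an 8-byte (DstTy-sized)
    // access instead of the 4-byte f32 access.
    long long toI64(float f) {
      return (long long)f;
    }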
Jessica Paquette e57fe23f70 [AArch64][GlobalISel] Add isel support for a couple vector exts/truncs
Add support for

- v4s16 <-> v4s32
- v2s64 <-> v2s32

And update tests that use them to show that we generate the correct
instructions.

Differential Revision: https://reviews.llvm.org/D57832

llvm-svn: 353732
2019-02-11 18:56:39 +00:00
Roland Froese 732fe22454 [PowerPC] Avoid scalarization of vector truncate
The PowerPC code generator currently scalarizes vector truncates that would fit in a vector register, resulting in vector extracts, scalar operations, and vector merges. This patch custom lowers a vector truncate that would fit in a register to a vector shuffle instead.

Differential Revision: https://reviews.llvm.org/D56507

llvm-svn: 353724
2019-02-11 17:29:14 +00:00
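An illustrative truncate of this shape, written with Clang vector extensions (types and names assumed, not taken from the patch):

    typedef int   v4i32 __attribute__((vector_size(16)));
    typedef short v4i16 __attribute__((vector_size(8)));

    // Truncating four i32 lanes to four i16 lanes fits in a vector register,
    // so it can now lower to a single shuffle that picks the low half of each
    // element instead of extract/truncate/merge scalarization.
    v4i16 truncateLanes(v4i32 v) {
      return __builtin_convertvector(v, v4i16);
    }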
Jessica Paquette ebdb021031 [GlobalISel][AArch64] Select G_FFLOOR
This teaches the legalizer about G_FFLOOR, and lets us select G_FFLOOR in
AArch64.

It updates the existing floating point tests, and adds a select-floor.mir test.

Differential Revision: https://reviews.llvm.org/D57486

llvm-svn: 353722
2019-02-11 17:22:58 +00:00
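A trivial source-level example (assumed) that exercises the new path: floorf maps to llvm.floor.f32, which now becomes G_FFLOOR and selects on AArch64.

    #include <cmath>

    // llvm.floor.f32 -> G_FFLOOR -> AArch64 FRINTM under GlobalISel.
    float roundDown(float x) {
      return floorf(x);
    }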
Matt Arsenault 9dba67f431 GlobalISel: Add G_FCANONICALIZE instruction
llvm-svn: 353719
2019-02-11 17:05:20 +00:00
Benjamin Kramer 711950c116 Move some classes into anonymous namespaces. NFC.
llvm-svn: 353710
2019-02-11 15:16:21 +00:00
Benjamin Kramer 582c16013d [AMDGPU] Remove unused variable
llvm-svn: 353704
2019-02-11 14:49:54 +00:00
Neil Henning 8c10fa1a90 [AMDGPU] Fix DPP sequence in atomic optimizer.
This commit fixes the DPP sequence in the atomic optimizer (which was
previously missing the row_shr:3 step), and works around a read_register
exec bug by using a ballot instead.

Differential Revision: https://reviews.llvm.org/D57737

llvm-svn: 353703
2019-02-11 14:44:14 +00:00
Sam McCall e825ba9165 Revert "[X86][SSE] Generalize X86ISD::BLENDI support to more value types"
This reverts commit r353610.
It causes a miscompile visible in macro expansion in a bootstrapped clang.

http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20190211/626590.html

llvm-svn: 353699
2019-02-11 14:05:36 +00:00
Sam Parker 8ff143033a [ARM] Add v8m.base pattern for add negative imm
The v8m.base ISA contains movw, which can operate on an unsigned
16-bit value. Add a pattern that converts an add of a negative value
that fits into 16 bits when negated into a sub of the corresponding
positive value.

Differential Revision: https://reviews.llvm.org/D57942

llvm-svn: 353692
2019-02-11 11:35:42 +00:00
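A hypothetical example of the pattern (the constant is arbitrary; any negative immediate whose negation fits in 16 bits qualifies):

    // -40000 cannot be encoded as an add immediate on v8m.base, but 40000
    // fits movw's unsigned 16-bit field, so the add is rewritten as a
    // subtraction of the positive value.
    int adjust(int x) {
      return x + (-40000);
    }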
Valery Pykhtin ded96df01e [AMDGPU] Enable DPP combiner pass by default.
Related revisions: https://reviews.llvm.org/D55444, https://reviews.llvm.org/D55314

llvm-svn: 353691
2019-02-11 11:15:03 +00:00
Sjoerd Meijer 150ccb889e [ARM] LoadStoreOptimizer: reorder limit
The whole design of generating LDMs/STMs is fragile and unreliable: it depends
on rescheduling here in the LoadStoreOptimizer, which isn't register-pressure
aware, and on regalloc, which isn't aware of generating LDMs/STMs.
This patch adds a (hidden) option to control the total number of instructions
that can be reordered. I appreciate this looks only a tiny bit better than a
hard-coded constant, but at least it allows easier experimentation with
different values for now. Ideally we would calculate this reorder limit based
on some heuristics and take register pressure into account. I might look into
that next.

Differential Revision: https://reviews.llvm.org/D57954

llvm-svn: 353678
2019-02-11 09:37:42 +00:00
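For orientation, the kind of code the pass is trying to improve (a sketch, not from the patch): forming one LDM for consecutive loads may require reordering neighbouring instructions, and the new hidden option bounds how much of that reordering is attempted.

    // Three loads from consecutive addresses: merging them into a single LDM
    // can require the LoadStoreOptimizer to reorder intervening instructions,
    // which is the rescheduling the new limit constrains.
    int sum3(const int *p) {
      return p[0] + p[1] + p[2];
    }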
Sjoerd Meijer 0cc50c6b87 [ARM] LoadStoreOptimizer: just a clean-up. NFC.
Differential Revision: https://reviews.llvm.org/D57955

llvm-svn: 353670
2019-02-11 08:47:59 +00:00
Craig Topper 5b1beda001 [X86] Removed unused SDTypeProfile. NFC
llvm-svn: 353659
2019-02-11 07:30:48 +00:00
Simon Pilgrim f6e6c369c0 [X86] EltsFromConsecutiveLoads - replace SmallBitVector with APInt (NFC).
Minor refactor to simplify some incoming patches to improve broadcast loads.

llvm-svn: 353655
2019-02-10 22:45:48 +00:00
Sanjay Patel 833550fc74 [x86] narrow 256-bit horizontal ops via demanded elements
256-bit horizontal math ops are an x86 monstrosity (and thankfully have
not been extended to 512-bit AFAIK).

The two 128-bit halves operate on separate halves of the inputs. So if we
don't demand anything in the upper half of the result, we can extract the
low halves of the inputs, do the math, and then insert that result into a
256-bit output.

All of the extract/insert is free (ymm<-->xmm), so we're left with a
narrower (cheaper) version of the original op.

In the affected tests based on:
https://bugs.llvm.org/show_bug.cgi?id=33758
https://bugs.llvm.org/show_bug.cgi?id=38971
...we see that the h-op narrowing can result in further narrowing of other
math via existing generic transforms.

I originally drafted this patch as an exact pattern match starting from
extract_vector_elt, but I thought we might see diffs starting from
extract_subvector too, so I changed it to a more general demanded elements
solution. There are no extra existing regression test improvements from
that switch though, so we could go back.

Differential Revision: https://reviews.llvm.org/D57841

llvm-svn: 353641
2019-02-10 15:22:06 +00:00
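Roughly the shape of code affected, written with AVX intrinsics as an assumed illustration (the real tests come from the cited PRs): only the low element of a 256-bit horizontal add is demanded.

    #include <immintrin.h>

    // Only lane 0 of the 256-bit hadd result is used, so the op can be
    // narrowed: take the low 128-bit halves of the inputs, do a 128-bit hadd,
    // and read the scalar; the ymm<->xmm extract/insert is free.
    float lowLaneHadd(__m256 a, __m256 b) {
      return _mm_cvtss_f32(_mm256_castps256_ps128(_mm256_hadd_ps(a, b)));
    }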
Craig Topper f37ea96922 [X86] Move some vector InstAliases out from under unnecessary 'let Predicates'. NFCI
We don't have any assembler predicates for vector ISAs, so this isn't necessary. It just adds extra lines and indentation.

llvm-svn: 353631
2019-02-10 02:34:31 +00:00
Simon Pilgrim 6bf7b30b10 [X86] CombineOr - fold to generic funnel shifts
As discussed on D57389, this is a first step towards moving the SHLD/SHRD matching code to DAGCombiner using FSHL/FSHR instead.

There's a bit of work to do before I can do that, so this just folds to FSHL/FSHR in the existing code (handling the different SHRD/FSHR argument ordering), which fixes the issue we had with i16 shift amounts not being correctly masked.

llvm-svn: 353626
2019-02-09 20:34:59 +00:00
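A generic funnel-shift pattern of the kind involved (a sketch, not the original test; the i16 shift-amount masking issue mentioned above is what the fold fixes):

    // Left funnel shift: shift the concatenation hi:lo left by amt and keep
    // the high word. Guarding amt == 0 avoids UB and lets the compiler
    // recognize the idiom as FSHL, which X86 can then match to SHLD with the
    // amount correctly masked.
    unsigned funnelLeft(unsigned hi, unsigned lo, unsigned amt) {
      amt &= 31;
      return amt ? (hi << amt) | (lo >> (32 - amt)) : hi;
    }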
Simon Pilgrim 690a2889d8 [X86][SSE] Generalize X86ISD::BLENDI support to more value types
D42042 introduced the ability for the ExecutionDomainFixPass to more easily change between BLENDPD/BLENDPS/PBLENDW as the domain requires.

With this ability, we can avoid most bitcasts/scaling in the DAG that was occurring with X86ISD::BLENDI lowering/combining, blend the vXi32/vXi64 vectors directly, and use isel patterns to lower to the equivalent float-vector instructions.

This helps the shuffle combining and SimplifyDemandedVectorElts be more aggressive as we lose track of fewer UNDEF elements than when we go up/down through bitcasts.

I've introduced a basic blend(bitcast(x),bitcast(y)) -> bitcast(blend(x,y)) fold; there are more generalizations I can do there (e.g. widening/scaling and handling the tricky v16i16 repeated mask case).

The vector-reduce-smin/smax regressions will be fixed in a future improvement to SimplifyDemandedBits to peek through bitcasts and support X86ISD::BLENDV.

Differential Revision: https://reviews.llvm.org/D57888

llvm-svn: 353610
2019-02-09 13:13:59 +00:00
Stanislav Mekhanoshin 0e858b028d [AMDGPU] Split dot-insts feature
Differential Revision: https://reviews.llvm.org/D57971

llvm-svn: 353587
2019-02-09 00:34:21 +00:00
Craig Topper fcb63c4c6c [X86] Add FPCW as an implicit use on floating point load instructions.
These instructions can generate a stack overflow exception, so technically they read the stack overflow exception mask bit.

llvm-svn: 353564
2019-02-08 20:50:09 +00:00
Craig Topper 784929d045 Implementation of asm-goto support in LLVM
This patch accompanies the RFC posted here:
http://lists.llvm.org/pipermail/llvm-dev/2018-October/127239.html

This patch adds a new CallBr IR instruction to support asm-goto
inline assembly (GCC's extension, as used by the Linux kernel). This
instruction is both a call instruction and a terminator
instruction with multiple successors. Only inline assembly
usage is supported today.

This also adds a new INLINEASM_BR opcode to SelectionDAG and
MachineIR to represent an INLINEASM block that is also
considered a terminator instruction.

There will likely be more bug fixes and optimizations to follow
this, but we felt it had reached a point where we would like to
switch to an incremental development model.

Patch by Craig Topper, Alexander Ivchenko, Mikhail Dvoretckii

Differential Revision: https://reviews.llvm.org/D53765

llvm-svn: 353563
2019-02-08 20:48:56 +00:00
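The C-level construct this enables is GCC's asm-goto extension, widely used in the Linux kernel; a minimal sketch (the assembly itself is illustrative):

    // "asm goto" may transfer control to the listed C labels. In IR this is
    // now modelled by the CallBr instruction, and in SelectionDAG/MIR by the
    // INLINEASM_BR terminator.
    int check(void) {
      asm goto("testl %0, %0\n\t"
               "jnz %l1"
               : /* asm goto takes no outputs */
               : "r"(1)
               : "cc"
               : failed);
      return 0;
    failed:
      return 1;
    }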
Matt Arsenault 564f0f832c AMDGPU: Eliminate GPU specific SubtargetFeatures
Inline compatibility is determined from the individual feature
bits. These are just sets of the separate features, but will always be
treated as incompatible unless they are specifically ignored.

Defining the ISA version number here in tablegen would be nice, but it
turns out this wasn't actually used.

llvm-svn: 353558
2019-02-08 19:59:32 +00:00
Matt Arsenault d7047276ec AMDGPU: Remove GCN features and predicates
These are no longer necessary since the R600 tablegen files are split
out now.

llvm-svn: 353548
2019-02-08 19:18:01 +00:00
Craig Topper 41a1792b15 [X86] Remove isReMaterializable from X87 floating point constant loads and constant pool loads.
Summary: These instructions update FPSW, so they aren't generically safe to rematerialize into any location if FPSW is live for a comparison result. They also use FPCW for exception masking control, though the only exception they can generate is stack overflow, and we manage the stack ourselves so that's not really going to occur.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D57934

llvm-svn: 353536
2019-02-08 17:07:54 +00:00
Sanjay Patel e9cc26a56a [x86] fix formatting; NFC
(test commit #2 migrating to git)

llvm-svn: 353533
2019-02-08 16:48:40 +00:00
Carl Ritson 494b8ac95a [AMDGPU] Fix CS scratch setup on pre-GCN3 ASICs
Summary:
Prior to GCN3 s_load_dword offsets are in dwords rather than bytes.
Thus the scratch buffer descriptor offset must be adjusted for pre-GCN3 ASICs.

Reviewers: nhaehnle, tpr

Reviewed By: nhaehnle

Subscribers: sheredom, arsenm, kzhuravl, jvesely, wdng, yaxunl, dstuttard, t-tye, jfb, llvm-commits

Differential Revision: https://reviews.llvm.org/D56496

llvm-svn: 353530
2019-02-08 15:41:11 +00:00
Matt Arsenault b0a227049f AMDGPU/GlobalISel: Fix shift legalization for non-power-of-2
clampScalar doesn't do anything for non-power-of-2 types that are already in range.
There should probably be a combination rule to reduce the number
of matching rules.

llvm-svn: 353526
2019-02-08 15:06:24 +00:00
Dmitry Preobrazhensky 942c273d64 [AMDGPU][MC] Added support of lds_direct operand
See bug 39293: https://bugs.llvm.org/show_bug.cgi?id=39293

Reviewers: artem.tamazov, rampitec

Differential Revision: https://reviews.llvm.org/D57889

llvm-svn: 353524
2019-02-08 14:57:37 +00:00
Matt Arsenault 0f2debb1c2 AMDGPU/GlobalISel: Fix non-power-of-2 implicit_def
llvm-svn: 353522
2019-02-08 14:46:27 +00:00