Commit Graph

27051 Commits

Author SHA1 Message Date
Matt Arsenault dff33c38e1 AMDGPU/GlobalISel: RegBankSelect for fp conversions
llvm-svn: 349709
2018-12-20 00:37:02 +00:00
Matt Arsenault 36d4092173 AMDGPU/GlobalISel: Legality/regbankselect for atomicrmw/atomic_cmpxchg
llvm-svn: 349708
2018-12-20 00:33:49 +00:00
Rhys Perry 3931ad38b9 AMDGPU: Add patterns for v4i16/v4f16 -> v4i16/v4f16 bitcasts
Reviewers: arsenm, tstellar

Reviewed By: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D55058

llvm-svn: 349694
2018-12-19 22:53:33 +00:00
Sanjay Patel ca6434de37 [x86] add test to show ddup hole; NFC (PR37502)
llvm-svn: 349680
2018-12-19 20:35:28 +00:00
Jessica Paquette 3560e93dc1 [GlobalISel][AArch64] Add support for @llvm.ceil
This adds a G_FCEIL generic instruction and uses it in AArch64. This adds
selection for floating point ceil where it has a supported, dedicated
instruction. Other cases aren't handled here.

It updates the relevant gisel tests and adds a select-ceil test. It also adds a
check to arm64-vcvt.ll which ensures that we don't fall back when we run into
one of the relevant cases.

llvm-svn: 349664
2018-12-19 19:01:36 +00:00
Craig Topper 84a00bd98a [X86] Don't match TESTrr from (cmp (and X, Y), 0) during isel. Defer to post processing
The (cmp (and X, Y), 0) pattern is greedy and ends up forming a TESTrr and consuming the and when it might be better to use one of the BMI/TBM instructions like BLSR or BLSI.

This patch removes the pattern from isel and adds a post processing check to combine TESTrr+ANDrr into just a TESTrr. With this patch we are able to select the BMI/TBM instructions, but we'll also emit a TESTrr when the result is compared to 0. In many cases the peephole pass will be able to use optimizeCompareInstr to remove the TEST, but it's probably not perfect.
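
As a hedged illustration (not taken from the patch), the kind of input where BLSR is
preferable is one where the AND result is needed as a value and is also compared
against zero:

  define i32 @blsr_and_use(i32 %x) {
    %dec = add i32 %x, -1
    %and = and i32 %x, %dec                  ; x & (x - 1): clear lowest set bit (BLSR candidate)
    %cmp = icmp eq i32 %and, 0               ; zero check that previously became a greedy TESTrr
    %sel = select i1 %cmp, i32 -1, i32 %and  ; both the value and the compare result are used
    ret i32 %sel
  }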

Differential Revision: https://reviews.llvm.org/D55870

llvm-svn: 349661
2018-12-19 18:49:13 +00:00
Craig Topper 291470347a [X86] Fix assert fails in pass X86AvoidSFBPass
Fixes https://bugs.llvm.org/show_bug.cgi?id=38743

The function removeRedundantBlockingStores is supposed to remove any blocking stores contained in each other in BlockingStoresDispSizeMap.
But it currently looks only at the previous one, which misses some cases and results in an assert.

This patch refines the function to check all previous layouts until it finds an uncontained one, so all redundant stores will be removed.

Patch by Pengfei Wang

Differential Revision: https://reviews.llvm.org/D55642

llvm-svn: 349660
2018-12-19 18:45:57 +00:00
Simon Pilgrim 171f3aa012 [X86] Remove already upgraded llvm.x86.avx512.mask.padds/psubs tests
Duplicate tests have already been moved to avx512bw-intrinsics-upgrade.ll

llvm-svn: 349643
2018-12-19 17:18:27 +00:00
Yonghong Song 7b410ac352 [BPF] Generate BTF DebugInfo under BPF target
This patch implements BTF (BPF Type Format).
The BTF is the debug info format for BPF, introduced
in the below linux patch:
  69b693f0ae (diff-06fb1c8825f653d7e539058b72c83332)
and further extended several times, e.g.,
  https://www.spinics.net/lists/netdev/msg534640.html
  https://www.spinics.net/lists/netdev/msg538464.html
  https://www.spinics.net/lists/netdev/msg540246.html

The main advantage of implementing in LLVM is:
   . better integration/deployment as no extra tools are needed.
   . bpf JIT based compilation (like bcc, bpftrace, etc.) can get
     BTF without much extra effort.
   . BTF line_info needs selective source codes, which can be
     easily retrieved when inside the compiler.

This patch implemented BTF generation by registering a BPF
specific DebugHandler in BPFAsmPrinter.

Signed-off-by: Yonghong Song <yhs@fb.com>

Differential Revision: https://reviews.llvm.org/D55752

llvm-svn: 349640
2018-12-19 16:40:25 +00:00
Amy Kwan 22cd453ba7 Test commit
llvm-svn: 349633
2018-12-19 15:21:07 +00:00
Simon Pilgrim 7bfbf3caa4 [X86][SSE] Auto upgrade PADDUS/PSUBUS intrinsics to UADD_SAT/USUB_SAT generic intrinsics (llvm)
Now that we use the generic ISD opcodes, we can use the generic intrinsics directly as well. This fixes the poor fast-isel codegen by not expanding to an easily broken IR code sequence.
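
For reference, a minimal sketch of the generic intrinsics these auto-upgrade to
(vector width chosen arbitrarily):

  declare <16 x i8> @llvm.uadd.sat.v16i8(<16 x i8>, <16 x i8>)
  declare <16 x i8> @llvm.usub.sat.v16i8(<16 x i8>, <16 x i8>)

  define <16 x i8> @paddusb(<16 x i8> %a, <16 x i8> %b) {
    %r = call <16 x i8> @llvm.uadd.sat.v16i8(<16 x i8> %a, <16 x i8> %b)
    ret <16 x i8> %r
  }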

I'm intending to deal with the signed saturation equivalents as well.

Clang counterpart: https://reviews.llvm.org/D55879

Differential Revision: https://reviews.llvm.org/D55855

llvm-svn: 349630
2018-12-19 14:43:36 +00:00
Simon Pilgrim 2ae3a91656 [SelectionDAG] Optional handling of UNDEF elements in matchBinaryPredicate (part 2 of 2)
Now that SimplifyDemandedBits/SimplifyDemandedVectorElts is simplifying vector elements, we're seeing more constant BUILD_VECTOR containing undefs.

This patch provides opt-in support for UNDEF elements in matchBinaryPredicate, passing NULL instead of the result ConstantSDNode* argument.

I've updated the (or (and X, c1), c2) -> (and (or X, c2), c1|c2) fold to demonstrate its use, which I believe is safe for undef cases.
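
As a hedged scalar illustration of that fold (constants chosen arbitrarily):

  define i32 @or_and_consts(i32 %x) {
    %a = and i32 %x, 240   ; c1 = 0xF0
    %r = or i32 %a, 15     ; c2 = 0x0F
    ret i32 %r
  }

is equivalent to (x | 0x0F) & 0xFF, i.e. (and (or X, c2), c1|c2); with this patch an
UNDEF element in a vector constant no longer blocks the match.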

Differential Revision: https://reviews.llvm.org/D55822

llvm-svn: 349629
2018-12-19 14:09:38 +00:00
Simon Pilgrim 6c95bea072 [TargetLowering] Fix propagation of undefs in zero extension ops (PR40091)
As described on PR40091, we have several places where zext (and zext_vector_inreg) fold an undef input into an undef output. For zero extensions this is incorrect, as the output should at least be guaranteed to have the new upper bits set to zero.
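
A minimal IR illustration of why folding to plain undef would be wrong:

  define i32 @zext_undef() {
    %z = zext i16 undef to i32   ; any value in [0, 65535]; bits 31..16 must be zero
    ret i32 %z
  }

Folding %z to undef would incorrectly allow the upper 16 bits to be nonzero.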

SimplifyDemandedVectorElts is the worst offender (and it's the most likely to cause new undefs to appear) but DAGCombiner's tryToFoldExtendOfConstant has a similar issue.

Thanks to @dmgreen for catching this.

Differential Revision: https://reviews.llvm.org/D55883

llvm-svn: 349625
2018-12-19 13:37:59 +00:00
Simon Pilgrim ac62c8a3aa [X86][SSE] Remove use of SSE ADDS/SUBS saturation intrinsics from schedule/stack tests
These are due to be upgraded soon, but good to replace them with generic llvm sadd_sat/ssub_sat intrinsics now.

The avx512 masked cases need doing as well but require a bit of tidyup first.

llvm-svn: 349621
2018-12-19 12:00:25 +00:00
Nicolai Haehnle 8d5e974076 AMDGPU: Use an ABS32_LO relocation for SCRATCH_RSRC_DWORD1
Summary:
Using HI here makes no logical sense, since the dword is only
32 bits to begin with.

Current Mesa master does not look at the relocation type at all,
so this change is fine. Future Mesa will rely on this, however.

Change-Id: I91085707834c4ac0370926602b93c94b90e44cb1

Reviewers: arsenm, rampitec, mareko

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D55369

llvm-svn: 349620
2018-12-19 11:55:03 +00:00
Simon Pilgrim 2072b5afbe [SelectionDAG] Optional handling of UNDEF elements in matchUnaryPredicate
Now that SimplifyDemandedBits/SimplifyDemandedVectorElts are simplifying vector elements, we're seeing more constant BUILD_VECTOR containing UNDEFs.

This patch provides opt-in handling of UNDEF elements in matchUnaryPredicate, passing NULL instead of the ConstantSDNode* argument.

I've updated SelectionDAG::simplifyShift to demonstrate its use.

Differential Revision: https://reviews.llvm.org/D55819

llvm-svn: 349616
2018-12-19 10:41:06 +00:00
Simon Pilgrim d4b077698a [X86][SSE] Remove SSE ADDUS/SUBUS saturation intrinsics from schedule/stack tests
These are already being autoupgraded, currently to an IR sequence, but best to replace them with generic llvm uadd_sat/usub_sat intrinsics (which D55855 will be doing shortly anyhow).

The avx512 masked cases need doing as well but require a bit of tidyup first.

llvm-svn: 349615
2018-12-19 10:39:14 +00:00
Carl Ritson c521ac3a44 AMDGPU/InsertWaitcnts: Update VGPR/SGPR bounds when brackets are merged
Summary:
Fix an issue where VGPR/SGPR bounds are not properly extended when brackets are merged.
This manifests as missing waitcnt insertions when multiple brackets are forwarded to a successor block and the first forward has lower VGPR/SGPR bounds.

Irreducible loop test has been extended based on a CTS failure detected for GFX9.

Reviewers: nhaehnle

Reviewed By: nhaehnle

Subscribers: arsenm, kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, jfb, llvm-commits

Differential Revision: https://reviews.llvm.org/D55602

llvm-svn: 349611
2018-12-19 10:17:49 +00:00
Diana Picus 6c35a1e5af [ARM GlobalISel] Support G_CONSTANT for Thumb2
All we have to do is mark it as legal.

This allows us to select a lot of new patterns handled by TableGen. This
patch adds tests for them and splits up the existing test file for
binary operators into 2 files, one for arithmetic ops and one for
logical ones.

llvm-svn: 349610
2018-12-19 09:55:10 +00:00
Matt Arsenault b110e2277c AMDGPU/GlobalISel: Regbankselect for fsub
llvm-svn: 349608
2018-12-19 09:07:58 +00:00
Kewen Lin a6247e7cf4 [PowerPC]Exploit P9 vabsdu for unsigned vselect patterns
For types v4i32/v8i16/v16i8, do the following transforms (an equivalent IR form of the first is sketched after the list):
  (vselect (setcc a, b, setugt), (sub a, b), (sub b, a)) -> (vabsd a, b)
  (vselect (setcc a, b, setuge), (sub a, b), (sub b, a)) -> (vabsd a, b)
  (vselect (setcc a, b, setult), (sub b, a), (sub a, b)) -> (vabsd a, b)
  (vselect (setcc a, b, setule), (sub b, a), (sub a, b)) -> (vabsd a, b)
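
A minimal IR form of the first pattern above (the others only vary the predicate and
operand order), sketched here for illustration:

  define <4 x i32> @vabsd_ugt(<4 x i32> %a, <4 x i32> %b) {
    %cmp = icmp ugt <4 x i32> %a, %b
    %sub.ab = sub <4 x i32> %a, %b
    %sub.ba = sub <4 x i32> %b, %a
    %sel = select <4 x i1> %cmp, <4 x i32> %sub.ab, <4 x i32> %sub.ba
    ret <4 x i32> %sel
  }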

Differential Revision: https://reviews.llvm.org/D55812

llvm-svn: 349599
2018-12-19 03:04:07 +00:00
Pete Cooper f86db5ce9e Rewrite objc intrinsics to runtime methods in PreISelIntrinsicLowering instead of SDAG.
SelectionDAG currently changes these intrinsics to function calls, but that won't work
for other ISel's.  Also we want to eventually support nonlazybind and weak linkage coming
from the front-end which we can't do in SelectionDAG.

llvm-svn: 349552
2018-12-18 22:20:03 +00:00
Simon Pilgrim 73c8685295 [AARCH64] Added test case for PR40091
llvm-svn: 349543
2018-12-18 21:05:22 +00:00
Craig Topper 18a9d545e1 [X86] Add BSR to isUseDefConvertible.
We already had BSF here as part of __builtin_ffs improvements and I was just wondering yesterday whether we should have BSR there.

This addresses one issue from PR40090.

llvm-svn: 349531
2018-12-18 20:03:54 +00:00
Farhana Aleen 59ee2c5362 [AMDGPU] Removed the unnecessary operand size-check-assert from processBaseWithConstOffset().
Summary: 32bit operand sizes are guaranteed by the opcode check AMDGPU::V_ADD_I32_e64 and
         AMDGPU::V_ADDC_U32_e64. Therefore, we don't need any additional operand size-check assert.

Author: FarhanaAleen
llvm-svn: 349529
2018-12-18 19:58:39 +00:00
Nikita Popov f6058ff140 [X86] Use SADDSAT/SSUBSAT instead of ADDS/SUBS
Migrate the X86 backend from X86ISD opcodes ADDS and SUBS to generic
ISD opcodes SADDSAT and SSUBSAT. This also improves codegen for
@llvm.sadd.sat() and @llvm.ssub.sat() intrinsics.

This is a followup to D55787 and part of PR40056.

Differential Revision: https://reviews.llvm.org/D55833

llvm-svn: 349520
2018-12-18 18:28:22 +00:00
Craig Topper 20a6db5a84 [X86] Create PSUBUS from (add (umax X, C), -C)
InstCombine seems to canonicalize our PSUBUS pattern into a max with the constant and an add with the inverse of the constant.

This patch recognizes this pattern and turns it into PSUBUS. Future work could improve undef element handling.
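
As a hedged example (constant arbitrary), the canonical max-plus-add form for
usub.sat(x, 42) on v8i16 looks like:

  define <8 x i16> @usubsat_42(<8 x i16> %x) {
    %c = icmp ugt <8 x i16> %x, <i16 42, i16 42, i16 42, i16 42, i16 42, i16 42, i16 42, i16 42>
    %max = select <8 x i1> %c, <8 x i16> %x, <8 x i16> <i16 42, i16 42, i16 42, i16 42, i16 42, i16 42, i16 42, i16 42>
    %r = add <8 x i16> %max, <i16 -42, i16 -42, i16 -42, i16 -42, i16 -42, i16 -42, i16 -42, i16 -42>
    ret <8 x i16> %r
  }

If an element of %x is greater than 42 the result is that element minus 42, otherwise 0,
which is exactly what PSUBUSW computes.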

Fixes some of PR40053

Differential Revision: https://reviews.llvm.org/D55780

llvm-svn: 349519
2018-12-18 18:26:25 +00:00
Michael Berg c6a5245cf7 Add FMF management to common fp intrinsics in GlobalIsel
Summary: This is the initial code change to facilitate managing FMF flags from Instructions to MI wrt Intrinsics in Global Isel.  Eventually the GlobalObserver interface will be added as well, where FMF additions can be tracked for the builder and CSE.

Reviewers: aditya_nandakumar, bogner

Reviewed By: bogner

Subscribers: rovka, kristof.beyls, javed.absar

Differential Revision: https://reviews.llvm.org/D55668

llvm-svn: 349514
2018-12-18 17:54:52 +00:00
Simon Pilgrim 1411917431 [X86][SSE] Don't use 'sign bit select' vXi8 ROTL lowering for constant rotation amounts
Noticed by @spatel on D55747 - we get much better codegen if we use the regular shift expansion.

llvm-svn: 349510
2018-12-18 17:31:11 +00:00
Simon Pilgrim e9effe9744 [X86][SSE] Don't use 'sign bit select' vXi8 ROTL lowering for splat rotation amounts
Noticed by @spatel on D55747 - we get much better codegen if we use the regular shift expansion.

llvm-svn: 349500
2018-12-18 16:02:23 +00:00
Petar Avramovic 0a5e4eb776 [MIPS GlobalISel] Select G_SDIV, G_UDIV, G_SREM and G_UREM
Add support for s64 libcalls for G_SDIV, G_UDIV, G_SREM and G_UREM
and use integer type of correct size when creating arguments for
CLI.lowerCall.
Select G_SDIV, G_UDIV, G_SREM and G_UREM for types s8, s16, s32 and s64
on MIPS32.

Differential Revision: https://reviews.llvm.org/D55651

llvm-svn: 349499
2018-12-18 15:59:51 +00:00
Simon Pilgrim be0fbe673e [X86][SSE] Add shift combine 'out of range' tests with UNDEFs
Shows failure to simplify out of range shift amounts to UNDEF if any element is UNDEF.

llvm-svn: 349483
2018-12-18 13:37:04 +00:00
Nikita Popov 665ab08178 [X86] Use UADDSAT/USUBSAT instead of ADDUS/SUBUS
Replace the X86ISD opcodes ADDUS and SUBUS with generic ISD opcodes
UADDSAT and USUBSAT. As a side-effect, this also makes codegen for
the @llvm.uadd.sat and @llvm.usub.sat intrinsics reasonable.

This only replaces use in the X86 backend, and does not move any of
the ADDUS/SUBUS X86 specific combines into generic codegen.

Differential Revision: https://reviews.llvm.org/D55787

llvm-svn: 349481
2018-12-18 13:23:03 +00:00
Nikita Popov a7d2a235bb [SelectionDAG][X86] Fix [US](ADD|SUB)SAT vector legalization, add tests
Integer result promotion needs to use the scalar size, and we need
support for result widening.

This is in preparation for D55787.

llvm-svn: 349480
2018-12-18 13:22:53 +00:00
Petar Avramovic 150fd430f6 [MIPS GlobalISel] ClampScalar G_AND G_OR and G_XOR
Add narrowScalar for G_AND and G_XOR.
Legalize G_AND, G_OR and G_XOR for types other than s32
with clampScalar on MIPS32.

Differential Revision: https://reviews.llvm.org/D55362

llvm-svn: 349475
2018-12-18 11:36:14 +00:00
Luke Cheeseman f57d7d8237 [AArch64] - Return address signing dwarf support
- Reapply changes initially introduced in r343089
- The architecture info is no longer loaded whenever a DWARFContext is created
- The runtime libraries (sanitizers) make use of the dwarf context classes but
  do not initialise the target info
- The architecture of the object can be obtained without loading the target info
- Adding a method to the dwarf context to get this information and multiplex the
  string printing later on

Differential Revision: https://reviews.llvm.org/D55774

llvm-svn: 349472
2018-12-18 10:37:42 +00:00
Simon Pilgrim ba8e84b31c [X86][AVX] Add 256/512-bit vector funnel shift tests
Extra coverage for D55747

llvm-svn: 349471
2018-12-18 10:32:54 +00:00
Simon Pilgrim 46b90e851b [X86][SSE] Add 128-bit vector funnel shift tests
Extra coverage for D55747

llvm-svn: 349470
2018-12-18 10:08:23 +00:00
Matt Arsenault c94e26c71d AMDGPU: Legalize/regbankselect frame_index
llvm-svn: 349468
2018-12-18 09:46:13 +00:00
Matt Arsenault c0ea221068 AMDGPU: Legalize/regbankselect fma
llvm-svn: 349467
2018-12-18 09:39:56 +00:00
Simon Pilgrim af6fbbf18b [TargetLowering] Fallback from SimplifyDemandedVectorElts to SimplifyDemandedBits
For opcodes not covered by SimplifyDemandedVectorElts, SimplifyDemandedBits might be able to help now that it supports demanded elts as well.

llvm-svn: 349466
2018-12-18 09:33:25 +00:00
Matt Arsenault e01e7c81f2 AMDGPU/GlobalISel: Legalize/regbankselect fneg/fabs/fsub
llvm-svn: 349463
2018-12-18 09:19:03 +00:00
Simon Pilgrim 26c630f416 [X86][SSE] Replace (VSRLI (VSRAI X, Y), 31) -> (VSRLI X, 31) fold.
This fold was incredibly specific - replace with a SimplifyDemandedBits fold to remove a VSRAI if only the original sign bit is demanded (it's guaranteed to stay the same).

Test change is merely a rescheduling.

llvm-svn: 349459
2018-12-18 08:55:47 +00:00
Kristof Beyls e66bc1f756 Introduce control flow speculation tracking pass for AArch64
The pass implements tracking of control flow mis-speculation into a "taint"
register. That taint register can then be used to mask off registers with
sensitive data when executing under mis-speculation, a.k.a. "transient
execution".
This pass is aimed at mitigating SpectreV1-style vulnerabilities.

At the moment, it implements the tracking of mis-speculation of control
flow into a taint register, but doesn't yet implement a mechanism to then
use that taint register to mask off vulnerable data in registers (something
for a follow-on improvement). Possible strategies to mask out vulnerable
data that can be implemented on top of this are:
- speculative load hardening to automatically mask off data loaded
  in registers.
- using intrinsics to mask off data in registers as indicated by the
  programmer (see https://lwn.net/Articles/759423/).

For AArch64, the following implementation choices are made.
Some of these are different than the implementation choices made in
the similar pass implemented in X86SpeculativeLoadHardening.cpp, as
the instruction set characteristics result in different trade-offs.
- The speculation hardening is done after register allocation. With a
  relative abundance of registers, one register is reserved (X16) to be
  the taint register. X16 is expected to not clash with other register
  reservation mechanisms with very high probability because:
  . The AArch64 ABI doesn't guarantee X16 to be retained across any call.
  . The only way for a programmer to request that X16 be used is through
    inline assembly. In the rare case a function explicitly demands to
    use X16/W16, this pass falls back to hardening against speculation
    by inserting a DSB SYS/ISB barrier pair which will prevent control
    flow speculation.
- It is easy to insert mask operations at this late stage as we have
  mask operations available that don't set flags.
- The taint variable contains all-ones when no mis-speculation is detected,
  and contains all-zeros when mis-speculation is detected. Therefore, when
  masking, an AND instruction (which only changes the register to be masked,
  no other side effects) can easily be inserted anywhere that's needed.
- The tracking of mis-speculation is done by using a data-flow conditional
  select instruction (CSEL) to evaluate the flags that were also used to
  make conditional branch direction decisions. Speculation of the CSEL
  instruction can be limited with a CSDB instruction - so the combination of
  CSEL + a later CSDB gives the guarantee that the flags as used in the CSEL
  aren't speculated. When conditional branch direction gets mis-speculated,
  the semantics of the inserted CSEL instruction are such that the taint
  register will contain all zero bits.
  One key requirement for this to work is that the conditional branch is
  followed by an execution of the CSEL instruction, where the CSEL
  instruction needs to use the same flags status as the conditional branch.
  This means that the conditional branches must not be implemented as one
  of the AArch64 conditional branches that do not use the flags as input
  (CB(N)Z and TB(N)Z). This is implemented by ensuring that the instruction
  selectors do not produce these instructions when speculation hardening
  is enabled. This pass will assert if it does encounter such an instruction.
- On function call boundaries, the mis-speculation state is transferred from
  the taint register X16 to be encoded in the SP register as value 0.

Future extensions/improvements could be:
- Implement this functionality using full speculation barriers, akin to the
  x86-slh-lfence option. This may be more useful for the intrinsics-based
  approach than for the SLH approach to masking.
  Note that this pass already inserts the full speculation barriers if the
  function for some niche reason makes use of X16/W16.
- no indirect branch misprediction gets protected/instrumented; but this
  could be done for some indirect branches, such as switch jump tables.

Differential Revision: https://reviews.llvm.org/D54896

llvm-svn: 349456
2018-12-18 08:50:02 +00:00
Martin Storsjo 8f0cb9c3a8 [AArch64] [MinGW] Allow enabling SEH exceptions
The default still is dwarf, but SEH exceptions can now be enabled
optionally for the MinGW target.

Differential Revision: https://reviews.llvm.org/D55748

llvm-svn: 349451
2018-12-18 08:32:37 +00:00
Craig Topper 284d426f6d [X86] Add test cases to show isel failing to match BMI blsmsk/blsi/blsr when the flag result is used.
A similar thing happens with TBM instructions, which we already have tests for.

llvm-svn: 349450
2018-12-18 08:26:01 +00:00
Kewen Lin bbb461f758 [PowerPC][NFC]Update vabsd cases with vselect test cases
Power9 VABSDU* instructions can be exploited for some special vselect sequences.
Check in the original test case here; the later exploitation patch will update this
so reviewers can check the differences easily.

llvm-svn: 349446
2018-12-18 08:11:32 +00:00
Kewen Lin 44ace92596 [PowerPC] Exploit power9 new instruction setb
Check the expected patterns feeding SELECT_CC, like:
   (select_cc lhs, rhs,  1, (sext (setcc [lr]hs, [lr]hs, cc2)), cc1)
   (select_cc lhs, rhs, -1, (zext (setcc [lr]hs, [lr]hs, cc2)), cc1)
   (select_cc lhs, rhs,  0, (select_cc [lr]hs, [lr]hs,  1, -1, cc2), seteq)
   (select_cc lhs, rhs,  0, (select_cc [lr]hs, [lr]hs, -1,  1, cc2), seteq)
Further transform the sequence to comparison + setb if one of these patterns is hit.
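
One source pattern that can produce such a chain is a three-way compare; a hedged
IR sketch (not taken from the patch's tests):

  define i32 @threeway_cmp(i64 %a, i64 %b) {
    %lt = icmp slt i64 %a, %b
    %gt = icmp sgt i64 %a, %b
    %gt.ext = zext i1 %gt to i32
    %r = select i1 %lt, i32 -1, i32 %gt.ext   ; -1 / 0 / 1, a natural fit for setb
    ret i32 %r
  }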

Differential Revision: https://reviews.llvm.org/D53275

llvm-svn: 349445
2018-12-18 07:53:26 +00:00
QingShan Zhang ecdab5bdd8 [NFC] Add new test to cover the lhs scheduling issue for P9.
llvm-svn: 349443
2018-12-18 06:32:42 +00:00
Craig Topper 4adf9ca738 [X86] Add test case for PR40060. NFC
llvm-svn: 349441
2018-12-18 04:58:07 +00:00
QingShan Zhang f549812599 [NFC] fix test case issue with wrong label check.
llvm-svn: 349439
2018-12-18 04:25:41 +00:00
Kewen Lin 3dac1252da [PowerPC] Improve vec_abs on P9
Improve the current vec_abs support on P9, generate ISD::ABS node for vector types,
combine ABS node to VABSD node for some special cases to make use of P9 VABSD* insns,
do custom lowering to vsub(vneg later)+vmax if it has no combination opportunity.

Differential Revision: https://reviews.llvm.org/D54783

llvm-svn: 349437
2018-12-18 03:16:43 +00:00
Craig Topper 6a6f6109c4 [X86] Add baseline tests for D55780
This adds tests for (add (umax X, C), -C) as part of fixing PR40053

llvm-svn: 349416
2018-12-17 23:20:14 +00:00
Simon Pilgrim 7e2975a44c [X86][SSE] Improve immediate vector shift known bits handling.
Convert VSRAI to VSRLI if the sign bit is known zero and improve KnownBits output for all shift instructions.

Fixes the poor codegen comments in D55768.

llvm-svn: 349407
2018-12-17 22:09:47 +00:00
Wouter van Oortmerssen d3c544aa6e [WebAssembly] Fix assembler parsing of br_table.
Summary:
We use `variable_ops` in the tablegen defs to denote the list of
branch targets in `br_table`, but unlike other uses of `variable_ops`
(e.g. call) these branch targets need to actually be encoded in the
instruction. The existing tables for `variable_ops` cause no operands
to be accepted by the assembly matcher.

Following the example of ARM:
2cc0a7da87/lib/Target/ARM/ARMInstrInfo.td (L550-L555)
we introduce a new operand type to capture this list, and we use the
same {} syntax as ARM as well to differentiate them from regular
integer operands.

Also removed definition and use of TSFlags in tablegen defs, since
`br_table` now has a non-variable_ops immediate operand, so the
previous logic of only the variable_ops arguments being labels didn't
make sense anymore.

Reviewers: dschuff, aheejin, sunfish

Subscribers: javed.absar, sbc100, jgravelle-google, kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D55401

llvm-svn: 349405
2018-12-17 22:04:44 +00:00
Craig Topper 8c9d772991 [X86] Add T1MSKC and TZMSK to isDefConvertible used by optimizeCompareInstr.
These seem to have been missed when the other TBM instructions were added.

llvm-svn: 349404
2018-12-17 21:50:06 +00:00
Craig Topper 728cbc0378 Convert (CMP (srl/shl X, C), 0) to (CMP (and X, C'), 0) when only the zero flag is used.
This allows a TEST to be used and can be combined with any AND that may already exist as an input to the shift.
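
A worked example of the equivalence (illustrative only): for a 32-bit value,
(x >> 20) == 0 holds exactly when the top 12 bits of x are clear, so it can be checked
with a single TEST against the mask 0xFFF00000:

  define i1 @shift_is_zero(i32 %x) {
    %s = lshr i32 %x, 20
    %c = icmp eq i32 %s, 0   ; same as (x & 0xFFF00000) == 0
    ret i1 %c
  }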

This was already done in EmitTest, but was easily tricked by multiple uses because the setcc might be used by multiple instructions. Once the SETCC and users are legalized then we can look for the shift to be used by a single CMP, but the CMP itself can have multiple users.

This appears to fix the case in PR39968.

llvm-svn: 349385
2018-12-17 20:02:16 +00:00
Simon Pilgrim 9274f17a5e [TargetLowering] Add DemandedElts mask to SimplifyDemandedBits (PR40000)
This is an initial patch to add the necessary support for a DemandedElts argument to SimplifyDemandedBits, more closely matching computeKnownBits and to help improve vector codegen.

I've added only a small amount of the changes necessary to get at least one test to update - a lot more can be done but I'd like to add these methodically with proper test coverage; at the same time the hope is to slowly move some/all of SimplifyDemandedVectorElts into SimplifyDemandedBits as well.

Differential Revision: https://reviews.llvm.org/D55768

llvm-svn: 349374
2018-12-17 18:43:43 +00:00
Tim Northover 256a16d031 FastIsel: take care to update iterators when removing instructions.
We keep a few iterators into the basic block we're selecting while
performing FastISel. Usually this is fine, but occasionally code wants
to remove already-emitted instructions. When this happens we have to be
careful to update those iterators so they're not pointing at dangling
memory.

llvm-svn: 349365
2018-12-17 17:25:53 +00:00
Tim Northover ae3b66b7b0 ARM: use acquire/release instruction variants when available.
These features (fairly) recently got split out into their own feature, so we
should make CodeGen use them when available. The main change here is that the
check used to be based on the triple, but now it's based on CPU features.

llvm-svn: 349355
2018-12-17 15:05:32 +00:00
Simon Pilgrim 193429ea15 Regenerate test in prep for SimplifyDemandedBits improvements.
llvm-svn: 349350
2018-12-17 12:48:34 +00:00
Petar Avramovic b8276f2280 [MIPS GlobalISel] Lower G_UADDE and narrowScalar G_ADD
Lower G_UADDE and legalize G_ADD using narrowScalar on MIPS32.

Differential Revision: https://reviews.llvm.org/D54580

llvm-svn: 349346
2018-12-17 12:31:07 +00:00
Alexandros Lamprineas 490ae11717 [AArch64] Re-run load/store optimizer after aggressive tail duplication
The Load/Store Optimizer runs before Machine Block Placement. At O3 the
Tail Duplication Threshold is set to 4 instructions and this can create
new opportunities for the Load/Store Optimizer. It seems worthwhile to
run it once again.

llvm-svn: 349338
2018-12-17 10:45:43 +00:00
Craig Topper 792d4f130d [X86] Add test case for PR39968. NFC
llvm-svn: 349331
2018-12-17 07:51:17 +00:00
Kewen Lin c68ce89ae1 [Power9][NFC]update vabsd case for better dumping
Appended options -ppc-vsr-nums-as-vr and -ppc-asm-full-reg-names to get
more descriptive output. Also removed useless function attributes.

llvm-svn: 349329
2018-12-17 06:32:02 +00:00
Kewen Lin 3ee103085e [Power9][NFC]Make pre-inc-disable case more robust
Once a patch exploiting Power9 vabsd* insns is adopted, some CHECKs can't get the expected results.
But it's a false alarm; we should make the case more robust.

llvm-svn: 349325
2018-12-17 03:16:12 +00:00
Simon Pilgrim 0dea14f2f2 Regenerate test (merges X86+X64 cases). NFCI.
llvm-svn: 349317
2018-12-16 19:07:57 +00:00
Craig Topper 10f8892837 [X86] Remove truncation handling from EmitTest. Replace it with a DAG combine.
I'd like to try to move a lot of the flag matching out of EmitTest and push it to isel or isel preprocessing. This is a step towards that.

The test-shrink-bug.ll change is an improvement because we are no longer interfering with test shrink handling in isel.

The pr34137.ll change is a regression, but the IR came from -O0 and was not reduced by InstCombine. So it contains a lot of redundancies like duplicate loads that made it combine poorly.

llvm-svn: 349315
2018-12-16 18:35:55 +00:00
Craig Topper b0b9c54578 [X86] Autogenerate complete checks. NFC
llvm-svn: 349314
2018-12-16 18:35:54 +00:00
Sanjay Patel 13ac2f15b0 [x86] increment/decrement constant vector with min/max in vsetcc lowering (PR39859)
This is part of fixing PR39859:
https://bugs.llvm.org/show_bug.cgi?id=39859

We have a crippled vector ISA, so we have to invert a typical fold and create min/max here.

As discussed in the bug report, we can probably do better by using saturating subtract when 
it's available, but we should have this improvement for the min/max patterns regardless.

Alive proofs:
https://rise4fun.com/Alive/zsf
https://rise4fun.com/Alive/Qrl

Differential Revision: https://reviews.llvm.org/D55515

llvm-svn: 349304
2018-12-16 15:05:48 +00:00
Sanjay Patel f24900b934 [DAGCombiner] allow hoisting vector bitwise logic ahead of truncates
The transform performs a bitwise logic op in a wider type followed by
truncate when both inputs are truncated from the same source type:
logic_op (truncate x), (truncate y) --> truncate (logic_op x, y)
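
An illustrative IR instance of the transform (types chosen arbitrarily):

  define <4 x i16> @and_of_truncs(<4 x i32> %x, <4 x i32> %y) {
    %tx = trunc <4 x i32> %x to <4 x i16>
    %ty = trunc <4 x i32> %y to <4 x i16>
    %r = and <4 x i16> %tx, %ty
    ret <4 x i16> %r
  }

which can now be rewritten as a single <4 x i32> 'and' of %x and %y followed by one truncate.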

There are a bunch of other checks that should prevent doing this when 
it might be harmful.

We already do this transform for scalars in this spot. The vector 
limitation was shared with a check for the case when the operands are 
extended. I'm not sure if that limit is needed either, but that would 
be a separate patch.

Differential Revision: https://reviews.llvm.org/D55448

llvm-svn: 349303
2018-12-16 14:57:04 +00:00
Simon Pilgrim 0ef977b83d [SelectionDAG] Add FSHL/FSHR support to computeKnownBits
Also exposes an issue in DAGCombiner::visitFunnelShift where we were assuming the shift amount had the result type (after legalization it'll have the target's shift amount type).
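
A small hedged illustration of the new known-bits handling (not from the patch's tests):

  declare i32 @llvm.fshl.i32(i32, i32, i32)

  define i32 @fshl_known(i32 %x) {
    ; result = (1 << 8) | (x >> 24): bit 8 is known one, bits 31..9 are known zero,
    ; and only bits 7..0 remain unknown.
    %r = call i32 @llvm.fshl.i32(i32 1, i32 %x, i32 8)
    ret i32 %r
  }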

llvm-svn: 349298
2018-12-16 13:33:37 +00:00
Simon Pilgrim 780b3ca775 [X86] Add computeKnownBits tests for funnel shift intrinsics
llvm-svn: 349297
2018-12-16 12:15:31 +00:00
Craig Topper 392edb6223 [X86] Autogenerate complete checks. NFC
llvm-svn: 349287
2018-12-15 22:52:57 +00:00
Simon Pilgrim ef7b5949e5 [X86] Lower to SHLD/SHRD on slow machines for optsize
Use consistent rules for when to lower to SHLD/SHRD for slow machines - fixes a weird issue where funnel shift gets expanded but then X86ISelLowering's combineOr sees the optsize and combines to SHLD/SHRD, but now with the modulo amount guard.

llvm-svn: 349285
2018-12-15 19:43:44 +00:00
Simon Pilgrim 53c8b1b6f7 [X86] Add optsize SHLD/SHRD tests
llvm-svn: 349284
2018-12-15 19:32:26 +00:00
Dinar Temirbulatov 8c8724dd0d [CodeGen] Enhance machine PHIs optimization
Summary:
Make the machine PHIs optimization work for a single value register taken from
several different copies. This is the first step to fix PR38917. This change
allows us to get rid of redundant PHIs (see the opt_phis2.mir test) to make
the subsequent optimizations (like CSE) possible and simpler.

For instance, before this patch code like this:

  %b = COPY %z
  ...
  %a = PHI %bb1, %a; %bb2, %b

could be optimized to:

  %a = %b

but code like this:

  %c = COPY %z
  ...
  %b = COPY %z
  ...
  %a = PHI %bb1, %a; %bb2, %b; %bb3, %c

would remain unchanged.
With this patch the latter case will also be optimized to:

  %a = %z

Committed on behalf of: Anton Afanasyev anton.a.afanasyev@gmail.com

Reviewers: RKSimon, MatzeB

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D54839

llvm-svn: 349271
2018-12-15 14:37:01 +00:00
Simon Pilgrim bfbe510d4f Regenerate neon copy tests. NFCI.
llvm-svn: 349270
2018-12-15 14:23:18 +00:00
Simon Pilgrim 1e1fd9c761 [TargetLowering] Add ISD::OR + ISD::XOR handling to SimplifyDemandedVectorElts
Differential Revision: https://reviews.llvm.org/D55600

llvm-svn: 349264
2018-12-15 11:36:36 +00:00
Kewen Lin 3ac031bb8f [Power9][NFC] add setb exploitation test case
Add an original test case for setb before the exploitation actually takes effect; later we can check the difference.

Differential Revision: https://reviews.llvm.org/D55696

llvm-svn: 349251
2018-12-15 04:39:37 +00:00
Artem Belevich 6d74bd638a [NVPTX] Lower instructions that expand into libcalls.
The change is an effort to split and refactor abandoned
D34708 into smaller parts.

Here the behaviour of unsupported instructions is changed
to match the behaviour of explicit intrinsic calls.
Currently LLVM crashes with:
> Assertion getInstruction() && "Not a call or invoke instruction!" failed.

With this patch LLVM produces a more sensible error message:
> Cannot select: ... i32 = ExternalSymbol'__foobar'

Author: Denys Zariaiev <denys.zariaiev@gmail.com>

Differential Revision: https://reviews.llvm.org/D55145

llvm-svn: 349213
2018-12-14 23:53:06 +00:00
Krzysztof Parzyszek 26d994f56e [Hexagon] Add patterns for shifts of v2i16
This fixes https://llvm.org/PR39983.

llvm-svn: 349202
2018-12-14 22:33:48 +00:00
Volkan Keles 574d737e06 [GlobalISel] LegalizerHelper: Implement fewerElementsVector for G_LOAD/G_STORE
Reviewers: aemerson, dsanders, bogner, paquette, aditya_nandakumar

Reviewed By: dsanders

Subscribers: rovka, kristof.beyls, javed.absar, tschuett, llvm-commits

Differential Revision: https://reviews.llvm.org/D53728

llvm-svn: 349200
2018-12-14 22:11:20 +00:00
Farhana Aleen ce095c564a [AMDGPU] Promote constant offset to the immediate by finding a new base with 13bit constant offset from the nearby instructions.
Summary: Promote constant offset to immediate by recomputing the relative 13bit offset from nearby instructions.
 E.g.
  s_movk_i32 s0, 0x1800
  v_add_co_u32_e32 v0, vcc, s0, v2
  v_addc_co_u32_e32 v1, vcc, 0, v6, vcc

  s_movk_i32 s0, 0x1000
  v_add_co_u32_e32 v5, vcc, s0, v2
  v_addc_co_u32_e32 v6, vcc, 0, v6, vcc
  global_load_dwordx2 v[5:6], v[5:6], off
  global_load_dwordx2 v[0:1], v[0:1], off
  =>
  s_movk_i32 s0, 0x1000
  v_add_co_u32_e32 v5, vcc, s0, v2
  v_addc_co_u32_e32 v6, vcc, 0, v6, vcc
  global_load_dwordx2 v[5:6], v[5:6], off
  global_load_dwordx2 v[0:1], v[5:6], off offset:2048

Author: FarhanaAleen

Reviewed By: arsenm, rampitec

Subscribers: llvm-commits, AMDGPU

Differential Revision: https://reviews.llvm.org/D55539

llvm-svn: 349196
2018-12-14 21:13:14 +00:00
Sanjay Patel 7b776863ac [x86] add tests for extractelement of FP binops; NFC
llvm-svn: 349179
2018-12-14 19:15:54 +00:00
Sanjay Patel b7e2d6e493 [ARM] make test immune to scalarization improvements; NFC
llvm-svn: 349177
2018-12-14 18:47:04 +00:00
Sanjay Patel 4f4963b9cb [x86] make tests immune to scalarization improvements; NFC
llvm-svn: 349176
2018-12-14 18:44:16 +00:00
Daniel Sanders ec29eac5dd [globalisel][combiner] Fix r349167 for release mode bots
This test relies on -debug-only which is unavailable in non-asserts builds.

llvm-svn: 349174
2018-12-14 18:25:05 +00:00
Daniel Sanders 629db5d8e5 [globalisel][combiner] Make the CombinerChangeObserver a MachineFunction::Delegate
Summary:
This allows us to register it with the MachineFunction delegate and be
notified automatically about erasure and creation of instructions. However,
we still need explicit notification for modifications such as those caused
by setReg() or replaceRegWith().

There is a catch with this though. The notification for creation is
delivered before any operands can be added. While this is appropriate for
scheduling combiner work, it is unfortunate for debug output since an
opcode by itself doesn't provide sufficient information on what happened.
As a result, the work list remembers the instructions (when debug output is
requested) and emits a more complete dump later.

Another nit is that the MachineFunction::Delegate provides const pointers
which is inconvenient since we want to use it to schedule future
modification. To resolve this GISelWorkList now has an optional pointer to
the MachineFunction which describes the scope of the work it is permitted
to schedule. If a given MachineInstr* is in this function then it is
permitted to schedule work to be performed on the MachineInstr's. An
alternative to this would be to remove the const from the
MachineFunction::Delegate interface, however delegates are not permitted
to modify the MachineInstr's they receive.

In addition to this, the observer has three interface changes.
* erasedInstr() is now erasingInstr() to indicate it is about to be erased
  but still exists at the moment.
* changingInstr() and changedInstr() have been added to report changes
  before and after they are made. This allows us to trace the changes
  in the debug output.
* As a convenience changingAllUsesOfReg() and
  finishedChangingAllUsesOfReg() will report changingInstr() and
  changedInstr() for each use of a given register. This is primarily useful
  for changes caused by MachineRegisterInfo::replaceRegWith()

With this in place, both combine rules have been updated to report their
changes to the observer.

Finally, make some cosmetic changes to the debug output and make Combiner
and CombinerHelp

Reviewers: aditya_nandakumar, bogner, volkan, rtereshin, javed.absar

Reviewed By: aditya_nandakumar

Subscribers: mgorny, rovka, kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D52947

llvm-svn: 349167
2018-12-14 17:50:14 +00:00
Sanjay Patel 95f90ef3b3 [AArch64] make test immune to scalarization improvements; NFC
This is explicitly implementing what the comment says rather
than relying on the implicit zext of a constant operand.

llvm-svn: 349166
2018-12-14 17:44:07 +00:00
Sanjay Patel a44dc32708 [SystemZ] make test immune to scalarization improvements; NFC
The undef operands mean this test is probably still too fragile
to accomplish what the comments suggest.

llvm-svn: 349164
2018-12-14 17:28:52 +00:00
Sanjay Patel 25fc03c5c0 [Hexagon] make test immune to scalarization improvements; NFC
llvm-svn: 349163
2018-12-14 17:23:01 +00:00
Sanjay Patel 5a97a105f8 [x86] auto-generate complete checks; NFC
llvm-svn: 349162
2018-12-14 16:49:57 +00:00
Sanjay Patel 41e8112ed6 [x86] regenerate test checks; NFC
llvm-svn: 349161
2018-12-14 16:46:21 +00:00
Sanjay Patel b7d9f9117e [x86] make tests immune to scalarization improvements; NFC
llvm-svn: 349160
2018-12-14 16:44:58 +00:00
Scott Linder de6beb02a5 Implement -frecord-command-line (-frecord-gcc-switches)
Implement options in clang to enable recording the driver command-line
in an ELF section.

Implement a new special named metadata, llvm.commandline, to support
frontends embedding their command-line options in IR/ASM/ELF.
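
A hedged sketch of the embedded IR metadata, mirroring the llvm.ident layout (the
command-line string itself is illustrative):

  !llvm.commandline = !{!0}
  !0 = !{!"clang -O2 -frecord-command-line example.c"}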

This differs from the GCC implementation in some key ways:

* In GCC there is only one command-line possible per compilation-unit,
  in LLVM it mirrors llvm.ident and multiple are allowed.
* In GCC individual options are separated by NULL bytes, in LLVM entire
  command-lines are separated by NULL bytes. The advantage of the GCC
  approach is to clearly delineate options in the face of embedded
  spaces. The advantage of the LLVM approach is to support merging
  multiple command-lines unambiguously, while handling embedded spaces
  with escaping.

Differential Revision: https://reviews.llvm.org/D54487
Clang Differential Revision: https://reviews.llvm.org/D54489

llvm-svn: 349155
2018-12-14 15:38:15 +00:00
John Brawn 1d0d86ae40 [RegAllocGreedy] IMPLICIT_DEF values shouldn't prefer registers
It costs nothing to spill an IMPLICIT_DEF value (the only spill code that's
generated is a KILL of the value), so when creating split constraints if the
live-out value is IMPLICIT_DEF the exit constraint should be DontCare instead
of PrefReg.

Differential Revision: https://reviews.llvm.org/D55652

llvm-svn: 349151
2018-12-14 14:07:57 +00:00
Diana Picus 02c8343c75 [ARM GlobalISel] Thumb2: casts between int and ptr
Mark as legal and add tests. Nothing special to do.

llvm-svn: 349147
2018-12-14 13:45:38 +00:00
Diana Picus acca60b49e [ARM GlobalISel] Remove duplicate test. NFCI
Fixup for r349026. I forgot to delete these test functions from the
original file when I moved them to arm-legalize-exts.mir.

llvm-svn: 349146
2018-12-14 13:28:34 +00:00
Diana Picus 14dc3b2959 [ARM GlobalISel] Allow simple binary ops in Thumb2
Mark G_ADD, G_SUB, G_MUL, G_AND, G_OR and G_XOR as legal for both ARM
and Thumb2.

Extract the legalizer tests for these opcodes into another file.

Add tests for the instruction selector.

llvm-svn: 349142
2018-12-14 11:58:14 +00:00
Aakanksha Patil bc568766b2 Revert r348971: [AMDGPU] Support for "uniform-work-group-size" attribute
This patch breaks RADV (and probably RadeonSI as well)

llvm-svn: 349084
2018-12-13 21:23:12 +00:00
Matt Arsenault 934e534c47 AMDGPU/GlobalISel: Legalize/regbankselect block_addr
llvm-svn: 349081
2018-12-13 20:34:15 +00:00
Sanjay Patel 791ae69afe [DAGCombiner] after simplifying demanded elements of vector operand of extract, revisit the extract; 2nd try
This is a retry of rL349051 (reverted at rL349056). I changed the check for dead-ness from
number of uses to an opcode test for DELETED_NODE based on existing similar code.

Differential Revision: https://reviews.llvm.org/D55655

llvm-svn: 349058
2018-12-13 17:05:01 +00:00
Simon Pilgrim b5aaa673c6 [X86][SSE] Add SSE vector imm/var shift support to SimplifyDemandedVectorEltsForTargetNode
llvm-svn: 349057
2018-12-13 16:39:29 +00:00
Sanjay Patel c56f5728ee revert rL349051: [DAGCombiner] after simplifying demanded elements of vector operand of extract, revisit the extract
This causes an address sanitizer bot failure:
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/27187/steps/check-llvm%20asan/logs/stdio

llvm-svn: 349056
2018-12-13 16:32:44 +00:00
Simon Pilgrim b0b2f1503a [X86][SSE] Fix all remaining modulo vector rotation amounts (PR38243)
There's still a couple of minor SimplifyDemandedElts regressions in some of the shift amount splats that will be fixed in future patches.

llvm-svn: 349052
2018-12-13 15:50:31 +00:00
Sanjay Patel a7b115b392 [DAGCombiner] after simplifying demanded elements of vector operand of extract, revisit the extract
Differential Revision: https://reviews.llvm.org/D55655

llvm-svn: 349051
2018-12-13 15:44:26 +00:00
Simon Pilgrim ba91ff4a86 [X86][SSE] Fix modulo rotation amounts for v8i16/v16i16/v4i32 (PR38243)
llvm-svn: 349047
2018-12-13 15:23:09 +00:00
Daniel Cederman b5d284408e [Sparc] Use float register for integer constrained with "f" in inline asm
Summary:
Constraining an integer value to a floating point register using "f"
causes an llvm_unreachable to trigger. This patch allows i32 integers
to be placed in a single precision float register and i64 integers to
be placed in a double precision float register. This matches the behavior
of GCC.

For other types the llvm_unreachable is removed to instead trigger an
error message that points out the offending line.

Reviewers: jyknight, venkatra

Reviewed By: jyknight

Subscribers: eraman, fedor.sergeev, jrtc27, llvm-commits

Differential Revision: https://reviews.llvm.org/D51614

llvm-svn: 349045
2018-12-13 15:13:29 +00:00
Jonas Paulsson e79b1b986d [SystemZ] Pass copy-hinted regs first from getRegAllocationHints().
When computing register allocation hints for a GRX32Bit register, make sure
that any of the hinted registers that are also copy hints are returned first
in the list.

Review: Ulrich Weigand.
llvm-svn: 349037
2018-12-13 14:37:05 +00:00
Daniel Sanders 9f3cf55e63 [mir] Serialize DILocation inline when not possible to use a metadata reference
Summary:
Sometimes MIR-level passes create DILocations that were not present in the
LLVM-IR. For example, it may merge two DILocations together to produce a
DILocation that points to line 0.

Previously, the addresses of these DILocations were printed, which prevented the
MIR from being read back into LLVM. With this patch, DILocations will use
metadata references where possible and fall back on serializing them inline like so:
    MOV32mr %stack.0.x.addr, 1, _, 0, _, %0, debug-location !DILocation(line: 1, scope: !15)

Reviewers: aprantl, vsk, arphaman

Reviewed By: aprantl

Subscribers: probinson, llvm-commits

Tags: #debug-info

Differential Revision: https://reviews.llvm.org/D55243

llvm-svn: 349035
2018-12-13 14:25:27 +00:00
Simon Pilgrim 320fd7383f [X86][BWI] Don't custom lower vXi8 rotations.
We always expand to shifts anyhow - the test changes are just different scheduling.

llvm-svn: 349034
2018-12-13 13:44:33 +00:00
Chen Zheng cdbd5bef6d [NFC][PowerPC] add verify-machineinstrs check
After rL349029 and rL348566, sj-ctr-loop.ll is ok for verify-machineinstrs check.

llvm-svn: 349030
2018-12-13 12:55:42 +00:00
Chen Zheng 9c6fa536e0 [PowerPC] intrinsic llvm.eh.sjlj.setjmp should not have flag isBarrier.
Differential Revision: https://reviews.llvm.org/D55499

llvm-svn: 349029
2018-12-13 12:25:20 +00:00
Diana Picus 99cd644b6c [ARM GlobalISel] Support exts and truncs for Thumb2
Mark G_SEXT, G_ZEXT and G_ANYEXT to 32 bits as legal and add support for
them in the instruction selector. This uses handwritten code again
because the patterns that are generated with TableGen are tuned for what
the DAG combiner would produce and not for simple sext/zext nodes.
Luckily, we only need to update the opcodes to use the Thumb2 variants,
everything else can be reused from ARM.

llvm-svn: 349026
2018-12-13 12:06:54 +00:00
Alex Bradbury 919f5fb8ca [RISCV] Add support for the various RISC-V FMA instruction variants
Adds support for the various RISC-V FMA instructions (fmadd, fmsub, fnmsub, fnmadd).

The criterion for choosing whether a fused add or subtract is used, as well as
whether the product is negated or not, is whether some of the arguments to the
llvm.fma.* intrinsic are negated or not. In the tests, extraneous fadd
instructions were added to avoid the negation being performed using a xor
trick, which prevented the proper FMA forms from being selected and thus
tested.
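
A hedged IR sketch of the kind of input involved; with the product negated, this should
select fnmsub (subject to the xor-negation caveat mentioned above):

  declare double @llvm.fma.f64(double, double, double)

  define double @neg_product_fma(double %a, double %b, double %c) {
    %na = fsub double -0.000000e+00, %a   ; negate the first factor
    %r = call double @llvm.fma.f64(double %na, double %b, double %c)
    ret double %r
  }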

The FMA instruction patterns might seem incorrect (e.g., fnmadd: -rs1 * rs2 -
rs3), but they should be correct. The misleading names were inherited from
MIPS, where the negation happens after computing the sum.

The llvm.fmuladd.* intrinsics still do not generate RISC-V FMA instructions,
as that depends on TargetLowering::isFMAFasterthanFMulAndFAdd.

Some comments in the test files about what type of instructions are tested
there were updated, to better reflect the current content of those test
files.

Differential Revision: https://reviews.llvm.org/D54205
Patch by Luís Marques.

llvm-svn: 349023
2018-12-13 10:49:05 +00:00
Arnaud A. de Grandmaison dfe861087d [AArch64] Catch some more CMN opportunities.
Fixes https://bugs.llvm.org/show_bug.cgi?id=33486

llvm-svn: 349022
2018-12-13 10:31:32 +00:00
Clement Courbet 76f4ae1092 [CodeGen] Allow memcpy/memset to generate small overlapping stores.
Summary:
All targets either just return false here or properly model `Fast`, so I
don't think there is any reason to prevent CodeGen from doing the right
thing here.

Subscribers: nemanjai, javed.absar, eraman, jsji, llvm-commits

Differential Revision: https://reviews.llvm.org/D55365

llvm-svn: 349016
2018-12-13 09:56:19 +00:00
Matt Arsenault 577b9fc543 AMDGPU/GlobalISel: Legalize f64 fadd/fmul
llvm-svn: 349014
2018-12-13 08:27:48 +00:00
Matt Arsenault f38f483bef AMDGPU/GlobalISel: RegBankSelect some simple operations
llvm-svn: 349012
2018-12-13 08:23:51 +00:00
Matt Arsenault 7acf89a21a AMDGPU/GlobalISel: Test cleanups
Remove IR and registers sections

llvm-svn: 349011
2018-12-13 08:11:45 +00:00
Stanislav Mekhanoshin 6071e1aa58 [AMDGPU] Simplify negated condition
Optimize sequence:

  %sel = V_CNDMASK_B32_e64 0, 1, %cc
  %cmp = V_CMP_NE_U32 1, %1
  $vcc = S_AND_B64 $exec, %cmp
  S_CBRANCH_VCC[N]Z
=>
  $vcc = S_ANDN2_B64 $exec, %cc
  S_CBRANCH_VCC[N]Z

It is the negation pattern inserted by DAGCombiner::visitBRCOND() in the
rebuildSetCC().

Differential Revision: https://reviews.llvm.org/D55402

llvm-svn: 349003
2018-12-13 03:17:40 +00:00
Craig Topper d1c61861dd [X86] Don't emit MULX by default with BMI2
MULX has somewhat improved register allocation constraints compared to the legacy MUL instruction. Both output registers are encoded instead of fixed to EAX/EDX, but EDX is used as input. It also doesn't touch flags. Unfortunately, the encoding is longer.

Preferring it whenever BMI2 is enabled is probably not optimal. Choosing it should somehow be a function of register allocation constraints like converting adds to three address. gcc and icc definitely don't pick MULX by default. Not sure what if any rules they have for using it.

Differential Revision: https://reviews.llvm.org/D55565

llvm-svn: 348975
2018-12-12 21:21:31 +00:00
Craig Topper cd7d7ac0fd [X86] Move stack folding test for MULX to a MIR test. Add a MULX32 case as well
A future patch may stop using MULX by default so use MIR to ensure we're always testing MULX.

Add the 32-bit case that we couldn't do in the 64-bit mode IR test due to it being promoted to a 64-bit mul.

llvm-svn: 348972
2018-12-12 20:50:24 +00:00
Aakanksha Patil 729309cc89 [AMDGPU] Support for "uniform-work-group-size" attribute
Updated the annotate-kernel-features pass to support the propagation of the uniform-work-group attribute from the kernel to the called functions. Once this pass is run, all kernels, even the ones which initially did not have the attribute, will be able to indicate whether or not they have uniform work group size depending on the value of the attribute.

Differential Revision: https://reviews.llvm.org/D50200

llvm-svn: 348971
2018-12-12 20:49:17 +00:00
Simon Pilgrim 4a641efdc1 [X86] Added missing constant pool checks. NFCI.
So the extra checks in D55600 don't look like a regression.

llvm-svn: 348966
2018-12-12 19:56:38 +00:00
Scott Linder f5b36e56fb [AMDGPU] Emit MessagePack HSA Metadata for v3 code object
Continue to present HSA metadata as YAML in ASM and when output by tools
(e.g. llvm-readobj), but encode it in MessagePack in the code object.

Differential Revision: https://reviews.llvm.org/D48179

llvm-svn: 348963
2018-12-12 19:39:27 +00:00
Craig Topper 4937adf75f [X86] Emit SBB instead of SETCC_CARRY from LowerSELECT. Break false dependency on the SBB input.
I'm hoping we can just replace SETCC_CARRY with SBB. This is another step towards that.

I've explicitly used zero as the input to the setcc to avoid a false dependency that we've had with the SETCC_CARRY. I changed one of the patterns that used NEG to instead use an explicit compare with 0 on the LHS. We needed the zero anyway to avoid the false dependency. The negate would clobber its input register. By using a CMP we can avoid that which could be useful.

Differential Revision: https://reviews.llvm.org/D55414

llvm-svn: 348959
2018-12-12 19:20:21 +00:00
Simon Pilgrim 5864ab2dc0 [X86] Added missing constant pool checks. NFCI.
So the extra checks in D55600 don't look like a regression.

llvm-svn: 348956
2018-12-12 18:53:12 +00:00
Artem Belevich f802b9324a [NVPTX] do not rely on cached subtarget info.
If a module has function references, but no functions
themselves, we may end up never calling runOnMachineFunction
and therefore would never initialize the nvptxSubtarget field,
which would eventually cause a crash.

Instead of relying on nvptxSubtarget being initialized by
one of the methods, retrieve subtarget info directly.

Differential Revision: https://reviews.llvm.org/D55580

llvm-svn: 348952
2018-12-12 18:31:04 +00:00
Sanjay Patel 44eaa492b8 [x86] allow 8-bit adds to be promoted by convertToThreeAddress() to form LEA
This extends the code that handles 16-bit add promotion to form LEA to also allow 8-bit adds. 
That allows us to combine add ops with register moves and save some instructions. This is 
another step towards allowing add truncation in generic DAGCombiner (see D54640).

Differential Revision: https://reviews.llvm.org/D55494

llvm-svn: 348946
2018-12-12 17:58:27 +00:00
Neil Henning 76504a4c5e [AMDGPU] Extend the SI Load/Store optimizer to combine more things.
I've extended the load/store optimizer to be able to produce dwordx3
loads and stores. This change allows many more load/stores to be combined,
and results in much more optimal code for our hardware.

Differential Revision: https://reviews.llvm.org/D54042

llvm-svn: 348937
2018-12-12 16:15:21 +00:00
Simon Pilgrim f6c898e12f [TargetLowering] Add ISD::AND handling to SimplifyDemandedVectorElts
If either of the operand elements are zero then we know the result element is going to be zero (even if the other element is undef).

Differential Revision: https://reviews.llvm.org/D55558

llvm-svn: 348926
2018-12-12 13:43:07 +00:00
Simon Pilgrim 125d9b0907 Regenerate knownbits test. NFCI.
A future SimplifyDemandedBits patch will affect this code and I want to ensure the codegen diff is obvious.

llvm-svn: 348925
2018-12-12 13:21:03 +00:00
Piotr Sobczak 3732b4ce25 [AMDGPU] Set metadata access for explicit section
Summary:
This patch provides a means to set Metadata section kind
for a global variable, if its explicit section name is
prefixed with ".AMDGPU.metadata."
This could be useful to make the global variable go to
an ELF section without any section flags set.

Reviewers: dstuttard, tpr, kzhuravl, nhaehnle, t-tye

Reviewed By: dstuttard, kzhuravl

Subscribers: llvm-commits, arsenm, jvesely, wdng, yaxunl, t-tye

Differential Revision: https://reviews.llvm.org/D55267

llvm-svn: 348922
2018-12-12 11:20:04 +00:00
Diana Picus 59720b422a [ARM GlobalISel] Select load/store for Thumb2
Unfortunately we can't use TableGen for this because it doesn't yet
support predicates on the source pattern root. Therefore, add a bit of
handwritten code to the instruction selector to handle the most basic
cases.

Also mark them as legal and extract their legalizer test cases to a new
test file.

llvm-svn: 348920
2018-12-12 10:32:15 +00:00
Leonard Chan 118e53fd63 [Intrinsic] Signed Fixed Point Multiplication Intrinsic
Add an intrinsic that takes 2 signed integers with the scale of them provided
as the third argument and performs fixed point multiplication on them.

This is a part of implementing fixed point arithmetic in clang where some of
the more complex operations will be implemented as intrinsics.
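
A minimal usage sketch (the scale of 16 models a Q16.16 format; the scale operand must
be a constant):

  declare i32 @llvm.smul.fix.i32(i32, i32, i32)

  define i32 @fixed_point_mul(i32 %a, i32 %b) {
    %r = call i32 @llvm.smul.fix.i32(i32 %a, i32 %b, i32 16)
    ret i32 %r
  }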

Differential Revision: https://reviews.llvm.org/D54719

llvm-svn: 348912
2018-12-12 06:29:14 +00:00
Craig Topper 1fe466689b [X86] Combine vpmovdw+vpacksswb into vpmovdb.
This is similar to the combine we already have for vpmovdw+vpackuswb.

llvm-svn: 348910
2018-12-12 05:56:01 +00:00
Craig Topper 5b69b5e20a [X86] Add a few more fptosi test cases to demonstrate -x86-experimental-vector-widening legalization not combining vpacksswb+vpmovdw.
We are able to combine vpackuswb+vpmovdw, but we didn't have packsswb+vpmovdw at the time that combine was added.

llvm-svn: 348909
2018-12-12 05:55:59 +00:00
Craig Topper b51283bfd7 Fix incorrect imm operand assertion for SUB32ri in X86CondBrFolding::analyzeCompare
Summary:
When doing X86CondBrFolding::analyzeCompare, it can meet a SUB32ri instruction like the one below, which uses a global address for its operand,
  %733:gr32 = SUB32ri %62:gr32(tied-def 0), @img2buf_normal, implicit-def $eflags
  JNE_1 %bb.41, implicit $eflags

so the assertion "assert(MI.getOperand(ValueIndex).isImm() && "Expecting Imm operand")" is not correct and change the assert to if make X86CondBrFolding::analyzeCompare return false as not finding the compare for this

Patch by Jianping Chen

Reviewers: smaslov, LuoYuanke, liutianle, Jianping

Reviewed By: Jianping

Subscribers: lebedev.ri, llvm-commits

Differential Revision: https://reviews.llvm.org/D54250

llvm-svn: 348853
2018-12-11 15:32:14 +00:00
Clement Courbet 8b6434bbb9 Revert r348843 "[CodeGen] Allow memcpy/memset to generate small overlapping stores."
Breaks ARM/memcpy-inline.ll

llvm-svn: 348844
2018-12-11 13:38:43 +00:00
Clement Courbet 93b3445770 [CodeGen] Allow memcpy/memset to generate small overlapping stores.
Summary:
All targets either just return false here or properly model `Fast`, so I
don't think there is any reason to prevent CodeGen from doing the right
thing here.

Subscribers: nemanjai, javed.absar, eraman, jsji, llvm-commits

Differential Revision: https://reviews.llvm.org/D55365

llvm-svn: 348843
2018-12-11 13:15:56 +00:00
Simon Pilgrim f6371f5f23 [TargetLowering] Add ISD::EXTRACT_VECTOR_ELT support to SimplifyDemandedBits
Let SimplifyDemandedBits attempt to simplify all elements of a vector extraction.

Part of PR39689.

llvm-svn: 348839
2018-12-11 11:08:40 +00:00
Craig Topper 4bd93fa5bb [X86] Switch the 64-bit mulx schedule test to use inline assembly.
I'm not sure we should always prefer MULX over MUL. So making the MULX guaranteed with inline assembly.

llvm-svn: 348833
2018-12-11 07:41:06 +00:00
Heejin Ahn be5e5874f6 [WebAssembly] Add '.eventtype' directive support
Summary:
This patch supports `.eventtype` directive printing and parsing in the
same syntax with `.functype`.

Reviewers: aardappel, sbc100

Subscribers: dschuff, sbc100, jgravelle-google, sunfish, llvm-commits

Differential Revision: https://reviews.llvm.org/D55353

llvm-svn: 348818
2018-12-11 01:11:04 +00:00
Krzysztof Parzyszek 9f003f9262 [Hexagon] Couple of fixes in optimize addressing mode
- Check if an operand is an immediate before calling getImm. Some operands
  that take constant values can actually have global symbols or other
  constant expressions.
- When a load-constant instruction can be folded into users, make sure to
  only delete it when all users have been successfully converted.

llvm-svn: 348802
2018-12-10 21:56:04 +00:00
David Green bd72be0b44 [Targets] Fixup incorrect targets in codemodel tests
llvm-svn: 348796
2018-12-10 20:55:34 +00:00
Krzysztof Parzyszek c1b2d5905a Revert "[Hexagon] Check if operand is an immediate before getImm"
This reverts r348787. The patch wasn't quite correct.

llvm-svn: 348792
2018-12-10 19:30:08 +00:00
Amara Emerson 5ec146046c [GlobalISel] Restrict G_MERGE_VALUES capability and replace with new opcodes.
This patch restricts the capability of G_MERGE_VALUES, and uses the new
G_BUILD_VECTOR and G_CONCAT_VECTORS opcodes instead in the appropriate places.

This patch also includes AArch64 support for selecting G_BUILD_VECTOR of <4 x s32>
and <2 x s64> vectors.

Differential Revisions: https://reviews.llvm.org/D53629

llvm-svn: 348788
2018-12-10 18:44:58 +00:00
Krzysztof Parzyszek c6e9380a56 [Hexagon] Check if operand is an immediate before getImm
llvm-svn: 348787
2018-12-10 18:39:47 +00:00