Commit Graph

Craig Topper 69f8c1653d [ScalarizeMaskedMemIntrin] Use IRBuilder functions that take uint32_t/uint64_t for getelementptr, extractelement, and insertelement.
This saves needing to call getInt32 ourselves, making the code a little shorter.

The test changes are because insert/extract use getInt64 internally. Shouldn't be a functional issue.

This is cleanup because I plan to write similar code for expandload/compressstore.

llvm-svn: 355767
2019-03-09 02:08:41 +00:00
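
As a rough illustration of the change described above (the helper below is hypothetical, not code from the commit):

#include "llvm/IR/IRBuilder.h"

// Extract a lane from a vector value at a compile-time index.
static llvm::Value *extractLane(llvm::IRBuilder<> &Builder,
                                llvm::Value *Vec, unsigned Idx) {
  // Old style: build the index constant ourselves.
  //   return Builder.CreateExtractElement(Vec, Builder.getInt32(Idx));
  // New style: the integer overload builds the constant internally
  // (as an i64, which is why the test indices changed from i32 to i64).
  return Builder.CreateExtractElement(Vec, uint64_t(Idx));
}
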
Sanjay Patel f84083b4db [x86] scalarize extract element 0 of FP cmp
An extension of D58282 noted in PR39665:
https://bugs.llvm.org/show_bug.cgi?id=39665

This doesn't answer the request to use movmsk, but that's an
independent problem. We need this and probably still need
scalarization of FP selects because we can't do that as a
target-independent transform (although it seems likely that
targets besides x86 should have this transform).

llvm-svn: 355741
2019-03-08 21:54:41 +00:00
Sanjay Patel 43f098e719 [x86] add tests for extracted vector FP cmp; NFC
llvm-svn: 355727
2019-03-08 20:45:27 +00:00
Amaury Sechet 782ac933b5 [DAGCombiner] fold (add (add (xor a, -1), b), 1) -> (sub b, a)
Summary: This pattern is sometimes created after legalization.

Reviewers: efriedma, spatel, RKSimon, zvi, bkramer

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58874

llvm-svn: 355716
2019-03-08 19:39:32 +00:00
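
The fold rests on the two's-complement identity ~a == -a - 1, so ((a ^ -1) + b) + 1 == (b - a - 1) + 1 == b - a. A standalone sanity check of the identity (not part of the patch):

#include <cassert>
#include <cstdint>

int main() {
  // a ^ -1 is ~a, and ~a == -a - 1 in two's complement.
  for (int32_t a = -3; a <= 3; ++a)
    for (int32_t b = -3; b <= 3; ++b)
      assert(((a ^ -1) + b) + 1 == b - a);
  return 0;
}
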
Sanjay Patel b22f438df3 [x86] prevent infinite looping from inverse shuffle transforms
llvm-svn: 355713
2019-03-08 19:20:28 +00:00
Simon Pilgrim 53652feab7 [X86] Add test case for PR22473
llvm-svn: 355712
2019-03-08 19:16:26 +00:00
Simon Pilgrim 00ab0339ed Fix typo in constant vector
llvm-svn: 355699
2019-03-08 15:17:26 +00:00
Clement Courbet 8e16d73346 [SelectionDAG] Allow the user to specify a memeq function.
Summary:
Right now, when we encounter a string equality check,
e.g. `if (memcmp(a, b, s) == 0)`, we try to expand to a comparison if `s` is a
small compile-time constant, and otherwise fall back on calling `memcmp()`.

This is sub-optimal because memcmp has to compute much more than
equality.

This patch replaces `memcmp(a, b, s) == 0` by `bcmp(a, b, s) == 0` on platforms
that support `bcmp`.

`bcmp` can be made much more efficient than `memcmp` because equality
compare is trivially parallel while lexicographic ordering has a chain
dependency.

Subscribers: fedor.sergeev, jyknight, ckennelly, gchatelet, llvm-commits

Differential Revision: https://reviews.llvm.org/D56593

llvm-svn: 355672
2019-03-08 09:07:45 +00:00
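
In source terms the rewrite looks like this (a sketch of the concept, not the SelectionDAG code; bcmp is the POSIX function from strings.h):

#include <string.h>   // memcmp
#include <strings.h>  // bcmp (POSIX)

// Equality only needs a zero / non-zero answer...
bool equal_via_memcmp(const void *a, const void *b, size_t n) {
  return memcmp(a, b, n) == 0;  // computes full lexicographic order
}

// ...so on platforms that provide bcmp, the call can be replaced:
bool equal_via_bcmp(const void *a, const void *b, size_t n) {
  return bcmp(a, b, n) == 0;    // only equality is defined, cheaper
}
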
Craig Topper 4505c99e72 [X86] Improve the type checking in isLegalMaskedLoad and isLegalMaskedGather.
We were just checking pointer size and type primitive size. But this allowed unintended types, such as vectors of half, to be accepted by masked load/store.

For FP we now explicitly check for only double and float.

For pointers we now let any pointer through, trusting that only 32-bit and 64-bit pointers will be used to generate assembly.

We only check bitwidth after checking that the type is an integer.

llvm-svn: 355667
2019-03-08 07:33:43 +00:00
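
A sketch of the checking order described above (a hypothetical helper with assumed integer widths, not the actual X86 TTI code):

#include "llvm/IR/Type.h"

static bool isSupportedMaskedElementType(llvm::Type *Ty) {
  if (Ty->isFloatTy() || Ty->isDoubleTy())
    return true;                  // FP: explicitly only float/double
  if (Ty->isPointerTy())
    return true;                  // any pointer is let through
  if (!Ty->isIntegerTy())
    return false;                 // e.g. half is now rejected here
  unsigned W = Ty->getIntegerBitWidth();  // width checked only for ints
  return W == 8 || W == 16 || W == 32 || W == 64;
}
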
Sanjay Patel 5ed14ef1e4 [x86] add extract FP tests for target-specific nodes; NFC
llvm-svn: 355655
2019-03-07 23:55:54 +00:00
Vlad Tsyrklevich 2e1479e2f2 Delete x86_64 ShadowCallStack support
Summary:
ShadowCallStack on x86_64 suffered from the same racy security issues as
Return Flow Guard and had performance overhead as high as 13% depending
on the benchmark. x86_64 ShadowCallStack was always an experimental
feature and never shipped a runtime required to support it; as such,
there are no expected downstream users.

Reviewers: pcc

Reviewed By: pcc

Subscribers: mgorny, javed.absar, hiraditya, jdoerfert, cfe-commits, #sanitizers, llvm-commits

Tags: #clang, #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D59034

llvm-svn: 355624
2019-03-07 18:56:36 +00:00
Craig Topper 3acc4236b8 [X86] Enable combineFMinNumFMaxNum for 512 bit vectors when AVX512 is enabled.
Simplified by just checking if the vector type is legal rather than listing all combinations of types and features.

Fixes PR40984.

llvm-svn: 355582
2019-03-07 06:30:19 +00:00
Craig Topper a0dd6e9a08 [X86] Add 512-bit fminnum/maxnum test cases for PR40984. Also add v8f32 minnum/maxnum tests. NFC
llvm-svn: 355581
2019-03-07 05:56:52 +00:00
Nikita Popov e1012e1efb [X86] Add vector mulo with power of two operand tests; NFC
llvm-svn: 355544
2019-03-06 20:25:49 +00:00
Paul Robinson 05efe0fdc4 [PS4] Emit a trap after a stack-protector fail call.
llvm-svn: 355542
2019-03-06 19:57:43 +00:00
Simon Pilgrim cdf95f8f07 [DAGCombiner] Enable UADDO/USUBO vector combine support
Differential Revision: https://reviews.llvm.org/D58965

llvm-svn: 355517
2019-03-06 16:11:03 +00:00
Alexander Kornienko 3d467a890e Revert "[CodeGen] Omit range checks from jump tables when lowering switches with unreachable default"
This reverts commit 2a0f2c5ef3 (r355490).

The commit causes an assertion failure when compiling LLVM code:
$ cat repro.cpp
class QQQ {
public:
  bool x() const;
  bool y() const;
  unsigned getSizeInBits() const {
    if (y() || x())
      return getScalarSizeInBits();
    return getScalarSizeInBits() * 2;
  }
  unsigned getScalarSizeInBits() const;
};
int f(const QQQ &Ty) {
  switch (Ty.getSizeInBits()) {
    case 1:
    case 8:
      return 0;
    case 16:
      return 1;
    case 32:
      return 2;
    case 64:
      return 3;
    default:
      __builtin_unreachable();
  }
}
$ clang -O2 -o repro.o repro.cpp
assert.h assertion failed at llvm/include/llvm/ADT/ilist_iterator.h:139 in llvm::ilist_iterator::reference llvm::ilist_iterator<llvm::ilist_detail::node_options<llvm::MachineInstr, true, true, void>, true, false>::operator*() const [OptionsT = llvm::ilist_detail::node_options<llvm::MachineInstr, true, true, void>, IsReverse = true, IsConst = false]: !NodePtr->isKnownSentinel()
*** Check failure stack trace: ***
    @     0x558aab4afc10  __assert_fail
    @     0x558aa885479b  llvm::ilist_iterator<>::operator*()
    @     0x558aa8854715  llvm::MachineInstrBundleIterator<>::operator*()
    @     0x558aa92c33c3  llvm::X86InstrInfo::optimizeCompareInstr()
    @     0x558aa9a9c251  (anonymous namespace)::PeepholeOptimizer::optimizeCmpInstr()
    @     0x558aa9a9b371  (anonymous namespace)::PeepholeOptimizer::runOnMachineFunction()
    @     0x558aa99a4fc8  llvm::MachineFunctionPass::runOnFunction()
    @     0x558aab019fc4  llvm::FPPassManager::runOnFunction()
    @     0x558aab01a3a5  llvm::FPPassManager::runOnModule()
    @     0x558aab01aa9b  (anonymous namespace)::MPPassManager::runOnModule()
    @     0x558aab01a635  llvm::legacy::PassManagerImpl::run()
    @     0x558aab01afe1  llvm::legacy::PassManager::run()
    @     0x558aa5914769  (anonymous namespace)::EmitAssemblyHelper::EmitAssembly()
    @     0x558aa5910f44  clang::EmitBackendOutput()
    @     0x558aa5906135  clang::BackendConsumer::HandleTranslationUnit()
    @     0x558aa6d165ad  clang::ParseAST()
    @     0x558aa6a94e22  clang::ASTFrontendAction::ExecuteAction()
    @     0x558aa590255d  clang::CodeGenAction::ExecuteAction()
    @     0x558aa6a94840  clang::FrontendAction::Execute()
    @     0x558aa6a38cca  clang::CompilerInstance::ExecuteAction()
    @     0x558aa4e2294b  clang::ExecuteCompilerInvocation()
    @     0x558aa4df6200  cc1_main()
    @     0x558aa4e1b37f  ExecuteCC1Tool()
    @     0x558aa4e1a725  main
    @     0x7ff20d56abbd  __libc_start_main
    @     0x558aa4df51c9  _start

llvm-svn: 355515
2019-03-06 15:23:50 +00:00
Simon Pilgrim 1bdc2d1874 [DAGCombiner] Add SADDO/SSUBO combine support
Basic constant handling folds, for both scalars and vectors

Differential Revision: https://reviews.llvm.org/D58967

llvm-svn: 355506
2019-03-06 14:22:21 +00:00
Roman Lebedev 4764310505 [X86][NFC] Autogenerate check lines in cmovcmov.ll test
This test came up while investigating 8-bit cmov promotion.

llvm-svn: 355496
2019-03-06 11:47:43 +00:00
Simon Pilgrim 642f53d292 [DAGCombiner] Enable SMULO/UMULO vector combine support (PR40442)
Differential Revision: https://reviews.llvm.org/D58968

llvm-svn: 355495
2019-03-06 11:04:21 +00:00
Simon Pilgrim 468bb2e601 [X86][SSE] VSELECT(XOR(Cond,-1), LHS, RHS) --> VSELECT(Cond, RHS, LHS)
As noticed on D58965

DAGCombiner::visitSELECT has something similar, so we should be able to move this to DAGCombiner and support VSELECT as well at some point.

Differential Revision: https://reviews.llvm.org/D58974

llvm-svn: 355494
2019-03-06 10:54:43 +00:00
Ayonam Ray 2a0f2c5ef3 [CodeGen] Omit range checks from jump tables when lowering switches with unreachable default
During the lowering of a switch that results in the generation of a
jump table, a range check is performed before indexing into the jump
table to detect a switch value that lies outside the jump table range,
and a conditional branch to the default block is inserted. If the
default block is unreachable, this conditional branch can be omitted.
This patch implements omitting this conditional branch for unreachable
defaults.

Differential Revision: https://reviews.llvm.org/D52002
Reviewers: Hans Wennborg, Eli Friedman, Roman Lebedev

llvm-svn: 355490
2019-03-06 10:01:02 +00:00
Ayonam Ray af92b7a3b8 Reverting the commit of revision 355483 since it causes a regression in a newly added test.
llvm-svn: 355487
2019-03-06 07:51:28 +00:00
Craig Topper c0e01d29a4 [X86] Enable the add with 128 -> sub with -128 encoding trick with X86ISD::ADD when the carry flag isn't used.
This allows us to use an 8-bit sign extended immediate instead of a 16 or 32 bit immediate.

Also do similar for 0x80000000 with 64-bit adds to avoid having to use a movabsq.

llvm-svn: 355485
2019-03-06 07:36:38 +00:00
Craig Topper 97a1c4c340 [X86] Suppress load folding for add/sub with 128 immediate.
128 won't fit in a sign extended 8-bit immediate, but we can negate it to -128 and use the other operation. This results in a shorter encoding since the move would have used 16 or 32 bits for the immediate.

llvm-svn: 355484
2019-03-06 07:36:36 +00:00
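
Both commits above rely on the same wrapping identity: x + 128 == x - (-128), and -128 fits in a sign-extended 8-bit immediate while +128 does not. A standalone illustration (not compiler code):

#include <cassert>
#include <cstdint>

int main() {
  uint32_t x = 12345;
  // add eax, 128 needs a 16/32-bit immediate, but sub eax, -128
  // sign-extends an 8-bit immediate, giving a shorter encoding.
  assert(x + 128 == x - uint32_t(-128));

  // Same idea for 64-bit adds of 0x80000000: +0x80000000 doesn't fit a
  // sign-extended imm32 (forcing a movabsq), but -0x80000000 does.
  uint64_t y = 999;
  assert(y + 0x80000000ULL == y - uint64_t(-0x80000000LL));
  return 0;
}
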
Ayonam Ray 6025fa8e30 [CodeGen] Omit range checks from jump tables when lowering switches with unreachable default
During the lowering of a switch that results in the generation of a
jump table, a range check is performed before indexing into the jump
table to detect a switch value that lies outside the jump table range,
and a conditional branch to the default block is inserted. If the
default block is unreachable, this conditional branch can be omitted.
This patch implements omitting this conditional branch for unreachable
defaults.

Differential Revision: https://reviews.llvm.org/D52002
Reviewers: Hans Wennborg, Eli Friedman, Roman Lebedev

llvm-svn: 355483
2019-03-06 07:27:45 +00:00
Roman Lebedev 98d412ff13 [X86][NFC] Add proper test for promotion of i8 cmov's of trunc's
There was no proper test for that code in X86TargetLowering::LowerSELECT().
Noticed accidentally while trying to modify the last branch in that function.

llvm-svn: 355452
2019-03-05 22:43:53 +00:00
Roman Lebedev c38831e11d [NFC][CodeGen][X86][AArch64] Add tests for C++ std::midpoint() pattern (PR40965)
Tests only for integers, not floating point or pointers.

The scalar 8-bit case uses a branch instead of CMOV,
because there is no 8-bit CMOV.

Vector tests are for consistency, since the pattern can be vectorized.

https://bugs.llvm.org/show_bug.cgi?id=40965

llvm-svn: 355436
2019-03-05 20:18:47 +00:00
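
The C++20 pattern under test looks roughly like this (a guess at the shape of the tests, not their contents):

#include <cstdint>
#include <numeric>

// std::midpoint computes (a + b) / 2 without intermediate overflow,
// rounding toward the first argument.
int32_t mid32(int32_t a, int32_t b) { return std::midpoint(a, b); }

// The 8-bit case is the interesting one for codegen: with no 8-bit
// CMOV available, x86 currently emits a branch here.
uint8_t mid8(uint8_t a, uint8_t b) { return std::midpoint(a, b); }
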
Craig Topper 4a9dd7c39b [X86] Enable 8-bit SHL to convert to LEA
Differential Revision: https://reviews.llvm.org/D58870

llvm-svn: 355425
2019-03-05 18:37:41 +00:00
Craig Topper 216bf7f03b [X86] Allow 8-bit INC/DEC to be converted to LEA.
We already do this for 16/32/64 as well as 8-bit add with register/immediate. Might as well do it for 8-bit INC/DEC too.

Differential Revision: https://reviews.llvm.org/D58869

llvm-svn: 355424
2019-03-05 18:37:37 +00:00
Craig Topper 572e94ca02 [X86] Enable 8-bit OR with disjoint bits to convert to LEA
We already support 8-bit adds in convertToThreeAddress. But we can also support 8-bit OR if the bits are disjoint. We already do this for 16/32/64.

Differential Revision: https://reviews.llvm.org/D58863

llvm-svn: 355423
2019-03-05 18:37:33 +00:00
Simon Pilgrim 40441aa86a [X86][SSE] Regenerate vector zero tests
llvm-svn: 355412
2019-03-05 16:52:14 +00:00
Simon Pilgrim f011e53a78 [X86] Add SMULO/UMULO combine tests
Include scalar and vector test variants covering the folds in DAGCombiner (vector isn't currently supported - PR40442)

llvm-svn: 355407
2019-03-05 15:36:45 +00:00
Simon Pilgrim 65676571e1 Fix typo in constant vector
llvm-svn: 355405
2019-03-05 15:06:01 +00:00
Simon Pilgrim a3d06ccd5e [X86] Add SADDO/UADDO and SSUBO/USUBO combine tests
Include scalar and vector test variants covering the folds in DAGCombiner (vector isn't currently supported - PR40442)

llvm-svn: 355404
2019-03-05 14:52:42 +00:00
Simon Pilgrim 4d93b9c75c [X86] Add test cases for D58874
Add scalar and vector test cases for missing (add (add (xor a, -1), b), 1) -> (sub b, a) fold

llvm-svn: 355400
2019-03-05 13:52:09 +00:00
Craig Topper 509a8a3cf1 [DAGCombiner][X86][SystemZ][AArch64] Combine some cases of (bitcast (build_vector constants)) between legalize types and legalize dag.
This patch enables combining integer bitcasts of integer build vectors when the new scalar type is legal. I've avoided floating point because the implementation bitcasts float to int along the way and we would need to check the intermediate types for legality.

Differential Revision: https://reviews.llvm.org/D58884

llvm-svn: 355324
2019-03-04 19:12:16 +00:00
Simon Pilgrim eeb1144d27 [X86] Regenerate illegal type load test with non-undef load address.
This would be affected by an upcoming patch without undoing some of the bugpoint reduction.

llvm-svn: 355316
2019-03-04 14:49:02 +00:00
Jeremy Morse 09d8ea5282 [X86] Avoid codegen changes when DBG_VALUE appears between lowered selects
X86TargetLowering::EmitLoweredSelect presently detects sequences of CMOV pseudo
instructions without accounting for debug intrinsics. This leads to different
codegen with and without option -g, if a DBG_VALUE instruction lands in the
middle of several lowered selects.

Work around this by skipping over debug instructions when looking for CMOV
sequences, and sinking those debug insts into the EmitLoweredSelect sunk block.
This might slightly shift where variables appear in the instruction sequence,
but won't re-order assignments.

Differential Revision: https://reviews.llvm.org/D58672

llvm-svn: 355307
2019-03-04 10:56:02 +00:00
Craig Topper e9e4a0f5b4 [X86] Regenerate test to get the full FP operands printed. NFC
Missed when I updated the printer to print implicit %st operand on binops.

llvm-svn: 355295
2019-03-03 20:28:52 +00:00
Simon Pilgrim d8e91a54c0 [X86] getShuffleScalarElt - peek through insert/extract subvector nodes.
llvm-svn: 355288
2019-03-03 14:11:05 +00:00
Craig Topper ce68659772 [X86] Prefer VPBLENDD for v2i64/v4i64 blends with AVX2.
We were using VPBLENDW for v2i64 and VBLENDPD for v4i64. VPBLENDD has better throughput than VPBLENDW on some CPUs so it makes sense to use it when possible. VBLENDPD will probably become VPBLENDD during execution domain fixing, but we might as well use integer in isel while we can.

This should work around some issues with the domain fixing pass preferring PBLENDW when we start with PBLENDW. There may still be some v8i16 cases that could use PBLENDD.

llvm-svn: 355281
2019-03-03 00:18:07 +00:00
Amaury Sechet 31291a403c Add test case for add to sub transformation. NFC
llvm-svn: 355269
2019-03-02 14:28:59 +00:00
Amaury Sechet f24abf6511 [X86] Improve use of SHLD/SHRD
Summary:
This extends the variety of pattern that can generate a SHLD instead of using two shifts.

This fixes a regression that would be introduced by D57367 or D33587

Reviewers: RKSimon, craig.topper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D57389

llvm-svn: 355260
2019-03-02 02:44:16 +00:00
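
SHLD performs a double-precision shift: the high operand is shifted left while the vacated bits are filled from the top of the low operand. One source pattern that should map to a single SHLD (a sketch, assuming an in-range non-zero shift amount):

#include <cstdint>

uint64_t shld(uint64_t hi, uint64_t lo, unsigned c) {
  // Caller must guarantee 0 < c < 64; a shift by (64 - c) == 64
  // would be undefined behavior in C++.
  return (hi << c) | (lo >> (64 - c));
}
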
Amaury Sechet 1cc0f6061f Add test case for truncate funnel shifts. NFC
llvm-svn: 355258
2019-03-02 02:24:36 +00:00
Simon Pilgrim 1a059e6619 [X86] Regenerate legalize test files
Noticed while getting update_mir_test_checks.py to work on python3

llvm-svn: 355198
2019-03-01 13:13:40 +00:00
Sanjay Patel 7fc6ef7dd7 [x86] scalarize extract element 0 of FP math
This is another step towards ensuring that we produce the optimal code for reductions,
but there are other potential benefits as seen in the tests diffs:

  1. Memory loads may get scalarized resulting in more efficient code.
  2. Memory stores may get scalarized resulting in more efficient code.
  3. Complex ops like fdiv/sqrt get scalarized which may be faster instructions depending on uarch.
  4. Even simple ops like addss/subss/mulss/roundss may result in faster operation/less frequency throttling when scalarized depending on uarch.

The TODO comment suggests 1 or more follow-ups for opcodes that can currently result in regressions.

Differential Revision: https://reviews.llvm.org/D58282

llvm-svn: 355130
2019-02-28 19:47:04 +00:00
Craig Topper 8b1703fc1d [X86] Add test case that was supposed to go with r355116.
llvm-svn: 355117
2019-02-28 18:50:16 +00:00
Simon Pilgrim 87aeff8bbb [X86][AVX] Fold v4f64 concat_vectors(movddup(x),movddup(x)) -> broadcast(x)
llvm-svn: 355078
2019-02-28 10:53:58 +00:00
Simon Pilgrim 71bb6850cf [X86][AVX] Only combine loads to broadcasts for legal types
Thanks to @echristo for spotting this.

llvm-svn: 354961
2019-02-27 11:17:25 +00:00
Reid Kleckner 8fda7e15e6 [X86] Fix bug in vectorcall calling convention
The original implementation couldn't correctly handle __m256 and __m512 types
passed by reference through the stack. This patch fixes it.

Patch by Wei Xiao!

Differential Revision: https://reviews.llvm.org/D57643

llvm-svn: 354921
2019-02-26 19:48:16 +00:00
Ganesh Gopalasubramanian e172d7008d [X86] AMD znver2 enablement
This patch enables the following

1) AMD family 17h "znver2" tune flag (-march, -mcpu).
2) ISAs that are enabled for "znver2" architecture.
3) For the time being, it uses the znver1 scheduler model.
4) Tests are updated.
5) Scheduler descriptions are yet to be put in place.

Reviewers: craig.topper

Differential Revision: https://reviews.llvm.org/D58343

llvm-svn: 354897
2019-02-26 16:55:10 +00:00
Nirav Dave 582d46328c [DAG] Fix constant store folding to handle non-byte sizes.
Avoid crashes from zero-byte values due to sub-byte store sizes.

Reviewers: uabelho, courbet, rnk

Reviewed By: courbet

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58626

llvm-svn: 354884
2019-02-26 15:02:32 +00:00
Simon Pilgrim 810fa04ac7 [LegalizeDAG] Expand SADDO/SSUBO using SADDSAT/SSUBSAT (PR37763)
If SADDSAT/SSUBSAT are legal, then we can expand SADDO/SSUBO by performing an ADD/SUB and a SADDSAT/SSUBSAT and then comparing the results.

I looked at doing this for UADDO/USUBO as well, but since we don't have to do as many range comparisons there, I didn't see much benefit.

Differential Revision: https://reviews.llvm.org/D58637

llvm-svn: 354866
2019-02-26 11:27:53 +00:00
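
Restated in scalar form (a sketch; the patch itself operates on SelectionDAG nodes), signed addition overflowed exactly when the wrapped sum differs from the saturated sum:

#include <cstdint>

// Wrapped (modular) signed add, as a plain ADD computes it.
int32_t add_wrap(int32_t a, int32_t b) {
  return (int32_t)((uint32_t)a + (uint32_t)b);
}

// Saturating signed add, as SADDSAT computes it.
int32_t add_sat(int32_t a, int32_t b) {
  int64_t s = (int64_t)a + (int64_t)b;
  if (s > INT32_MAX) return INT32_MAX;
  if (s < INT32_MIN) return INT32_MIN;
  return (int32_t)s;
}

// SADDO expansion: overflow <=> wrapped result != saturated result.
bool saddo(int32_t a, int32_t b, int32_t &result) {
  result = add_wrap(a, b);
  return result != add_sat(a, b);
}
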
Reid Kleckner 2f055f026a [X86] Fix bug in x86_intrcc with arg copy elision
Summary:
Use a custom calling convention handler for interrupts instead of fixing
up the locations in LowerMemArgument. This way, the offsets are correct
when constructed and we don't need to account for them in as many
places.

Depends on D56883

Replaces D56275

Reviewers: craig.topper, phil-opp

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D56944

llvm-svn: 354837
2019-02-26 02:11:25 +00:00
Craig Topper 316c58e8f1 [X86] Improve detection of unneeded shift amount masking to also handle the case that the LHS has known zeroes in it
If the LHS has known zeros, the RHS immediate will have had bits removed. So call computeKnownBits to get the known zeroes so we can handle this case.

Differential Revision: https://reviews.llvm.org/D58475

llvm-svn: 354811
2019-02-25 19:42:47 +00:00
Simon Pilgrim 28441ac75f [DAGCombine] Add undef shuffle elt support to partitionShuffleOfConcats
Support undef shuffle mask indices in the shuffle(concat_vectors, concat_vectors) -> concat_vectors fold

Differential Revision: https://reviews.llvm.org/D58585

llvm-svn: 354793
2019-02-25 16:02:01 +00:00
Dmitri Gribenko 751c5fbf6a Fixed typos in tests: s/CEHCK/CHECK/
Reviewers: ilya-biryukov

Subscribers: sanjoy, sdardis, javed.absar, jrtc27, atanasyan, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58608

llvm-svn: 354781
2019-02-25 13:12:33 +00:00
Simon Pilgrim c61f1e8e6c [X86] Merge ISD::ADD/SUB nodes into X86ISD::ADD/SUB equivalents (PR40483)
Avoid ADD/SUB instruction duplication by reusing the X86ISD::ADD/SUB results.

Includes ADD commutation - I tried to include NEG+SUB commutation as well but this causes regressions as we don't have good combine coverage to simplify X86ISD::SUB.

Differential Revision: https://reviews.llvm.org/D58597

llvm-svn: 354771
2019-02-25 11:19:37 +00:00
Simon Pilgrim f43c48cb52 [X86] Add PR40483 test cases
Demonstrate failure to merge ISD::ADD(x,y)/X86ISD::ADD(x,y) + ISD::SUB(x,y)/X86ISD::SUB(x,y) equivalent ops

llvm-svn: 354758
2019-02-24 21:13:29 +00:00
Simon Pilgrim cfaf663a35 [X86] Combine zext(packus(x),packus(y)) -> concat(x,y) (PR39637)
It's proving tricky to combine shuffles across multiple vector sizes, so for now I'm adding this more specific combine - the pattern is common enough to be worth it as a first step.

llvm-svn: 354757
2019-02-24 19:57:52 +00:00
Craig Topper 3fe4bd464c [X86] Fix tls variable lowering issue with large code model
Summary:
The problem here is the lowering of TLS variables. Below is the DAG for the code.
SelectionDAG has 11 nodes:

t0: ch = EntryToken
      t8: i64,ch = load<(load 8 from `i8 addrspace(257)* null`, addrspace 257)> t0, Constant:i64<0>, undef:i64
        t10: i64 = X86ISD::WrapperRIP TargetGlobalTLSAddress:i64<i32* @x> 0 [TF=10]
      t11: i64,ch = load<(load 8 from got)> t0, t10, undef:i64
    t12: i64 = add t8, t11
  t4: i32,ch = load<(dereferenceable load 4 from @x)> t0, t12, undef:i64
t6: ch = CopyToReg t0, Register:i32 %0, t4
And when mcmodel is large, the instruction below can NOT be folded.

  t10: i64 = X86ISD::WrapperRIP TargetGlobalTLSAddress:i64<i32* @x> 0 [TF=10]
t11: i64,ch = load<(load 8 from got)> t0, t10, undef:i64
So "t11: i64,ch = load<(load 8 from got)> t0, t10, undef:i64" is lowered to " Morphed node: t11: i64,ch = MOV64rm<Mem:(load 8 from got)> t10, TargetConstant:i8<1>, Register:i64 $noreg, TargetConstant:i32<0>, Register:i32 $noreg, t0"

When llvm starts to lower "t10: i64 = X86ISD::WrapperRIP TargetGlobalTLSAddress:i64<i32* @x> 0 [TF=10]", it fails.

The patch is to fold the load and X86ISD::WrapperRIP.

Fixes PR26906

Patch by LuoYuanke

Reviewers: craig.topper, rnk, annita.zhang, wxiao3

Reviewed By: rnk

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58336

llvm-svn: 354756
2019-02-24 19:33:37 +00:00
Craig Topper 5532a98737 [X86][SSE] Use pblendw for v4i32/v2i64 during isel.
Summary:

Previously we used BLENDPS/BLENDPD but that puts the blend in the FP domain. Under optsize, the two address instruction pass can cause blendps/blendpd to commute to movss/movsd. But we probably shouldn't do that if the original type was an integer. So use pblendw instead.

Reviewers: spatel, RKSimon

Reviewed By: RKSimon

Subscribers: jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58574

llvm-svn: 354755
2019-02-24 19:23:41 +00:00
Craig Topper be3348573e [LegalizeTypes][AArch64][X86] Make type legalization of vector (S/U)ADD/SUB/MULO follow getSetCCResultType for the overflow bits. Make UnrollVectorOverflowOp properly convert from scalar boolean contents to vector boolean contents
Summary:
When promoting the overflow vector for these ops we should use the target's desired setcc result type. This way a v8i32 result type will use a v8i32 overflow vector instead of a v8i16 overflow vector. A v8i16 overflow vector will cause LegalizeDAG/LegalizeVectorOps to have to use v8i32 and truncate to v8i16 in its expansion. By doing this in type legalization instead, we get the truncate into the DAG earlier and give DAG combine more of a chance to optimize it.

We also have to fix unrolling to use the scalar setcc result type for the scalarized operation, and convert it to the required vector element type after the scalar operation. We have to observe the vector boolean contents when doing this conversion. The previous code was just taking the scalar result and putting it in the vector. But for X86 and AArch64 that would have put the boolean value only in bit 0 of the element and left all other bits of the element zero. We need to ensure all bits in the element are the same. I'm using a select with constants here because that's what setcc unrolling in LegalizeVectorOps used.

Reviewers: spatel, RKSimon, nikic

Reviewed By: nikic

Subscribers: javed.absar, kristof.beyls, dmgreen, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58567

llvm-svn: 354753
2019-02-24 19:23:36 +00:00
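
The "select with constants" conversion amounts to widening the scalar boolean into an element whose bits all agree, e.g. for a target whose vector boolean content is all-ones (a scalar restatement, assuming 32-bit elements):

#include <cstdint>

// Expand a scalar overflow bit into a vector-element mask whose bits
// are all equal: 0xFFFFFFFF for true, 0x0 for false. Just storing the
// bool would set only bit 0 and leave the other bits zero.
uint32_t boolToLaneMask(bool ovf) {
  return ovf ? 0xFFFFFFFFu : 0u;  // select(ovf, all-ones, zero)
}
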
Sanjay Patel cb04ba032f [CGP] add special-cases to form unsigned add with overflow (PR40486)
There's likely a missed IR canonicalization for at least 1 of these
patterns. Otherwise, we wouldn't have needed the pattern-matching
enhancement in D57516.

Note that -- unlike usubo added with D57789 -- the TLI hook for
this transform defaults to 'on'. So if there's any perf fallout
from this, targets should look at how they're lowering the uaddo
node in SDAG and/or override that hook.

The x86 diffs suggest that there's some missing pattern-matching
for forming inc/dec.

This should fix the remaining known problems in:
https://bugs.llvm.org/show_bug.cgi?id=40486
https://bugs.llvm.org/show_bug.cgi?id=31754

llvm-svn: 354746
2019-02-24 15:31:27 +00:00
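
The classic source shape that CGP can now fuse into a single uaddo node (a sketch of the pattern, not the CGP matching code):

#include <cstdint>

// a + b wraps (unsigned overflow) exactly when the sum is smaller
// than either operand.
bool uadd_overflow(uint32_t a, uint32_t b, uint32_t &sum) {
  sum = a + b;     // wrapped result
  return sum < a;  // carry-out
}
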
Craig Topper dc185522fb [TwoAddressInstructionPass] After commuting an instruction and before trying to look for more commutable operands, resample the number of operands.
The new instruction might have fewer operands than the original instruction. If we don't resample, the next loop iteration might read an operand that doesn't exist.

X86 can commute blends to movss/movsd which reduces from 4 operands to 3. This happened in the test case that caused r354363 & company to be reverted. A reduced version of that has been committed here.

Really, this whole check for more commutable operands is a little fragile. It assumes that the new instruction's operands are in the same order and positions as the original's, except for the pair that was swapped. I don't know of anything that breaks this assumption today, but I've left a fixme. Fixing this will likely require an interface change.

llvm-svn: 354738
2019-02-23 21:41:44 +00:00
Craig Topper be9eeb5526 Recommit r354363 "[X86][SSE] Generalize X86ISD::BLENDI support to more value types"
And its follow ups r354511, r354640.

A follow patch will fix the issue that caused it to be reverted.

llvm-svn: 354737
2019-02-23 21:41:42 +00:00
Craig Topper ccc860cb81 Recommit r354647 and r354648 "[LegalizeTypes] When promoting the result of EXTRACT_SUBVECTOR, also check if the input needs to be promoted. Use that to determine the element type to extract"
r354648 was a follow up to fix a regression "[X86] Add a DAG combine for (aext_vector_inreg (aext_vector_inreg X)) -> (aext_vector_inreg X) to fix a regression from my previous commit."

These were reverted in r354713 as their context depended on other patches that were reverted for a bug.

llvm-svn: 354734
2019-02-23 19:51:32 +00:00
Simon Pilgrim e08f177ea2 [X86][AVX] concat_vectors(scalar_to_vector(x),scalar_to_vector(x)) --> broadcast(x)
For AVX1, limit this to i32/f32/i64/f64 loading cases only.

llvm-svn: 354730
2019-02-23 18:34:05 +00:00
Simon Pilgrim 31793733a0 [X86][AVX] Shuffle->Permute+Blend if we have one v4f64/v4i64 shuffle input in place
Even on AVX1 we can pretty cheaply (VPERM2F128+VSHUFPD) permute a single v4f64/v4i64 input (on AVX2 it's just a single VPERMPD), followed by a BLENDPD.

llvm-svn: 354729
2019-02-23 17:10:47 +00:00
Craig Topper 75afc0105c [X86] Sign extend the 8-bit immediate when commuting blend instructions to match isel.
Conversion from ConstantSDNode to MachineInstr sign extends immediates from their APInt representation to int64_t.

This commit makes sure we do the same for commuting. The test changes show how this improves CSE. This issue was made worse by MachineCSE using commuteInstruction to undo a commute. So we virtually guarantee the sign extend from isel would be lost.

The improved CSE also occurred with r354363, but that was reverted. I'm working to undo the revert, but wanted to get this fix in while it was easy to see the results.

llvm-svn: 354724
2019-02-23 08:34:10 +00:00
Reid Kleckner e3876637cf Revert r354363 & co "[X86][SSE] Generalize X86ISD::BLENDI support to more value types"
r354363 caused https://crbug.com/934963#c1, which has a plain C reduced
test case.

I also had to revert some dependent changes:
- r354648
- r354647
- r354640
- r354511

llvm-svn: 354713
2019-02-23 01:19:42 +00:00
Craig Topper a9697f24cf [X86] Enable custom splitting of v8i64/v16i32 sext/zext for avx/avx2 when the input type will be promoted by the type legalizer to 128-bits.
If the input type will be promoted to 128 bits, it's better to put a sign_extend_inreg/and in the 128 bit register before the split occurs. Otherwise we end up doing it on each half in the wider register.

Some of the overflow arithmetic tests are regressions, but I think we can make some improvement using getSetccResultType in DAG combine and/or type legalization.

llvm-svn: 354709
2019-02-23 00:35:02 +00:00
Craig Topper b95ca56361 [X86] Add a few test cases for a v8i64 sext/zext from an illegal type that needs to be promoted to 128 bits.
If v8i64 isn't a legal type but v4i64 is, these will be split and then each half will get their input promoted and become an any_extend_vector_inreg/punpckhwd + any_extend + and/sign_extend_inreg.

If we instead recognize the input will be promoted we can emit the and/sign_extend_inreg first in a 128 bit register. Then we can sign_extend/zero_extend one half and pshufd+sign_extend/zero_extend the other half.

llvm-svn: 354708
2019-02-23 00:34:58 +00:00
Sanjay Patel 973143ab79 [CGP] add tests for uaddo increment/decrement; NFC
llvm-svn: 354699
2019-02-22 23:19:34 +00:00
Guozhi Wei 4c8e480358 [MBP] Factor out function hasViableTopFallthrough and enhancement
This patch factors out the function hasViableTopFallthrough from rotateLoop, and also enhances it. The original code checks only whether there is a block that can be placed before the current loop top. This patch also checks whether the loop top is the most likely successor of its predecessor. The attached test case shows the effect.

Differential Revision: https://reviews.llvm.org/D58393

llvm-svn: 354682
2019-02-22 18:04:37 +00:00
Nirav Dave 44037d7a63 [DAGCombine] Fold overlapping constant stores
Fold a smaller constant store into larger constant stores immediately
preceding it.

Reviewers: rnk, courbet

Subscribers: javed.absar, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58468

llvm-svn: 354676
2019-02-22 16:00:19 +00:00
Sanjay Patel a9e289174a [x86] allow narrowing of vector UINT_TO_FP
As discussed in:
D56864
D58197

Always use the narrow (128-bit) instruction when possible.
We already had the signed int version of this transform.

llvm-svn: 354675
2019-02-22 15:47:45 +00:00
Craig Topper fa6187d230 [LegalizeVectorOps] Improve the placement of ANDs in the ExpandLoad path for non-byte-sized loads.
When we need to merge two adjacent loads, the AND mask for the low piece was still sized for the full source element size, but we don't have that many bits: the upper bits are already zero due to the SRL. So we can skip the AND if we're going to combine with the high bits.

We do need an AND to clear out any bits from the high part. We were ANDing the high part before combining with the low part, but it looks like ANDing after the OR gets better results.

So we can just emit the final AND after the optional concatenation is done. That handles both skipping before the OR and getting rid of extra high bits after the OR.

llvm-svn: 354655
2019-02-22 07:03:25 +00:00
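
In scalar terms, the improved expansion masks once after the pieces are joined (a rough sketch under assumed bit counts, not the legalizer code):

#include <cstdint>

// Reassemble a value whose bits straddle two loaded words, assuming
// loBits and totalBits are both less than 32. The SRL that produced
// `lo` already zeroed its upper bits, so no AND is needed before the
// OR; a single final AND trims whatever the high part brought along.
uint32_t mergeLoadedBits(uint32_t lo, uint32_t hi,
                         unsigned loBits, unsigned totalBits) {
  uint32_t v = lo | (hi << loBits);    // concatenate the two pieces
  return v & ((1u << totalBits) - 1);  // one AND after the OR
}
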
Craig Topper 0ca023b3b7 [X86] Add test cases to cover the path in VectorLegalizer::ExpandLoad for non-byte sized loads where bits from two loads need to be concatenated.
If the scalar type doesn't divide evenly into the WideVT then the code will need to take some bits from adjacent scalar loads and combine them.

But most of our testing is for i1 element type which always divides evenly.

llvm-svn: 354653
2019-02-22 06:18:32 +00:00
Craig Topper 3a391fc0e8 [X86] Add a DAG combine for (aext_vector_inreg (aext_vector_inreg X)) -> (aext_vector_inreg X) to fix a regression from my previous commit.
Type legalization is causing two nodes to be created here, but we can use a single node to extend from v8i16 to v2i64.

llvm-svn: 354648
2019-02-22 01:49:53 +00:00
Craig Topper be22f329a9 [LegalizeTypes] When promoting the result of EXTRACT_SUBVECTOR, also check if the input needs to be promoted. Use that to determine the element type to extract.
Otherwise we end up creating extract_vector_elts that then each need to have their input promoted. This can lead to truncates needing to be emitted for each of those.

But we already emitted any_extends when we legalized the extract_subvector. So now we have pairs of any_extend+trunc that partially cancel. But depending on how DAGCombiner visits them we can get weird results.

By promoting the input at the same time we can create only a single any_extend or truncate.

There's one regression in the vector-narrow-binop.ll case, but that looks easy to fix with a follow up patch.

llvm-svn: 354647
2019-02-22 01:49:50 +00:00
Craig Topper 427404c769 [X86] Fix some copy/paste mistakes that caused a VR128 to be used as the address of a load in an isel pattern
This was introduced in r354511.

Fixes PR40811.

llvm-svn: 354640
2019-02-22 00:04:35 +00:00
Craig Topper 2b34fdc67f [X86] Remove hasSideEffects=1 from the X87 pseudos with folded load.
This was done in r321424 to prevent scheduling from reordering things. But now that we model FPCW as a dependency, I don't think the same scheduling we were trying to prevent can occur.

llvm-svn: 354628
2019-02-21 22:00:15 +00:00
Sanjay Patel 234a5e8ea4 [x86] vectorize more cast ops in lowering to avoid register file transfers
This is a follow-up to D56864.

If we're extracting from a non-zero index before casting to FP,
then shuffle the vector and optionally narrow the vector before doing the cast:

cast (extelt V, C) --> extelt (cast (extract_subv (shuffle V, [C...]))), 0

This might be enough to close PR39974:
https://bugs.llvm.org/show_bug.cgi?id=39974

Differential Revision: https://reviews.llvm.org/D58197

llvm-svn: 354619
2019-02-21 20:40:39 +00:00
Sanjay Patel ba5ee817e9 [DAGCombiner] prevent infinite looping by truncating 'and' (PR40793)
This fold can occur during legalization, so it can fight with promotion
to the larger type. It apparently takes a special sequence and subtarget
to avoid more basic simplifications that would hide the problem.

But there's a bigger question raised here: why does distributeTruncateThroughAnd()
even exist? It duplicates functionality from a more minimal pattern that we
already have. But getting rid of this function requires some preliminary steps.

https://bugs.llvm.org/show_bug.cgi?id=40793

llvm-svn: 354594
2019-02-21 16:01:48 +00:00
Sanjay Patel d288687676 [x86] regenerate checks; NFC
llvm-svn: 354589
2019-02-21 15:30:28 +00:00
Nirav Dave dce91c1edb [X86] Fix copy-paste error in @ccz flag.
@ccz operand should be equivalent to @cce.

llvm-svn: 354588
2019-02-21 15:28:31 +00:00
Craig Topper 55cc7eb5cb [X86] Add test cases to show missed opportunities to remove AND mask from BTC/BTS/BTR instructions when LHS of AND has known zeros.
We can currently remove the mask if the immediate has all ones in the LSBs, but if the LHS of the AND has known zeros, then the immediate might have had bits removed.

A similar issue also occurs with shifts and rotates. I'm preparing a common fix for all of them.

llvm-svn: 354520
2019-02-20 21:35:05 +00:00
Sanjay Patel 198cc305e9 [CGP] match a special-case of unsigned subtract overflow
This is the 'sub0' (negate) pattern from PR31754:
https://bugs.llvm.org/show_bug.cgi?id=31754

llvm-svn: 354519
2019-02-20 21:23:04 +00:00
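
The 'sub0' (negate) pattern in source form (a sketch): unsigned 0 - x borrows for every non-zero x.

#include <cstdint>

// usubo(0, x): the subtraction overflows exactly when x != 0.
bool neg_overflow(uint32_t x, uint32_t &r) {
  r = 0u - x;      // wrapped negation
  return x != 0;   // borrow-out
}
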
Nirav Dave 48cf37b55c [DAGCombine] Generalize Dead Store to overlapping stores.
Summary:
Remove stores that are immediately overwritten by larger
stores.

Reviewers: courbet, rnk

Reviewed By: rnk

Subscribers: javed.absar, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58467

llvm-svn: 354518
2019-02-20 21:07:50 +00:00
Craig Topper 8d9c224a8c [SelectionDAG] Teach GetDemandedBits to look at the known zeros of the LHS when handling ISD::AND
If the LHS has known zeros, then the RHS immediate mask might have been simplified to remove those bits.

This patch adds a call to computeKnownBits to get the known zeroes to handle that possibility. I left an early out to skip the call if all of the demanded bits are set in the mask.

Differential Revision: https://reviews.llvm.org/D58464

llvm-svn: 354514
2019-02-20 20:52:26 +00:00
Nikita Popov c3b496de7a [SDAG] Support vector UMULO/SMULO
Second part of https://bugs.llvm.org/show_bug.cgi?id=40442.

This adds an extra UnrollVectorOverflowOp() method to SDAG, because
the general UnrollOverflowOp() method can't deal with multiple results.

Additionally we need to expand UMULO/SMULO during vector op
legalization, as it may result in unrolling, which may need additional
type legalization.

Differential Revision: https://reviews.llvm.org/D57997

llvm-svn: 354513
2019-02-20 20:41:44 +00:00
Craig Topper 31823fba2e [X86] Add more load folding patterns for blend instructions as a follow up to r354363.
This avoids depending on the peephole pass to do load folding.

Also adds some load folding for some insert_subvector patterns that use blend.

All of this was found by temporarily adding TB_NO_FORWARD to the blend immediate entries in the load folding tables.

I've added -disable-peephole to some of the affected tests from that experiment to ensure we're testing isel patterns.

llvm-svn: 354511
2019-02-20 20:18:20 +00:00
Nirav Dave c0fdd046c3 Fix testcase.
llvm-svn: 354508
2019-02-20 19:26:47 +00:00
Nirav Dave 1ce977f946 Add test case.
llvm-svn: 354504
2019-02-20 19:07:55 +00:00
Craig Topper b1b2fa35ed [X86] Add test case to show missed opportunity to remove an explicit AND on the bit position from BT when it has known zeros. NFC
If the bit position has known zeros in it, then the AND immediate will likely be optimized to remove bits.

This can prevent GetDemandedBits from recognizing that the AND is unnecessary.

llvm-svn: 354501
2019-02-20 19:02:01 +00:00
Craig Topper f4923db5a3 Revert r354498 "[X86] Add test case to show missed opportunity to remove an explicit AND on the bit position from BT when it has known zeros."
I accidentally committed more than just the test.

llvm-svn: 354499
2019-02-20 18:47:26 +00:00
Craig Topper f8498a615b [X86] Add test case to show missed opportunity to remove an explicit AND on the bit position from BT when it has known zeros.
If the bit position has known zeros in it, then the AND immediate will likely be optimized to remove bits.

This can prevent GetDemandedBits from recognizing that the AND is unnecessary.

llvm-svn: 354498
2019-02-20 18:45:38 +00:00
Sanjay Patel 8d91faa481 [CGP][x86] add tests for usubo special-case; NFC
This is another example from PR31754:
https://bugs.llvm.org/show_bug.cgi?id=31754

llvm-svn: 354475
2019-02-20 15:40:58 +00:00