Commit Graph

10167 Commits

Author SHA1 Message Date
Craig Topper 56af2cad89 [X86] Simplify the slow-incdec test and add test cases with optsize.
I think we want to consider using inc/dec with optsize.

llvm-svn: 312804
2017-09-08 17:33:54 +00:00
Simon Pilgrim 2e4fb24173 [X86] Added PR31045 test case
Reduced version of 'addr-calc-crash.ll' that was included in D27044, which had already been fixed by D31286/rL298633.

llvm-svn: 312786
2017-09-08 10:49:11 +00:00
Jatin Bhateja a251312719 [X86] Adding a test point for PR34149 'Suboptimal codegen for "fast" minnum and maxnum'
Differential Revision: https://reviews.llvm.org/D37614

llvm-svn: 312778
2017-09-08 09:15:36 +00:00
Chandler Carruth acbcf06f03 [x86] Flesh out the custom ISel for RMW arithmetic ops with used flags to
cover the bitwise operators.

Nothing really exciting here, this just stamps out the rest of the core
operations that can RMW memory and set flags.

Still not implemented here: ADC, SBB. Those will require more
interesting logic to channel the flags *in*, and I'm not currently
planning to try to tackle that. It might be interesting for someone who
wants to improve our code generation for bignum implementations.
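
As a rough illustration (a hypothetical test, not taken from the patch), this is
the shape this covers: a bitwise op whose memory operand is read-modify-written
and whose flags feed a branch, which can now select to a single memory-operand
`and` plus a conditional jump.

  define void @rmw_and_flags(i32* %p) {
  entry:
    %v = load i32, i32* %p
    %m = and i32 %v, 7
    store i32 %m, i32* %p
    %z = icmp eq i32 %m, 0
    br i1 %z, label %if.then, label %if.end

  if.then:
    tail call void @on_zero()
    br label %if.end

  if.end:
    ret void
  }

  declare void @on_zero()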

Differential Revision: https://reviews.llvm.org/D37141

llvm-svn: 312768
2017-09-08 00:17:12 +00:00
Chandler Carruth 52a31bf268 [x86] Extend the manual ISel of `add` and `sub` with both RMW memory
operands and used flags to support matching immediate operands.

This is a bit trickier than register operands, and we still want to fall
back on register operands even for things that appear to be
"immediates" when they won't actually select into the operation's
immediate operand. This also requires us to handle things like selecting
`sub` vs. `add` to minimize the number of bits needed to represent the
immediate, and picking the shortest immediate encoding. In order to do
that, we in turn need to scan to make sure that CF isn't used, as it will
get inverted.
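
A sketch of the immediate-selection subtlety (hypothetical example, not from the
patch): adding 128 to a memory operand can instead be selected as subtracting
-128, which fits the sign-extended imm8 encoding, but doing so inverts CF, hence
the scan mentioned above.

  define void @rmw_add_imm(i32* %p) {
    %v = load i32, i32* %p
    ; add 128 needs an imm32 encoding; sub -128 fits imm8 but flips CF,
    ; so the shorter form is only legal when CF is not consumed.
    %a = add i32 %v, 128
    store i32 %a, i32* %p
    %z = icmp eq i32 %a, 0
    br i1 %z, label %t, label %f
  t:
    ret void
  f:
    ret void
  }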

The end result seems very nice though, and we're now generating
optimal instruction sequences for these patterns IMO.

A follow-up patch will further expand this to other operations with RMW
memory operands. But handling `add` and `sub` is a useful starting point
to flesh out the machinery and make sure interesting and complex cases
can be handled.

Thanks to Craig Topper who provided a few fixes and improvements to this
patch in addition to the review!

Differential Revision: https://reviews.llvm.org/D37139

llvm-svn: 312764
2017-09-07 23:54:24 +00:00
Paul Robinson bb92137080 [DWARF] Line 0 should not have a discriminator.
It's meaningless and takes up extra space in the line table.

Differential Revision: https://reviews.llvm.org/D37364

llvm-svn: 312751
2017-09-07 22:15:44 +00:00
Michael Zuckerman 5a385940d3 [X86][LLVM] Expand support for lowerInterleavedLoad() in X86InterleavedAccess (VF{8|16|32} stride 3).
This patch expands the support of lowerInterleavedLoad() to {8|16|32}x8i stride 3.

LLVM creates suboptimal shuffle code-gen for AVX2. Overall, this patch is a specific fix for the pattern (stride=3, VF={8|16|32}), and we plan to cover the store (interleave) side as well.

The goal of the patch is to optimize the following sequence:
a0 b0 c0 a1 b1 c1 a2 b2
c2 a3 b3 c3 a4 b4 c4 a5
b5 c5 a6 b6 c6 a7 b7 c7

into

a0 a1 a2 a3 a4 a5 a6 a7
b0 b1 b2 b3 b4 b5 b6 b7
c0 c1 c2 c3 c4 c5 c6 c7
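
For reference, a hedged sketch of the IR shape this targets for VF=8
(illustrative only): one wide load followed by three stride-3 shufflevector
extracts.

  define void @deinterleave_i8_stride3(<24 x i8>* %p, <8 x i8>* %a.out, <8 x i8>* %b.out, <8 x i8>* %c.out) {
    %wide = load <24 x i8>, <24 x i8>* %p
    %a = shufflevector <24 x i8> %wide, <24 x i8> undef, <8 x i32> <i32 0, i32 3, i32 6, i32 9, i32 12, i32 15, i32 18, i32 21>
    %b = shufflevector <24 x i8> %wide, <24 x i8> undef, <8 x i32> <i32 1, i32 4, i32 7, i32 10, i32 13, i32 16, i32 19, i32 22>
    %c = shufflevector <24 x i8> %wide, <24 x i8> undef, <8 x i32> <i32 2, i32 5, i32 8, i32 11, i32 14, i32 17, i32 20, i32 23>
    store <8 x i8> %a, <8 x i8>* %a.out
    store <8 x i8> %b, <8 x i8>* %b.out
    store <8 x i8> %c, <8 x i8>* %c.out
    ret void
  }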

Reviewers: zvi, igor, guyblank, dorit, Ayal

llvm-svn: 312722
2017-09-07 14:02:13 +00:00
Florian Hahn d39b8a3533 [MachineCombiner] Update instruction depths incrementally for large BBs.
Summary:
For large basic blocks with lots of combinable instructions, the
MachineTraceMetrics computations in MachineCombiner can dominate the compile
time, as computing the trace information is quadratic in the number of
instructions in a BB and its relevant successors/predecessors.

In most cases, knowing the instruction depth should be enough to make
combination decisions. As we already iterate over all instructions in a basic
block, the instruction depth can be computed incrementally. This reduces the
cost of machine-combine drastically in cases where lots of instructions
are combined. The major drawback is that AFAIK, computing the critical path
length cannot be done incrementally. Therefore we only compute
instruction depths incrementally, for basic blocks with more
instructions than inc_threshold. The -machine-combiner-inc-threshold
option can be used to set the threshold and allows for easier
experimenting and checking if using incremental updates for all basic
blocks has any impact on the performance.

Reviewers: sanjoy, Gerolf, MatzeB, efriedma, fhahn

Reviewed By: fhahn

Subscribers: kiranchandramohan, javed.absar, efriedma, llvm-commits

Differential Revision: https://reviews.llvm.org/D36619

llvm-svn: 312719
2017-09-07 12:49:39 +00:00
Alexander Ivchenko f3a3cd198e [x86] Update to cmov promotion tests for D36711; NFC
Adding i8 -> [i16, i32, i64] and i32 -> i64 cases.
This way we can see what the current codegen looks like.

llvm-svn: 312707
2017-09-07 08:59:05 +00:00
Zvi Rackover 25799d93f0 X86: Improve AVX512 fptoui lowering
Summary:
Add patterns for
  fptoui <16 x float> to <16 x i8>
  fptoui <16 x float> to <16 x i16>
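
In IR terms, these correspond to conversions like (minimal sketch; function
names are illustrative):

  define <16 x i8> @fptoui_v16f32_v16i8(<16 x float> %x) {
    %r = fptoui <16 x float> %x to <16 x i8>
    ret <16 x i8> %r
  }

  define <16 x i16> @fptoui_v16f32_v16i16(<16 x float> %x) {
    %r = fptoui <16 x float> %x to <16 x i16>
    ret <16 x i16> %r
  }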

Reviewers: igorb, delena, craig.topper

Reviewed By: craig.topper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37505

llvm-svn: 312704
2017-09-07 07:40:34 +00:00
Sanjay Patel e96f875deb [x86] fix triple and regenerate checks for psubus; NFC
Patch by Yulia Koval!

Differential Revision: https://reviews.llvm.org/D37523

llvm-svn: 312662
2017-09-06 19:05:20 +00:00
Wei Mi 818d50a93d [TailCall] Allow llvm.memcpy/memset/memmove to be tail calls when the parent
function returns the intrinsic's first argument.

llvm.memcpy/memset/memmove return void but they will return the first
argument after they are expanded as libcalls. Now if the parent function
has any return value, llvm.memcpy cannot be turned into a tail call after
expansion.

The patch handles that case in SelectionDAGBuilder, so that when the caller
function returns the same value as the first argument of llvm.memcpy, the
tail call is allowed.
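
A minimal sketch of the newly allowed case (function name illustrative; the
intrinsic signature shown is the one in use at the time): the caller returns
its first argument, which is also the destination of llvm.memcpy, so the
expanded memcpy libcall can be emitted as a tail call.

  define i8* @copy_and_return(i8* %dst, i8* %src, i64 %n) {
    call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %n, i32 1, i1 false)
    ret i8* %dst
  }

  declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture readonly, i64, i32, i1)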

Differential Revision: https://reviews.llvm.org/D37406

llvm-svn: 312641
2017-09-06 16:05:17 +00:00
Chandler Carruth 585bfc8443 [x86] Fix PR34377 by disabling cmov conversion when we relied on it
performing a zext of a register.

On the PR there is discussion of how to more effectively handle this,
but this patch prevents us from miscompiling code.

Differential Revision: https://reviews.llvm.org/D37504

llvm-svn: 312620
2017-09-06 06:28:08 +00:00
Zvi Rackover 5ebe94a84d X86 Tests: Tidy up AVX512 conversion tests. NFC.
Rename functions to a consistent format to make it easier to track coverage.

llvm-svn: 312619
2017-09-06 05:33:04 +00:00
Jatin Bhateja 80b5e38c4e Updating a test reference for rL312608.
Differential Revision: https://reviews.llvm.org/D37501

llvm-svn: 312614
2017-09-06 03:58:14 +00:00
Jatin Bhateja 2c139f77c7 [X86] Allow cross-lane permutations for sub targets supporting AVX2.
Summary:
Most instructions in AVX work “in-lane”, that is, each source element is applied only to other
elements of the same lane; thus a cross-lane permutation is costly and needs more than one instruction.
AVX2 includes instructions to perform any-to-any permutation of words over a 256-bit register
and vectorized table lookup.

This should also fix PR34369.
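
As an illustration (hypothetical example, not from the patch): the following
shuffle moves elements across the two 128-bit lanes of a 256-bit vector, so
with AVX2 it can be lowered to a single any-to-any permute (e.g. vpermd/vpermps)
instead of a multi-instruction in-lane sequence.

  define <8 x i32> @cross_lane_shuffle(<8 x i32> %v) {
    %s = shufflevector <8 x i32> %v, <8 x i32> undef, <8 x i32> <i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
    ret <8 x i32> %s
  }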

Differential Revision: https://reviews.llvm.org/D37388

llvm-svn: 312608
2017-09-06 02:58:47 +00:00
Reid Kleckner e33c94f1b0 Add llvm.codeview.annotation to implement MSVC __annotation
Summary:
This intrinsic represents a label with a list of associated metadata
strings. It is modelled as reading and writing inaccessible memory so
that it won't be removed as dead code. I think the intention is that the
annotation strings should appear at most once in the debug info, so I
marked it noduplicate. We are allowed to inline code with annotations as
long as we strip the annotation, but that can be done later.
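
A hedged sketch of how a call to the intrinsic might look in IR for something
like __annotation(L"first", L"second") (exact attributes and metadata form are
illustrative):

  declare void @llvm.codeview.annotation(metadata)

  define void @f() {
  entry:
    ; the label carries the annotation strings as a metadata string list
    call void @llvm.codeview.annotation(metadata !0)
    ret void
  }

  !0 = !{!"first", !"second"}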

Reviewers: majnemer

Subscribers: eraman, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D36904

llvm-svn: 312569
2017-09-05 20:14:58 +00:00
Craig Topper 784fa8a4e3 [X86] Remove unnecessary (v4f32 (X86vzmovl (v4f32 (scalar_to_vector FR32X)))) patterns
We had already disabled the pattern for SSE4.1 and SSE4.2. But it got re-enabled for AVX and AVX512.

With SSE41 we rely on a separate (v4f32 (X86vzmovl VR128)) pattern to select blendps with an xorps to create zeroes, and a separate (v4f32 (scalar_to_vector FR32X)) pattern to select a COPY_TO_REG_CLASS to move FR32 to VR128.

The same thing can happen for AVX with vblendps and those separate patterns already exist.

For AVX512, (v4f32 (X86vzmovl VR128)) will select a VMOVSS instruction instead of VBLENDPS due to there not being an EVEX VBLENDPS. This is what we were getting out of the larger pattern anyway. So the larger pattern is unneeded for AVX512 too.

For SSE1-SSSE3 we can rely on (v4f32 (X86vzmovl VR128)) selecting a MOVSS, similar to AVX512. Again, this is what the larger pattern did too.

So the only real change here is that AVX1/2 now properly outputs a VBLENDPS during isel instead of a VMOVSS to match SSE41. Most tests didn't notice because the two-address instruction pass knows how to turn VMOVSS into VBLENDPS to get an independent destination register.

llvm-svn: 312564
2017-09-05 19:09:02 +00:00
Zvi Rackover 2096893f34 X86 Tests: Adding missing AVX512 fptoui coverage tests. NFC.
Some of the cases show missing patterns I intend to fix shortly.

llvm-svn: 312560
2017-09-05 18:24:39 +00:00
Craig Topper 33caeadd90 [AVX512] Remove patterns for (v8f32 (X86vzmovl (insert_subvector undef, (v4f32 (scalar_to_vector FR32X:)), (iPTR 0)))) and the same for v4f64.
We don't have this same pattern for AVX2 so I don't believe we should have it for AVX512. We also didn't have it for v16f32.

llvm-svn: 312543
2017-09-05 17:33:58 +00:00
Simon Pilgrim 49f9ba37d8 [X86] Limit store merge size when implicitfloat is enabled (PR34421)
As suggested by @niravd : https://bugs.llvm.org/show_bug.cgi?id=34421#c2

Differential Revision: https://reviews.llvm.org/D37464

llvm-svn: 312534
2017-09-05 13:40:29 +00:00
Simon Pilgrim 8dbd745b09 [X86] Regenerate scalar rotation tests
llvm-svn: 312530
2017-09-05 12:28:30 +00:00
Simon Pilgrim 08246d185b [X86][AVX512] Use AVX512 attributes instead of -mcpu in vector shift tests
llvm-svn: 312529
2017-09-05 12:23:45 +00:00
Simon Pilgrim 3cbe005a69 [X86][AVX512] Use AVX512 attributes instead of -mcpu
llvm-svn: 312528
2017-09-05 12:05:14 +00:00
Sanjay Patel 8d7c8c7960 [x86] add tests for vector store merge opportunity; NFC
llvm-svn: 312504
2017-09-04 22:01:25 +00:00
Sanjay Patel 543f3fda83 [x86] auto-generate complete checks; NFC
llvm-svn: 312503
2017-09-04 21:46:05 +00:00
Sanjay Patel 4e10b61d8f [x86] add/regenerate complete checks; NFC
llvm-svn: 312502
2017-09-04 21:43:32 +00:00
Sanjay Patel d413303b83 [x86] add test for unnecessary cmp + masked store; NFC
As noted in PR11210:
https://bugs.llvm.org/show_bug.cgi?id=11210
...fixing this should allow us to eliminate x86-specific masked store intrinsics in IR.
(Although more testing will be needed to confirm that.)

llvm-svn: 312496
2017-09-04 17:21:17 +00:00
Sam McCall f71bb198ed Revert "Re-enable "[MachineCopyPropagation] Extend pass to do COPY source forwarding""
This crashes on boringSSL on PPC (will send reduced testcase)

This reverts commit r312328.

llvm-svn: 312490
2017-09-04 15:47:00 +00:00
Simon Pilgrim 91751b42f6 [X86][AVX512] Add support for VPERMILPS v16f32 shuffle lowering (PR34382)
Avoid use of VPERMPS where we don't need it by instead using the variable mask version of VPERMILPS for unary shuffles.

llvm-svn: 312486
2017-09-04 13:51:57 +00:00
Simon Pilgrim adffa8b2e9 Added shuffle test case from PR34382
llvm-svn: 312485
2017-09-04 13:43:13 +00:00
Simon Pilgrim 62c78f27d4 Added shuffle test case from PR34369
llvm-svn: 312481
2017-09-04 11:08:47 +00:00
Ayman Musa 5defce3986 [X86] Replace -mcpu option with -mattr in LIT tests added in https://reviews.llvm.org/rL312442
llvm-svn: 312474
2017-09-04 09:31:32 +00:00
Igor Breger 2661ae48c7 [GlobalISel][X86] G_PHI support.
llvm-svn: 312473
2017-09-04 09:06:45 +00:00
Dean Michael Berris ebc1659016 [XRay][CodeGen] Use PIC-friendly code in XRay sleds and remove synthetic references in .text
Summary:
This is a re-roll of D36615 which uses PLT relocations in the back-end
for the call to __xray_CustomEvent() when building in -fPIC and
-fxray-instrument mode.

Reviewers: pcc, djasper, bkramer

Subscribers: sdardis, javed.absar, llvm-commits

Differential Revision: https://reviews.llvm.org/D37373

llvm-svn: 312466
2017-09-04 05:34:58 +00:00
Craig Topper 76f44015e7 [X86] Add a combine to recognize when we have two insert subvectors that together write the whole vector, but the starting vector isn't undef.
In this case we should replace the starting vector with undef.

llvm-svn: 312462
2017-09-04 01:13:36 +00:00
Craig Topper bc13af84f2 [X86] Add a combine to turn (insert_subvector zero, (insert_subvector zero, X, Idx), Idx) into an insert of X into the larger zero vector.
llvm-svn: 312460
2017-09-03 22:25:52 +00:00
Craig Topper fcf6bc5503 [X86] Add more patterns to use moves to zero the upper portions of a vector register that I missed in r312450.
llvm-svn: 312459
2017-09-03 22:25:50 +00:00
Craig Topper 788fbe08db [X86] Combine inserting a vector of zeros into a vector of zeros to just the larger zero vector.
llvm-svn: 312458
2017-09-03 22:25:49 +00:00
Craig Topper 8ee36ffb54 [X86] Add patterns to turn an insert into lower subvector of a zero vector into a move instruction which will implicitly zero the upper elements.
Ideally we'd be able to emit the SUBREG_TO_REG without the explicit register->register move, but we'd need to be sure the producing operation would select something that guaranteed the upper bits were already zeroed.

llvm-svn: 312450
2017-09-03 17:52:25 +00:00
Craig Topper fa82efb50a [X86] Add VBLENDPS/VPBLENDD to the execution domain fixing tables.
llvm-svn: 312449
2017-09-03 17:52:23 +00:00
Craig Topper bb6506d251 [X86] Canonicalize (concat_vectors X, zero) -> (insert_subvector zero, X, 0).
In a future patch, I plan to teach isel to use a small vector move with implicit zeroing of the upper elements when it sees the (insert_subvector zero, X, 0) pattern.

llvm-svn: 312448
2017-09-03 17:52:19 +00:00
Ayman Musa 2927ea0b19 [X86] Add -mtriple option to LIT tests added in https://reviews.llvm.org/rL312442
llvm-svn: 312443
2017-09-03 15:06:26 +00:00
Ayman Musa ef8f61bce6 [X86][AVX512] Add simple tests for all AVX512 shuffle instructions.
As part of an effort to thoroughly check the behavior of CodeGen with the IR shufflevector instruction, we generated many tests while predicting the best X86 sequence that may be generated.

This is a subset of the generated tests that we think may add value to our X86 set of tests.

Some of the checks are not optimal and will be changed after fixing:
1. PR34394
2. PR34382
3. PR34380
4. PR34359

Differential Revision: https://reviews.llvm.org/D37329

llvm-svn: 312442
2017-09-03 13:53:44 +00:00
Ayman Musa ac12849d32 [X86] Add RUN line for LIT test committed in "rL312438: [X86] Fix crash on assert of non-simple type after type-legalization.".
llvm-svn: 312439
2017-09-03 10:44:18 +00:00
Ayman Musa 44cde94935 [X86] Fix crash on assert of non-simple type after type-legalization
The function combineShuffleToVectorExtend in DAGCombine might generate an illegally typed node after the "legalize types" phase, causing an assertion on a non-simple type to fail afterwards.

Adding a type check in case the combine is running after the type legalization pass.

Differential Revision: https://reviews.llvm.org/D37330

llvm-svn: 312438
2017-09-03 09:09:16 +00:00
Craig Topper 619b759a57 [X86] Teach fastisel to handle zext/sext i8->i16 and sext i1->i8/i16/i32/i64
Summary:
ZExt and SExt from i8 to i16 aren't implemented in the autogenerated fast isel table because normal isel does a zext/sext to 32-bits and a subreg extract to avoid a partial register write or false dependency on the upper bits of the destination. This means without handling in fast isel we end up triggering a fast isel abort.

We had no custom sign extend handling at all so while I was there I went ahead and implemented sext i1->i8/i16/i32/i64 which was also missing. This generates an i1->i8 sign extend using a mask with 1, then an 8-bit negate, then continues with a sext from i8. A better sequence would be a wider and/negate, but would require more custom code.

Fast isel tests are a mess and I couldn't find a good home for the tests so I created a new one.

The test pr34381.ll had to have fast-isel removed because it was relying on a fast isel abort to hit the bug. The test case still seems valid with fast-isel disabled though some of the instructions changed.
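
For reference, minimal examples of the newly handled extensions (illustrative
functions; with fast isel, e.g. at -O0, these no longer trigger an abort):

  define i16 @zext_i8_to_i16(i8 %x) {
    %r = zext i8 %x to i16
    ret i16 %r
  }

  define i32 @sext_i1_to_i32(i1 %x) {
    %r = sext i1 %x to i32
    ret i32 %r
  }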

Reviewers: spatel, zvi, igorb, guyblank, RKSimon

Reviewed By: guyblank

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D37320

llvm-svn: 312422
2017-09-02 18:53:46 +00:00
Sanjay Patel f4425e9a66 [x86] eliminate redundant shuffle of horizontal math ops when both inputs are the same
This is limited to a set of patterns based on the example in PR34111:
https://bugs.llvm.org/show_bug.cgi?id=34111
...but as I was investigating this, I saw that horizontal patterns can go wrong in many,
many other ways that would not be handled by this patch. Each data type may even end up
different in the DAG after starting with the same basic IR pattern, so even proper IR
canonicalization won't fix it all.
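
A rough sketch of the shape being handled (hypothetical example in the spirit
of PR34111): with both hadd inputs identical, the low and high halves of the
result are duplicates, so a shuffle that merely swaps those halves is redundant
and can be dropped.

  define <4 x float> @hadd_dup_then_shuffle(<4 x float> %x) {
    %h = call <4 x float> @llvm.x86.sse3.hadd.ps(<4 x float> %x, <4 x float> %x)
    ; %h = <x0+x1, x2+x3, x0+x1, x2+x3>, so swapping the 64-bit halves is a no-op
    %s = shufflevector <4 x float> %h, <4 x float> undef, <4 x i32> <i32 2, i32 3, i32 0, i32 1>
    ret <4 x float> %s
  }

  declare <4 x float> @llvm.x86.sse3.hadd.ps(<4 x float>, <4 x float>)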

Differential Revision: https://reviews.llvm.org/D37357

llvm-svn: 312379
2017-09-01 21:09:04 +00:00
Craig Topper 2a75b6f26b [X86] Add test case I forgot to commit with r312285.
llvm-svn: 312335
2017-09-01 16:40:24 +00:00
Geoff Berry 65528f2991 Re-enable "[MachineCopyPropagation] Extend pass to do COPY source forwarding"
Issues addressed since original review:
- Moved removal of dead instructions found by
  LiveIntervals::shrinkToUses() outside of loop iterating over
  instructions to avoid instructions being deleted while pointed to by
  iterator.
- Fixed ARMLoadStoreOptimizer bug exposed by this change in r311907.
- The pass no longer forwards COPYs to physical register uses, since
  doing so can break code that implicitly relies on the physical
  register number of the use.
- The pass no longer forwards COPYs to undef uses, since doing so
  can break the machine verifier by creating LiveRanges that don't
  end on a use (since the undef operand is not considered a use).

  [MachineCopyPropagation] Extend pass to do COPY source forwarding

  This change extends MachineCopyPropagation to do COPY source forwarding.

  This change also extends the MachineCopyPropagation pass to be able to
  be run during register allocation, after physical registers have been
  assigned, but before the virtual registers have been re-written, which
  allows it to remove virtual register COPY LiveIntervals that become dead
  through the forwarding of all of their uses.

llvm-svn: 312328
2017-09-01 14:27:20 +00:00