Commit Graph

21959 Commits

Stefan Pintilie 8f0c783095 [PowerPC] Try to simplify a Swap if it feeds a Splat
If we have the situation where a Swap feeds a Splat, we can sometimes change the
index on the Splat and then remove the Swap instruction.
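
Illustration only, not from the patch: a minimal IR sketch of the kind of
pattern this targets, assuming the first shuffle lowers to a vector swap and
the second to a splat.

    ; The first shuffle reverses the two doublewords of %v (lanes <2,3,0,1>);
    ; splatting lane 0 of that result is the same as splatting lane 2 of %v,
    ; so the swap can be dropped and the splat index adjusted instead.
    define <4 x i32> @swap_then_splat(<4 x i32> %v) {
      %swap = shufflevector <4 x i32> %v, <4 x i32> undef, <4 x i32> <i32 2, i32 3, i32 0, i32 1>
      %splat = shufflevector <4 x i32> %swap, <4 x i32> undef, <4 x i32> zeroinitializer
      ret <4 x i32> %splat
    }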

Fixed the test case that was failing and recommitted after the original
commit was reverted.

Original revision is here: https://reviews.llvm.org/D39009

llvm-svn: 316478
2017-10-24 17:44:27 +00:00
Simon Pilgrim 5e8c3f328f [X86][AVX] ComputeNumSignBitsForTargetNode - add support for X86ISD::VTRUNC
llvm-svn: 316462
2017-10-24 17:04:57 +00:00
Simon Pilgrim 1bc62f03a5 [SelectionDAG] Add VSELECT support to ComputeNumSignBits
llvm-svn: 316457
2017-10-24 16:38:38 +00:00
Simon Pilgrim 0a12c239b6 [X86] truncateVectorCompareWithPACKSS - use PACKSSDW/PACKSSWB instead of just PACKSSWB.
By using the widest type possible for PACKSS truncation we have a better chance of being able to peek through bitcasts, and it improves other combines driven by ComputeNumSignBits.
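
Roughly speaking (illustration, not a test from the patch), a v8i32 compare
result is all sign bits per lane, so the truncation below can now use the
wider PACKSSDW on the two 128-bit halves instead of packing byte by byte:

    ; Each lane of %s is 0 or -1, so packssdw preserves the values exactly.
    define <8 x i16> @trunc_cmp(<8 x i32> %a, <8 x i32> %b) {
      %c = icmp sgt <8 x i32> %a, %b
      %s = sext <8 x i1> %c to <8 x i32>
      %t = trunc <8 x i32> %s to <8 x i16>
      ret <8 x i16> %t
    }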

llvm-svn: 316448
2017-10-24 15:38:16 +00:00
Sanjay Patel f762c7b32f [x86] add more vector ISA variants for memcmp expansion; NFC
...because every swiss cheese has different holes.

llvm-svn: 316446
2017-10-24 15:27:47 +00:00
Andrew V. Tischenko f4fbe4a51b Update f16c instruction scheduling on btver2.
Differential Revision: https://reviews.llvm.org/D39051

llvm-svn: 316435
2017-10-24 13:38:30 +00:00
Zvi Rackover 31b101a186 X86CallFrameOptimization: Recognize 'store 0/-1 using and/or' idioms
Summary:
r264440 added or/and patterns for storing -1 or 0 with the intention of decreasing code size. However,
X86CallFrameOptimization does not recognize these memory accesses, so it will not replace them with pushes when profitable.

This patch fixes this problem by teaching X86CallFrameOptimization to recognize these store 0/-1 idioms.
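
As a hedged sketch (function names invented, not taken from the patch), the
kind of IR that exposes the issue on 32-bit x86, where the outgoing 0/-1
arguments become stack stores:

    ; r264440 emits these argument stores as 'and $0, mem' / 'or $-1, mem' to
    ; save code size; with this patch X86CallFrameOptimization can still turn
    ; them into 'push $0' / 'push $-1' when profitable.
    declare void @callee(i32, i32)

    define void @caller() {
      call void @callee(i32 0, i32 -1)
      ret void
    }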

An alternative fix would be to prevent the 'store 0/-1 idioms' patterns from firing when accessing the stack. This would remove
the need to teach the pass about these idioms. However, because X86CallFrameOptimization does not always fire, we may end up
with cases where neither X86CallFrameOptimization nor the 'store 0/-1 idioms' patterns fire.

Fixes pr34863

Reviewers: DavidKreitzer, guyblank, aymanmus

Reviewed By: aymanmus

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D38738

llvm-svn: 316431
2017-10-24 12:13:05 +00:00
Marek Olsak ce76ea0394 AMDGPU: Add new intrinsic llvm.amdgcn.kill(i1)
Summary:
Kill the thread if operand 0 == false.
llvm.amdgcn.wqm.vote can be applied to the operand.

Also allow kill in all shader stages.
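
A minimal usage sketch (not from the patch) combining the two intrinsics:

    declare i1 @llvm.amdgcn.wqm.vote(i1)
    declare void @llvm.amdgcn.kill(i1)

    define amdgpu_ps void @discard_if_negative(float %x) {
      ; The lane is killed when the (voted) condition is false.
      %cond = fcmp oge float %x, 0.0
      %vote = call i1 @llvm.amdgcn.wqm.vote(i1 %cond)
      call void @llvm.amdgcn.kill(i1 %vote)
      ret void
    }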

Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D38544

llvm-svn: 316427
2017-10-24 10:27:13 +00:00
Marek Olsak 2114fc3bcb AMDGPU: Add llvm.amdgcn.wqm.vote intrinsic
Reviewers: arsenm, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye

Differential Revision: https://reviews.llvm.org/D38543

llvm-svn: 316426
2017-10-24 10:26:59 +00:00
Zvi Rackover 3c0d385598 X86: Fix X86CallFrameOptimization to search for the COPY StackPointer
SelectionDAG inserts a copy of ESP into a virtual register.
X86CallFrameOptimization assumed that the COPY, if present, is always
right after the call-frame setup instruction (ADJCALLSTACKDOWN). This was a
wrong assumption as the COPY can be located anywhere between the call-frame setup
instruction and its first use. If the COPY happened to be located in a different
location than what X86CallFrameOptimization assumed, visiting it while
processing the call chain would lead to a conservative bail-out.

The fix is quite straightforward: scan ahead for the stack-pointer copy and make note
of it so it can be ignored while processing the call chain.

Fixes pr34903

Differential Revision: https://reviews.llvm.org/D38730

llvm-svn: 316416
2017-10-24 07:38:29 +00:00
Omer Paparo Bivas 2251c79aba [MC] Adding code padding for performance stability - infrastructure. NFC.
This adds infrastructure for padding code with nop instructions in key places so that a performance improvement can be achieved.
The infrastructure is implemented such that the padding is done in the Assembler after the layout is done and all IPs and alignments are known.
This patch by itself is an NFC. Future patches will make use of this infrastructure to implement the required policies for code padding.

Reviewers: aaboud, zvi, craig.topper, gadi.haber
Differential revision: https://reviews.llvm.org/D34393

Change-Id: I92110d0c0a757080a8405636914a93ef6f8ad00e
llvm-svn: 316413
2017-10-24 06:16:03 +00:00
Zvi Rackover c6d0b6c103 X86: Register the X86CallFrameOptimization pass
Summary:
The motivation for this change is to enable .mir testing for this pass.
Added one test case to cover the functionality; this same case will be
improved by a future patch.

Reviewers: igorb, guyblank, DavidKreitzer

Reviewed By: guyblank, DavidKreitzer

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D38729

llvm-svn: 316412
2017-10-24 05:47:07 +00:00
Jessica Paquette 9df7fde269 [MachineOutliner] Add optimisation remarks for successful outlining
This commit adds optimisation remarks for outlining which fire when a function
is successfully outlined.

To do this, OutlinedFunctions must now contain references to their Candidates.
Since the Candidates must still be sorted and worked on separately, this is
done by working on everything in terms of shared_ptrs to Candidates. This is
good; it means that we can easily move everything to outlining in terms of
the OutlinedFunctions rather than the individual Candidates. This is far more
intuitive than what's currently there!

(Remarks are output when a function is created for some group of Candidates.
In a later commit, all of the outlining logic should be rewritten so that we
loop over OutlinedFunctions rather than over Candidates.)
 

llvm-svn: 316396
2017-10-23 23:36:46 +00:00
Aditya Nandakumar 921f24cef1 [GISel][ARM]: Fix illegal Generic copies in tests
This is in preparation for a verifier check that makes sure
copies are of the same size (when generic virtual registers are involved).

llvm-svn: 316388
2017-10-23 22:53:08 +00:00
Aditya Nandakumar 4dfd2590dc [GISel][AArch64]: Fix illegal Generic copies in tests
This is in preparation for a verifier check that makes sure copies are
of the same size (when generic virtual registers are involved).

llvm-svn: 316387
2017-10-23 22:53:04 +00:00
Simon Pilgrim 321e54f72d [X86][SSE] combineBitcastvxi1 - use PACKSSWB directly to pack v8i16 to v16i8
Avoid difficulties determining the number of sign bits later on in shuffle lowering when trying to lower to PACKSS.

llvm-svn: 316383
2017-10-23 22:05:02 +00:00
George Burgess IV 8a0e4bc972 Don't crash when we see unallocatable registers in clobbers
This fixes a bug where we'd crash given code like the test-case from
https://bugs.llvm.org/show_bug.cgi?id=30792. Instead, we let the offending
clobber silently slide through.

This doesn't fully fix said bug, since the assembler will still complain
the moment it sees a crypto/fp/vector op, and we still don't diagnose
calls that require vector regs.

Differential Revision: https://reviews.llvm.org/D39030

llvm-svn: 316374
2017-10-23 20:46:36 +00:00
Stefan Pintilie 52bbd587ac Revert "[PowerPC] Try to simplify a Swap if it feeds a Splat"
Revert commit r316366.
The previous commit causes p8-scalar_vector_conversions.ll to fail.

This reverts commit 990e764ad8a2eec206ce5dda6aefab059ccd4e92.

llvm-svn: 316371
2017-10-23 20:22:23 +00:00
Krzysztof Parzyszek 6f06b6edff [Hexagon] Return the correct chain edge for i1 function calls
In HexagonISelLowering, there is code to handle the case when
a function returns an i1 type. In this case, we need to generate
extra nodes to copy the result from R0 to a predicate register.

The code was returning the wrong value for the chain edge which
caused an assert "Wrong topological sorting" when converting the
instructions to MIs.

This patch fixes the problem by returning the chain for the final
copy.

Patch by Brendon Cahoon.

llvm-svn: 316367
2017-10-23 19:35:25 +00:00
Stefan Pintilie feafa1d7f0 [PowerPC] Try to simplify a Swap if it feeds a Splat
If we have the situation where a Swap feeds a Splat, we can sometimes change the
index on the Splat and then remove the Swap instruction.

Differential Revision: https://reviews.llvm.org/D39009

llvm-svn: 316366
2017-10-23 19:33:31 +00:00
Krzysztof Parzyszek 273678823b [Hexagon] Add extra pattern for S4_addaddi
One combination was missing: add(add(x,y),c).
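
A tiny illustrative sketch of the previously-missing shape:

    ; add(add(x,y),c): with the extra pattern this can select a single
    ; S4_addaddi (Rd = add(Rs, add(Ru, #imm))).
    define i32 @addaddi(i32 %x, i32 %y) {
      %t = add i32 %x, %y
      %r = add i32 %t, 7
      ret i32 %r
    }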

llvm-svn: 316363
2017-10-23 19:07:50 +00:00
Daniel Sanders d66e0901ae [globalisel][tablegen] Import stores and allow GISel to automatically substitute zero regs like WZR/XZR/$zero.
This patch enables the import of stores. Unfortunately, doing so by itself
loses an optimization where storing 0 to memory makes use of WZR/XZR.

To mitigate this, this patch also introduces a new feature that allows register
operands to nominate a zero register. When this is done, GlobalISel will
substitute (G_CONSTANT 0) with the nominated register automatically. This
is currently configured to only apply to the stores.
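
For example (a sketch, not a test from the patch), a simple store of zero on
AArch64:

    ; The G_CONSTANT 0 feeding this store can be replaced by WZR, so the
    ; selected code can be 'str wzr, [x0]' instead of materializing 0 first.
    define void @store_zero(i32* %p) {
      store i32 0, i32* %p
      ret void
    }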

Applying it to GPR32/GPR64 register classes in general will be done after
review (see https://reviews.llvm.org/D39150).

llvm-svn: 316360
2017-10-23 18:19:24 +00:00
Simon Pilgrim 6bda996cab [X86][SSE] Regenerate PACKSS tests on 32 + 64-bit targets
llvm-svn: 316354
2017-10-23 17:50:40 +00:00
Matt Arsenault b791802aef AMDGPU: Fix default range in non-kernel functions
The range should be assumed to be the hardware maximum
if a workitem intrinsic is used in a callable function
which does not know the restricted limit of the calling
kernel.
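
Roughly the situation in question (an illustrative sketch, not a test from the
patch):

    declare i32 @llvm.amdgcn.workitem.id.x()

    ; In a callable (non-kernel) function the calling kernel's workgroup size
    ; is unknown, so the id must conservatively be assumed to lie anywhere
    ; below the hardware maximum rather than in a kernel-specific range.
    define i32 @callable_helper() {
      %id = call i32 @llvm.amdgcn.workitem.id.x()
      ret i32 %id
    }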

llvm-svn: 316346
2017-10-23 17:09:35 +00:00
Andrew V. Tischenko 777308b548 Update DPPD/DPPS instruction scheduling on btver2.
Differential Revision: https://reviews.llvm.org/D39046

llvm-svn: 316334
2017-10-23 15:53:30 +00:00
Simon Pilgrim 32da2f9245 [DAGCombine] Permit combining of shuffles of equivalent splat BUILD_VECTORs
combineShuffleOfScalars is very conservative about shuffled BUILD_VECTORs that can be combined together.

This patch adds one additional case - if both BUILD_VECTORs represent splats of the same scalar value but with different UNDEF elements, then we should create a single splat BUILD_VECTOR, sharing only the UNDEF elements defined by the shuffle mask.
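
Purely for illustration (constants are used here for readability; the combine
itself runs on BUILD_VECTOR nodes in the DAG), two splats of the same value
that differ only in their undef lanes:

    ; Both operands splat 7; after the combine a single splat BUILD_VECTOR can
    ; be used, with undef kept only in the lanes the mask actually selects as
    ; undef (lanes 1 and 2 here).
    define <4 x i32> @merge_splats() {
      %s = shufflevector <4 x i32> <i32 7, i32 undef, i32 7, i32 7>, <4 x i32> <i32 7, i32 7, i32 undef, i32 7>, <4 x i32> <i32 0, i32 1, i32 6, i32 3>
      ret <4 x i32> %s
    }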

Differential Revision: https://reviews.llvm.org/D38696

llvm-svn: 316331
2017-10-23 15:48:08 +00:00
Simon Pilgrim 03c8753924 [X86][SSE] Regenerate bitcast-and-setcc tests
Avoid the retl/retq changes in an upcoming patch

llvm-svn: 316328
2017-10-23 14:47:49 +00:00
Simon Pilgrim e131cb0bd5 [X86][AVX2] Regenerate AVX2 intrinsics tests on 32 + 64-bit targets
llvm-svn: 316326
2017-10-23 14:19:46 +00:00
Simon Pilgrim c680c4742b [X86][AVX] Regenerate AVX intrinsics tests on 32 + 64-bit targets
llvm-svn: 316325
2017-10-23 14:17:59 +00:00
Simon Pilgrim eae6e9dbc5 [X86][F16C] Regenerate F16C schedule tests
llvm-svn: 316324
2017-10-23 14:15:24 +00:00
Ayman Musa 4b2bd5ff5e [X86] Add test for opportunity to use bzhi X86 instruction instead of load+and instructions.
Transformation uploaded for CR in https://reviews.llvm.org/D34141.

llvm-svn: 316320
2017-10-23 10:24:19 +00:00
Marina Yatsina f9371d821f Add logic to greedy reg alloc to avoid bad eviction chains
This fixes bugzilla 26810
https://bugs.llvm.org/show_bug.cgi?id=26810

This is intended to prevent sequences like:
movl %ebp, 8(%esp) # 4-byte Spill
movl %ecx, %ebp
movl %ebx, %ecx
movl %edi, %ebx
movl %edx, %edi
cltd
idivl %esi
movl %edi, %edx
movl %ebx, %edi
movl %ecx, %ebx
movl %ebp, %ecx
movl 16(%esp), %ebp # 4-byte Reload

Such sequences are created in 2 scenarios:

Scenario #1:
vreg0 is evicted from physreg0 by vreg1
Evictee vreg0 is intended for region splitting with split candidate physreg0 (the reg vreg0 was evicted from)
Region splitting creates a local interval because of interference with the evictor vreg1 (normally region splitting creates 2 intervals, the "by reg" and "by stack" intervals; a local interval is created when interference occurs).
one of the split intervals ends up evicting vreg2 from physreg1
Evictee vreg2 is intended for region splitting with split candidate physreg1
one of the split intervals ends up evicting vreg3 from physreg2 etc.. until someone spills

Scenario #2
vreg0 is evicted from physreg0 by vreg1
vreg2 is evicted from physreg2 by vreg3 etc
Evictee vreg0 is intended for region splitting with split candidate physreg1
Region splitting creates a local interval because of interference with the evictor vreg1
one of the split intervals ends up evicting back original evictor vreg1 from physreg0 (the reg vreg0 was evicted from)
Another evictee vreg2 is intended for region splitting with split candidate physreg1
one of the split intervals ends up evicting vreg3 from physreg2 etc.. until someone spills

As compile time was a concern, I've added a flag to control whether we do cost calculations for local intervals we expect to be created (it's on by default for the X86 target, off for the rest).

Differential Revision: https://reviews.llvm.org/D35816

Change-Id: Id9411ff7bbb845463d289ba2ae97737a1ee7cc39
llvm-svn: 316295
2017-10-22 17:59:38 +00:00
Momchil Velikov d6a4ab3d49 [ARM] Dynamic stack alignment for 16-bit Thumb
This patch implements dynamic stack (re-)alignment for 16-bit Thumb. When
targeting processors that support only the 16-bit Thumb instruction set, the
compiler ignores the alignment attributes of automatic variables and may
silently generate incorrect code.
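
A minimal sketch (names and sizes invented) of a function that now gets its
stack dynamically realigned on a Thumb1-only core:

    declare void @consume(i8*)

    ; The 32-byte alignment of this alloca exceeds the guaranteed stack
    ; alignment, so the prologue must realign the stack pointer dynamically.
    define void @use_aligned_buffer() {
      %buf = alloca [64 x i8], align 32
      %p = getelementptr inbounds [64 x i8], [64 x i8]* %buf, i32 0, i32 0
      call void @consume(i8* %p)
      ret void
    }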

Differential revision: https://reviews.llvm.org/D38143

llvm-svn: 316289
2017-10-22 11:56:35 +00:00
Guy Blank 92d5ce3bd4 [X86] Add a pass to convert instruction chains between domains.
The pass scans the function to find instruction chains that define
registers in the same domain (closures).
It then calculates the cost of converting the closure to another domain.
If found profitable, the instructions are converted to instructions in
the other domain and the register classes are changed accordingly.

This commit adds the pass infrastructure and a simple conversion from
the GPR domain to the Mask domain.

Differential Revision: https://reviews.llvm.org/D37251

Change-Id: Ic2cf1d76598110401168326d411128ae2580a604
llvm-svn: 316288
2017-10-22 11:43:08 +00:00
Aaron Ballman fc02869c96 Reverting r316270 due to failing build bots.
http://lab.llvm.org:8011/builders/clang-x86_64-linux-selfhost-modules-2/builds/12899
http://lab.llvm.org:8011/builders/clang-x86-windows-msvc2015/builds/7951

llvm-svn: 316276
2017-10-21 20:38:15 +00:00
Simon Pilgrim 3cb024490a [X86][SSE] Add extractps/pextrd equivalence to domain tables
Differential Revision: https://reviews.llvm.org/D39135

llvm-svn: 316274
2017-10-21 20:19:48 +00:00
Fangrui Song c7b749bd06 [PPC CodeGen] Fix the bitreverse.i64 intrinsic.
Summary: The two 32-bit words were swapped.
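
For context, a minimal use of the intrinsic being fixed (sketch only):

    declare i64 @llvm.bitreverse.i64(i64)

    ; Per the summary, the two 32-bit words of the lowered result were being
    ; combined in the wrong order on PPC before this fix.
    define i64 @brev(i64 %x) {
      %r = call i64 @llvm.bitreverse.i64(i64 %x)
      ret i64 %r
    }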

Subscribers: nemanjai, kbarton

Differential Revision: https://reviews.llvm.org/D38705

llvm-svn: 316270
2017-10-21 16:59:40 +00:00
Simon Pilgrim 7025b07828 [X86][SSE] Add missing extractps scheduling test
llvm-svn: 316262
2017-10-21 14:35:09 +00:00
Craig Topper fcf27188d7 [X86] Do not generate __multi3 for mul i128 on X86
Summary: __multi3 is not available on x86 (32-bit). Setting lib call name for MULI_128 to nullptr forces DAGTypeLegalizer::ExpandIntRes_MUL to generate instructions for 128-bit multiply instead of a call to an undefined function.  This fixes PR20871 though it may be worth looking at why licm and indvars combine to generate 65-bit multiplies in that test.
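
Sketch of the IR shape that previously produced a call to the missing __multi3
on 32-bit x86:

    ; On i686 this is now expanded inline by DAGTypeLegalizer::ExpandIntRes_MUL
    ; instead of emitting a call to __multi3.
    define i128 @mul128(i128 %a, i128 %b) {
      %r = mul i128 %a, %b
      ret i128 %r
    }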

Patch by Riyaz V Puthiyapurayil

Reviewers: craig.topper, schweitz

Reviewed By: craig.topper, schweitz

Subscribers: RKSimon, llvm-commits

Differential Revision: https://reviews.llvm.org/D38668

llvm-svn: 316254
2017-10-21 02:26:00 +00:00
Krzysztof Parzyszek 9d19c8cac9 [Packetizer] Add function to check for aliasing between instructions
llvm-svn: 316243
2017-10-20 22:08:40 +00:00
Krzysztof Parzyszek 022922b31a [Hexagon] Report error instead of crashing on wrong inline-asm constraints
llvm-svn: 316236
2017-10-20 20:24:44 +00:00
Krzysztof Parzyszek 64e5d7d3ae [Hexagon] Reorganize and update instruction patterns
llvm-svn: 316228
2017-10-20 19:33:12 +00:00
Simon Pilgrim 1311ff1340 [X86][SSE] Add missing _mm_extract_ps fast-isel test
llvm-svn: 316226
2017-10-20 19:29:01 +00:00
Sanjay Patel bb94161fb7 [x86] avoid FileCheck assert duplication with retl/retq regex; NFC
This was suggested in PR35003:
https://bugs.llvm.org/show_bug.cgi?id=35003

32-bit checks may be identical to 64-bit (if we avoid those pesky scalar params!).

I'll check in the script change shortly assuming this doesn't anger any bots.

llvm-svn: 316223
2017-10-20 18:35:32 +00:00
Dave Lee f9b72327b0 Make x86 __ehhandler comdat if parent function is
Summary:
This change comes from using lld for i686-windows-msvc. Before this change, lld
emits an error of:

    error: relocation against symbol in discarded section: .xdata

It's possible that this could be addressed in lld, but I think this change is
reasonable on its own.

At a high level, this is being generated:

    A (.text comdat) -> B (.text) -> C (.xdata comdat)

Where A is a C++ inline function, which references B, an exception handler
thunk, which references C, the exception handling info.

With this structure, lld will error when applying relocations to B if the C it
references has been discarded (some other C has been selected).

This change checks if A is comdat, and if so places the exception registration
thunk (B) in the comdat group of A (and B).

It appears that MSVC makes the __ehhandler function comdat.

Is it possible that duplicate thunks are being emitted into the final binary
with other linkers, or are they stripping the unused thunks?

Reviewers: rnk, majnemer, compnerd, smeenai

Reviewed By: rnk, compnerd

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D38940

llvm-svn: 316219
2017-10-20 17:04:43 +00:00
Krzysztof Parzyszek 3818aeaeb9 [Hexagon] Allow redefinition with immediates for hw loop conversion
Normally, if the registers holding the induction variable's bounds
are redefined inside of the loop's body, the loop cannot be converted
to a hardware loop. However, if the redefining instruction is actually
loading an immediate value into the register, this conversion is both
possible and legal (since the immediate itself will be used in the
loop setup in the preheader).

llvm-svn: 316218
2017-10-20 16:56:33 +00:00
Simon Pilgrim b6b617b7d8 [X86] Check all CPU target names.
We ignore the 32-bit/64-bit triple but I've tried to use i686 triples for CPUs that don't support x86_64

llvm-svn: 316217
2017-10-20 16:55:51 +00:00
Zvi Rackover e95709d54a X86 Tests: Add tests for vector permutes with variable indices. NFC.
Basic tests which are the equivalent of single-source shufflevector with variable mask.

llvm-svn: 316216
2017-10-20 15:32:14 +00:00
Aleksandar Beserminji 143572984d Revert "[mips] Reordering callseq* nodes to be linear"
This reverts commit r314507, because the original patch is causing test
failures.

llvm-svn: 316215
2017-10-20 14:35:41 +00:00
Eugene Leviant 27b226fb65 [ARM] Use post-RA MI scheduler when +use-misched is set
Differential revision: https://reviews.llvm.org/D39100

llvm-svn: 316214
2017-10-20 14:29:17 +00:00