Commit Graph

36147 Commits

Author SHA1 Message Date
Simon Pilgrim 08ba012973 [X86][AVX] Lower shuffles as repeated lane shuffles then lane-crossing shuffles
This patch attempts to represent a shuffle as a repeating shuffle (recognisable by is128BitLaneRepeatedShuffleMask) with the source input(s) in their original lanes, followed by a single permutation of the 128-bit lanes to their final destinations.

On AVX2 we can additionally attempt to match using 64-bit sub-lane permutation. AVX2 can also now match a similar 'broadcasted' repeating shuffle.

This patch has several benefits:

 * Avoids prematurely matching with lowerVectorShuffleByMerging128BitLanes, which can require both inputs to have their lanes permuted before shuffling.
 * Can replace PERMPS/PERMD instructions - although these are useful for cross-lane unary shuffling, they require their shuffle mask to be pre-loaded (and increase register pressure).
 * Matching the repeating shuffle makes use of a lot of existing shuffle lowering.

There is an outstanding minor AVX1 regression (combine_unneeded_subvector1 in vector-shuffle-combining.ll) where a previously 128-bit shuffle + subvector splat is converted to a subvector splat + (2 instruction) 256-bit shuffle; I intend to fix this in a follow-up patch.

Differential Revision: http://reviews.llvm.org/D16537

llvm-svn: 260834
2016-02-13 21:54:04 +00:00
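As an illustration of the decomposition described in the commit above, here is a minimal standalone sketch (not the in-tree lowering code, and single-source only; the real lowering also handles two inputs, broadcasts, and 64-bit sub-lanes). It checks whether a mask is one sub-lane pattern repeated in every 128-bit lane, followed by a permutation of whole lanes:

  // Minimal sketch: -1 means an undef mask element, as in LLVM shuffle masks.
  // EltsPerLane is 4 for v8f32 on AVX (128 bits / 32-bit elements).
  #include <cstdio>
  #include <vector>

  static bool decomposeRepeatedLaneShuffle(const std::vector<int> &Mask,
                                           int EltsPerLane,
                                           std::vector<int> &RepeatedMask,
                                           std::vector<int> &LanePermute) {
    int NumLanes = (int)Mask.size() / EltsPerLane;
    RepeatedMask.assign(EltsPerLane, -1);
    LanePermute.assign(NumLanes, -1);
    for (int Lane = 0; Lane < NumLanes; ++Lane) {
      for (int Elt = 0; Elt < EltsPerLane; ++Elt) {
        int M = Mask[Lane * EltsPerLane + Elt];
        if (M < 0)
          continue; // undef imposes no constraint
        int SrcLane = M / EltsPerLane, SrcElt = M % EltsPerLane;
        // Every defined element of a destination lane must read one source lane.
        if (LanePermute[Lane] < 0)
          LanePermute[Lane] = SrcLane;
        else if (LanePermute[Lane] != SrcLane)
          return false;
        // The in-lane pattern must be identical in every destination lane.
        if (RepeatedMask[Elt] < 0)
          RepeatedMask[Elt] = SrcElt;
        else if (RepeatedMask[Elt] != SrcElt)
          return false;
      }
    }
    return true;
  }

  int main() {
    // v8f32 mask <4,5,6,7,0,1,2,3>: identity within each lane, lanes swapped.
    std::vector<int> Mask = {4, 5, 6, 7, 0, 1, 2, 3}, Rep, Perm;
    if (decomposeRepeatedLaneShuffle(Mask, 4, Rep, Perm))
      std::printf("repeated <%d %d %d %d>, lane permute <%d %d>\n",
                  Rep[0], Rep[1], Rep[2], Rep[3], Perm[0], Perm[1]);
  }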
Craig Topper f730a6bedc Remove Proc feature flags for X86 processors that are used to inherit features from one processor to another. This exposed extra features to the -mattr command line that we shouldn't. Replace with just inherited listconcats.
llvm-svn: 260832
2016-02-13 21:35:37 +00:00
Sanjay Patel e9bf993cee [x86-64] allow mfence even with -mno-sse (PR23203)
As shown in:
https://llvm.org/bugs/show_bug.cgi?id=23203
...we currently die because lowering believes that mfence is allowed without SSE2 on x86-64,
but the instruction def doesn't know that.

I don't know if allowing mfence without SSE is right, but if not, at least now it's consistently wrong. :)

Differential Revision: http://reviews.llvm.org/D17219

llvm-svn: 260828
2016-02-13 17:26:29 +00:00
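A hypothetical reproducer in the spirit of PR23203 (the exact test in the bug report may differ): on x86-64 a seq_cst fence is lowered to mfence, so compiling something like the following with -mno-sse used to hit the inconsistency between lowering and the instruction definition.

  // Build for x86-64 with SSE disabled, e.g.:
  //   clang++ -O2 -mno-sse fence.cpp -S -o -
  #include <atomic>

  void full_barrier() {
    // Becomes `fence seq_cst` in IR; the x86-64 lowering selects mfence.
    std::atomic_thread_fence(std::memory_order_seq_cst);
  }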
Krzysztof Parzyszek 280a50ebb9 [Hexagon] Replace use of "std::map::emplace" with "insert"
GCC 4.7.2-4 does not seem to provide "emplace" in its implementation of std::map.
This should fix the build failure on polly-amd64-linux.

llvm-svn: 260816
2016-02-13 14:06:01 +00:00
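For reference, a minimal sketch of the substitution (the actual change is in the Hexagon backend): insert() with an explicit pair has the same effect here and is available in GCC 4.7's libstdc++.

  #include <cstdio>
  #include <map>

  int main() {
    std::map<int, const char *> Names;
    // Not available in GCC 4.7's libstdc++:
    // Names.emplace(1, "one");
    // Portable replacement with the same effect for these element types:
    Names.insert(std::make_pair(1, "one"));
    std::printf("%s\n", Names[1]);
  }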
NAKAMURA Takumi c2cc8706dd HexagonFrameLowering.cpp: Appease msc18 by giving SlotInfo() an explicit constructor instead of member initializers.
llvm-svn: 260812
2016-02-13 07:29:49 +00:00
Matt Arsenault f2ddbf00ed AMDGPU: Prepare for reducing private element size.
Tests for the new scalarize-all-private-access options will be
included with a future commit.

The only functional change is to make the split/scalarize behavior
for private access of > 4 element vectors to be consistent
with the flat/global handling. This makes the spilling worse
in the two changed tests.

llvm-svn: 260804
2016-02-13 04:18:53 +00:00
Tom Stellard 4409051d00 AMDGPU/SI: Add llvm.amdgcn.mov.dpp intrinsic
This intrinsic will be used to expose dpp functionality to higher-level
languages. It will map to the dpp version of v_mov_b32.

llvm-svn: 260792
2016-02-13 02:09:49 +00:00
Matt Arsenault d2759212b8 AMDGPU: Cleanup includes and random macros
llvm-svn: 260784
2016-02-13 01:24:08 +00:00
Matt Arsenault ce56a0ef54 AMDGPU: Add intrinsics for sin/cos
These provide direct access to the hardware instructions without the
argument unit conversion that the llvm.sin/llvm.cos lowering requires.

llvm-svn: 260782
2016-02-13 01:19:56 +00:00
Matt Arsenault 79963e80b8 AMDGPU: Rename intrinsic to better match instruction name
Also fixes missing f32 test.

llvm-svn: 260780
2016-02-13 01:03:00 +00:00
Tom Stellard 607051640c AMDGPU/SI: Add instruction defs for VOP1 DPP instructions
Reviewers: nhaustov, cfang, arsenm

Subscribers: arsenm, llvm-commits

Differential Revision: http://reviews.llvm.org/D17159

llvm-svn: 260774
2016-02-13 00:51:31 +00:00
Matt Arsenault 16f48d7c55 AMDGPU: Fix broken condition causing warning
llvm-svn: 260773
2016-02-13 00:36:10 +00:00
Alexey Samsonov 7217d27ee6 Fix Windows buildbot breakage.
llvm-svn: 260766
2016-02-12 23:51:06 +00:00
Tom Stellard bc4497b13c AMDGPU/SI: Detect uniform branches and emit s_cbranch instructions
Reviewers: arsenm

Subscribers: mareko, MatzeB, qcolombet, arsenm, llvm-commits

Differential Revision: http://reviews.llvm.org/D16603

llvm-svn: 260765
2016-02-12 23:45:29 +00:00
Yunzhong Gao 0de36ec169 Disable the vzeroupper insertion pass on PS4.
Differential Revision: http://reviews.llvm.org/D16837

llvm-svn: 260764
2016-02-12 23:37:57 +00:00
Derek Schuff 51699a83cd [WebAssembly] Report more meaningful error messages for some unsupported
ops.

Computed gotos and RETURNADDR may never be supported; we can do
FRAMEADDR in the future.

llvm-svn: 260759
2016-02-12 22:56:03 +00:00
Krzysztof Parzyszek 7793ddb043 [Hexagon] Optimize stack slot spills
Replace spills to memory with spills to registers, if possible. This
applies mostly to predicate registers (both scalar and vector), since
they are very limited in number. A spill of a predicate register may
happen even if there is a general-purpose register available. In cases
like this the stack spill/reload may be eliminated completely.

This optimization will consider all stack objects, regardless of where
they came from and try to match the live range of the stack slot with
a dead range of a register from an appropriate register class.

llvm-svn: 260758
2016-02-12 22:53:35 +00:00
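A simplified standalone illustration of the matching described above (not the Hexagon pass itself): a spill slot that is live over [Start, End) can instead use a register whose own live ranges leave that interval free.

  #include <cstdio>
  #include <string>
  #include <vector>

  struct Range { unsigned Start, End; };     // half-open [Start, End)

  struct Reg {
    std::string Name;
    std::vector<Range> LiveRanges;
  };

  static bool overlaps(const Range &A, const Range &B) {
    return A.Start < B.End && B.Start < A.End;
  }

  // Return a register from RC that is dead across the whole slot live range,
  // or nullptr if every candidate is occupied somewhere inside it.
  static const Reg *findRegForSlot(const Range &SlotRange,
                                   const std::vector<Reg> &RC) {
    for (const Reg &R : RC) {
      bool Free = true;
      for (const Range &LR : R.LiveRanges)
        if (overlaps(SlotRange, LR)) { Free = false; break; }
      if (Free)
        return &R;
    }
    return nullptr;
  }

  int main() {
    std::vector<Reg> GPRs = {
        {"r8", {{0, 20}, {40, 50}}},   // live across the start of the spill
        {"r9", {{0, 5}}},              // dead after instruction 5
    };
    Range Spill = {12, 30};            // spilled at 12, reloaded at 30
    if (const Reg *R = findRegForSlot(Spill, GPRs))
      std::printf("spill to %s instead of the stack\n", R->Name.c_str());
  }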
Krzysztof Parzyszek abb17e5f41 [Hexagon] Mark HVX registers as volatile
llvm-svn: 260753
2016-02-12 22:26:44 +00:00
Derek Schuff 3114fc14e0 [WebAssembly] Update test expectations after r260737
llvm-svn: 260750
2016-02-12 22:05:08 +00:00
Krzysztof Parzyszek 79a886be06 [Hexagon] Recognize more cases in copyPhysReg and stack slot load/store
llvm-svn: 260748
2016-02-12 21:56:41 +00:00
Dan Gohman a6771b37f8 [WebAssembly] Fix byval for empty types.
llvm-svn: 260740
2016-02-12 21:30:18 +00:00
Chad Rosier 026f15e687 [AArch64] Enable post-RA MI scheduler for Kryo.
This should have landed in r260686.

llvm-svn: 260739
2016-02-12 21:27:33 +00:00
Dan Gohman a187ab2aeb [WebAssembly] Fix insertion of a BLOCK in a loop header that also ends a BLOCK.
llvm-svn: 260737
2016-02-12 21:19:25 +00:00
Krzysztof Parzyszek feb65a3f8b [Hexagon] Recognize more instructions in isLoadFromStackSlot/isStoreToStackSlot
llvm-svn: 260725
2016-02-12 20:54:15 +00:00
Krzysztof Parzyszek fd02aad8fd [Hexagon] Add utility functions to detect sign- and zero-extending loads
llvm-svn: 260698
2016-02-12 18:37:23 +00:00
Krzysztof Parzyszek 996ad1fa00 [Hexagon] Replace expansion of spill pseudo-instructions in frame lowering
Rewrite the code to handle all pseudo-instructions in a single pass.

This temporarily reverts spill slot optimization that used general-
purpose registers to hold values of spilled predicate registers.

llvm-svn: 260696
2016-02-12 18:19:53 +00:00
Tom Stellard 46937ca4e7 [AMDGPU] Assembler: Swap operands of flat_store instructions to match AMD assembler
Historically, the AMD internal sp3 assembler has used the flat_store* addr, data
operand order. To match existing code and enable reuse, change the LLVM
definitions to match. Also update the MC and CodeGen tests.

Differential Revision: http://reviews.llvm.org/D16927

Patch by: Nikolay Haustov

llvm-svn: 260694
2016-02-12 17:57:54 +00:00
Changpeng Fang e07f1aa8fa AMDGPU/SI: Annotate Loops with Constant Condition in SIAnnotateControlFlow pass.
Summary:
  It is possible for the loop condition to be a boolean constant (an infinite
loop, for example), so we should handle a constant condition when annotating a
loop. This patch adds that functionality.

Reviewers: tstellarAMD, arsenm

Subscribers: llvm-commits, arsenm

Differential Revision: http://reviews.llvm.org/D15093

llvm-svn: 260692
2016-02-12 17:11:04 +00:00
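A sketch in C++ of the kind of source that produces such a loop (the pass itself works on the IR control flow): the branch controlling the loop is the constant true, so there is no computed exit condition to annotate.

  extern void do_work();

  [[noreturn]] void spin_forever() {
    // The loop back edge is controlled by the i1 constant `true`.
    while (true)
      do_work();
  }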
Krzysztof Parzyszek 7ce3dbcb57 [Hexagon] Remove HexagonExpandPredSpillCode pass
This code is dead. The expansion is now done in HexagonFrameLowering.

llvm-svn: 260691
2016-02-12 17:09:58 +00:00
Krzysztof Parzyszek 7d5b4db7f9 [Hexagon] Eliminate pseudo instructions for circ/brev loads and stores
We can generate the actual instructions from the intrinsics without the
need for pseudo-instructions. Also, since the intrinsics have a side-
effect in the form of a store, attempt to optimize away loads from the
store location.

llvm-svn: 260690
2016-02-12 17:01:51 +00:00
Geoff Berry c25d3bd238 [AArch64] Reduce number of callee-save save/restores.
Summary:
Before this change, callee-save registers would be rounded up to even
pairs of GPRs and FPRs.  This change eliminates these extra padding
load/stores, though it does keep the stack allocation the same size
unless both the GPR and FPR sets have an odd size, in which case one
full pair stack slot (16 bytes) is saved.

This optimization cannot currently be done for MachO targets since they
rely on a fast-path .debug_frame equivalent that can only encode
callee-save registers as pairs.

Reviewers: t.p.northover, rengolin, mcrosier, jmolloy

Subscribers: aemerson, rengolin, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D17000

llvm-svn: 260689
2016-02-12 16:31:41 +00:00
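Illustrative arithmetic only, assuming 8-byte register slots saved in 16-byte pairs (the real computation lives in the AArch64 frame lowering): the old scheme padded the GPR and FPR sets to even sizes independently, while the new scheme only needs the combined count to be even.

  #include <cstdio>

  static unsigned roundUpToEven(unsigned N) { return (N + 1) & ~1u; }

  static unsigned oldSaveAreaBytes(unsigned NumGPR, unsigned NumFPR) {
    // Pad the GPR set and the FPR set to even sizes independently.
    return (roundUpToEven(NumGPR) + roundUpToEven(NumFPR)) * 8;
  }

  static unsigned newSaveAreaBytes(unsigned NumGPR, unsigned NumFPR) {
    // Only the combined set needs to fill a whole number of pairs.
    return roundUpToEven(NumGPR + NumFPR) * 8;
  }

  int main() {
    // Both sets odd: one full 16-byte pair slot is saved (48 -> 32 bytes).
    std::printf("3 GPR + 1 FPR: old %u, new %u\n",
                oldSaveAreaBytes(3, 1), newSaveAreaBytes(3, 1));
    // Only one set odd: the allocation stays the same size, but the padding
    // register is no longer stored and reloaded.
    std::printf("3 GPR + 2 FPR: old %u, new %u\n",
                oldSaveAreaBytes(3, 2), newSaveAreaBytes(3, 2));
  }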
Krzysztof Parzyszek bdb04d9032 [Hexagon] Handle out-of-range offsets in eliminateFrameIndex
Create a virtual register that will hold the actual address and use it
with an offset of 0 in place of the original FI.

llvm-svn: 260688
2016-02-12 16:27:23 +00:00
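An illustrative sketch of the decision involved (the immediate width below is hypothetical, not the actual Hexagon encoding): when the frame offset does not fit the instruction's immediate field, materialize the address into a register and access it with an offset of 0.

  #include <cstdint>
  #include <cstdio>

  static bool fitsInSignedImm(int64_t Offset, unsigned Bits) {
    int64_t Min = -(int64_t(1) << (Bits - 1));
    int64_t Max = (int64_t(1) << (Bits - 1)) - 1;
    return Offset >= Min && Offset <= Max;
  }

  int main() {
    const unsigned ImmBits = 11;   // hypothetical signed immediate width
    int64_t FrameOffset = 5000;    // too large for an 11-bit signed field
    if (fitsInSignedImm(FrameOffset, ImmBits))
      std::printf("access [fp + %lld]\n", (long long)FrameOffset);
    else
      std::printf("emit: vreg = fp + %lld; access [vreg + 0]\n",
                  (long long)FrameOffset);
  }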
Chad Rosier cd2be7f084 [AArch64] Add support for Qualcomm Kryo CPU.
Machine model description by Dave Estes <cestes@codeaurora.org>.

llvm-svn: 260686
2016-02-12 15:51:51 +00:00
Jun Bum Lim 397eb7b0b3 [AArch64] Merge two adjacent str WZR into str XZR
Summary:
This change merges adjacent 32 bit zero stores into a 64 bit zero store.
e.g.,
  str wzr, [x0]
  str wzr, [x0, #4]
becomes
  str xzr, [x0]

Therefore, four adjacent 32 bit zero stores will be a single stp.
e.g.,
  str wzr, [x0]
  str wzr, [x0, #4]
  str wzr, [x0, #8]
  str wzr, [x0, #12]
becomes
  stp xzr, xzr, [x0]

Reviewers: mcrosier, jmolloy, gberry, t.p.northover

Subscribers: aemerson, rengolin, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D16933

llvm-svn: 260682
2016-02-12 15:25:39 +00:00
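For reference, one kind of source that can produce the pattern above; whether the merge actually fires depends on alignment and the surrounding code.

  // Zeroing four adjacent 32-bit values: previously four `str wzr` stores,
  // which this change may now merge into a single `stp xzr, xzr, [x0]`.
  void zero4(unsigned *p) {
    p[0] = 0;
    p[1] = 0;
    p[2] = 0;
    p[3] = 0;
  }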
Krzysztof Parzyszek e59964377c [Hexagon] Specify vector alignment in DataLayout string
The DataLayout can calculate alignment of vectors based on the alignment
of the element type and the number of elements. In fact, it is the product
of these two values. The problem is that for vectors of N x i1, this will
return the alignment of N bytes, since the alignment of i1 is 8 bits. The
vector types of vNi1 should be aligned to N bits instead. Provide explicit
alignment for HVX vectors to avoid such complications.

llvm-svn: 260678
2016-02-12 14:47:38 +00:00
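A small worked example of the problem, assuming the product rule described above (v512i1 is used here purely as an illustration of an N x i1 type):

  #include <cstdio>

  int main() {
    // Product rule: vector alignment = element alignment (bytes) * element count.
    // i1 is aligned to 8 bits, i.e. 1 byte, so for v512i1 this gives 512 bytes,
    // while the intended alignment is 512 bits, i.e. 64 bytes.
    const unsigned NumElements = 512;
    const unsigned I1AlignBytes = 1;
    unsigned productRule = NumElements * I1AlignBytes;  // 512 bytes
    unsigned intended = NumElements / 8;                // N bits -> N/8 bytes
    std::printf("product rule: %u bytes, intended: %u bytes\n",
                productRule, intended);
  }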
Benjamin Kramer ac5e36f52e Fix uninitialized memory read.
Found by msan.

llvm-svn: 260676
2016-02-12 12:37:21 +00:00
Matt Arsenault 296b849163 AMDGPU: Set flat_scratch from flat_scratch_init reg
This was hardcoded to the static private size, but that would miss the
offset and additional size once we eventually have dynamic sizing.

Also stops always initializing flat_scratch even when unused.

In the future we should stop emitting this unless flat instructions
are used to access private memory. For example this will initialize
it almost always on VI because flat is used for global access.

llvm-svn: 260658
2016-02-12 06:31:30 +00:00
Mehdi Amini f71d653879 C API: Remove LLVMGetDataLayout that was deprecated in 3.7
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260657
2016-02-12 06:22:00 +00:00
Matt Arsenault 24ee0785dd AMDGPU: Set element_size in private resource descriptor
Introduce a subtarget feature for this, and leave the default with
the current behavior which assumes up to 16-byte loads/stores can
be used. The field also seems to have the ability to be set to 2 bytes,
but I'm not sure what that would be used for.

llvm-svn: 260651
2016-02-12 02:40:47 +00:00
Matt Arsenault 16f7bcb661 AMDGPU: Fix mishandling alignment when scalarizing vector loads/stores
I don't think this was causing any real problems, so I'm not sure
how to test for this.

llvm-svn: 260646
2016-02-12 02:22:21 +00:00
Matt Arsenault 55d49cfe2d AMDGPU: Initialize SILowerControlFlow
llvm-svn: 260645
2016-02-12 02:16:10 +00:00
Matt Arsenault 806dd0a532 AMDGPU: Remove trailing whitespace
llvm-svn: 260644
2016-02-12 02:16:07 +00:00
Sanjay Patel ac42fecf74 [x86] simplify getZeroVector() ; NFCI
Let DAG.getConstant() handle the splatting; there's no need
to repeat that logic here.

See also:
http://reviews.llvm.org/rL258833
http://reviews.llvm.org/rL260582

llvm-svn: 260609
2016-02-11 22:17:04 +00:00
Quentin Colombet 1cb8fac171 [AArch64] Implements the lowering of formal arguments for GlobalISel.
This is just a trivial implementation:
- Support only arguments passed in registers.
- Support only "plain" arguments, i.e., no sext/zext attribute.

At this point, it is possible to play with the IRTranslator on AArch64:
llc -mtriple arm64-<vendor>-<os> -print-machineinstrs <input.ll> -o - -global-isel

For now, we only support the translation of programs with adds and returns.

Follow-up patches are on their way to add a test case (the MIRParser is
not ready as it is).

llvm-svn: 260600
2016-02-11 21:45:08 +00:00
Tom Stellard 1397d49ef5 AMDGPU/SI: Make sure MIMG descriptors and samplers stay in SGPRs
Summary:
It's possible to have resource descriptors and samplers stored in
VGPRs, either by a VMEM instruction or, in the case of samplers, by
floating-point calculations.  When this happens, we need to use
v_readfirstlane to copy these values back to SGPRs.

Reviewers: mareko, arsenm

Subscribers: arsenm, llvm-commits

Differential Revision: http://reviews.llvm.org/D17102

llvm-svn: 260599
2016-02-11 21:45:07 +00:00
Tom Stellard fa8f204c5b AMDGPU/SI: When splitting SMRD instructions, add its users to VALU worklist
Summary:
When we split SMRD instructions into two MUBUFs we were adding the users
of the newly created MUBUFs to the VALU worklist.  However, the only
user these instructions had was the REG_SEQUENCE that was inserted
by splitSMRD when the original SMRD instruction was split.

We need to make sure to add the users of the original SMRD to the VALU
worklist before it is split.

I have a test case, but it requires one other bug fix, so it will be
added in a later commit.

Reviewers: mareko, arsenm

Subscribers: arsenm, llvm-commits

Differential Revision: http://reviews.llvm.org/D17101

llvm-svn: 260588
2016-02-11 21:14:34 +00:00
Derek Schuff 3f0632958b [WebAssembly] Reformat WebAssemblyFrameLowering and WebAssemblyISelLowering
Reviewers: sunfish, jfb

Subscribers: jfb, dschuff

Differential Revision: http://reviews.llvm.org/D17156

llvm-svn: 260585
2016-02-11 20:57:09 +00:00
Quentin Colombet 5cf7b415cc [AArch64] Trivial implementation of lower return for the IRTranslator.
llvm-svn: 260574
2016-02-11 19:45:27 +00:00
Kevin B. Smith 6a83350bee [X86] New pass to change byte and word instructions to zero-extending versions.
Differential Revision: http://reviews.llvm.org/D17032

llvm-svn: 260572
2016-02-11 19:43:04 +00:00
Quentin Colombet d96f49543d [AArch64] Plug the beginning of the GlobalISel pipeline.
llvm-svn: 260569
2016-02-11 19:35:06 +00:00