Commit Graph

43101 Commits

Author SHA1 Message Date
Nirav Dave 4dcad5dc6b Add DAG argument to canMergeStoresTo NFC.
llvm-svn: 307583
2017-07-10 20:25:54 +00:00
Krzysztof Parzyszek f85dd9f4e5 [Hexagon] Convert typed ISD opcodes to generic ones, NFC
llvm-svn: 307582
2017-07-10 20:16:44 +00:00
Krzysztof Parzyszek 40df124eda [Hexagon] Remove unused ISD opcodes, NFC
llvm-svn: 307580
2017-07-10 20:13:44 +00:00
Matt Arsenault 9cff06f37b AMDGPU: Allow SIShrinkInstructions to fold FrameIndexes
llvm-svn: 307576
2017-07-10 20:04:35 +00:00
Matt Arsenault 6c29c5acfe AMDGPU: Allow SIShrinkInstructions to work in non-SSA
Immediates can be folded as long as the immediate is a vreg.

Also undo commuting instructions if it didn't fold an immediate.

llvm-svn: 307575
2017-07-10 19:53:57 +00:00
Matt Arsenault fda5318204 AMDGPU: Remove unnecessary check for constant operands
An instruction that has an immediate operand can't reach
this point. This is only called for a freshly shrunk instruction,
which previously couldn't have had a literal constant operand.
This was also not conservative enough, since it would also have
had to filter other constant-like inputs like frame indexes.

llvm-svn: 307574
2017-07-10 19:33:38 +00:00
Konstantin Zhuravlyov a46241909a AMDGPU: Do not test for SI in getIsaVersion
SI is being tested by isa version in the first two if statements of the function.

llvm-svn: 307573
2017-07-10 19:24:05 +00:00
Krzysztof Parzyszek df4a05d6fb [Hexagon] Fix check for HMOTF_ConstExtend operand flag
This fixes https://llvm.org/PR33718.

llvm-svn: 307566
2017-07-10 18:38:52 +00:00
Krzysztof Parzyszek 0ac065f318 [Hexagon] Handle Hexagon-specific machine operand target flags in MIR
llvm-svn: 307564
2017-07-10 18:31:02 +00:00
Tony Jiang acefbcf38e [PPC CodeGen] Expand the bitreverse.i64 intrinsic.
Differential Revision: https://reviews.llvm.org/D34908
Fix PR: https://bugs.llvm.org/show_bug.cgi?id=33093

llvm-svn: 307563
2017-07-10 18:11:23 +00:00
Lei Huang 168d14b143 [PowerPC] Reduce register pressure by not materializing a constant just for use as an index register for X-Form loads/stores.
For this example:
float test (int *arr) {
    return arr[2];
}

We currently generate the following code:
  li r4, 8
  lxsiwax f0, r3, r4
  xscvsxdsp f1, f0

With this patch, we will now generate:
  addi r3, r3, 8
  lxsiwax f0, 0, r3
  xscvsxdsp f1, f0

Originally reported in: https://bugs.llvm.org/show_bug.cgi?id=27204
Differential Revision: https://reviews.llvm.org/D35027

llvm-svn: 307553
2017-07-10 16:44:45 +00:00
Andrew V. Tischenko ae9d6db769 [X86] Model 256-bit AVX instructions in the AMD Jaguar scheduler Part-1 (PR28573).
The new version of the model is definitely faster.

Differential Revision:
https://reviews.llvm.org/D35198

llvm-svn: 307552
2017-07-10 16:36:03 +00:00
Hiroshi Inoue a86c920b1e fix typos in comments and error messages; NFC
llvm-svn: 307533
2017-07-10 12:44:25 +00:00
Javed Absar fb3210aa05 [ARM] Tidy up ARMBaseRegisterInfo implementation. NFC
Clean up ARMBaseRegisterInfo implementation a bit.
Differential Revision: https://reviews.llvm.org/D35116

llvm-svn: 307531
2017-07-10 10:42:55 +00:00
Gadi Haber f4d154c089 This patch completely replaces the scheduling information for the SandyBridge architecture target by modifying the file X86SchedSandyBridge.td located under the X86 Target.
The SandyBridge architects have provided us with more accurate information about each instruction's latency, number of uOPs and used ports, and I used it to replace the existing estimated SNB instruction scheduling and to add missing scheduling information.

Please note that the patch extensively affects the X86 MC instr scheduling for SNB.

Also note that this patch will be followed by additional patches for the remaining target architectures HSW, IVB, BDW, SKL and SKX.

The updated and extended information about each instruction includes the following details:
• the static latency of the instruction
• the number of uOPs the instruction consists of
• all ports used by the instruction's uOPs

For example, the following code dictates that the instructions ADC64mr, ADC8mr, SBB64mr and SBB8mr have a static latency of 9 cycles. Each of these instructions is decoded into 6 micro operations which use port 4, ports 2 or 3, port 0, and ports 0, 1 or 5:

def SBWriteResGroup94 : SchedWriteRes<[SBPort4,SBPort23,SBPort0,SBPort015]> {
  let Latency = 9;
  let NumMicroOps = 6;
  let ResourceCycles = [1,2,2,1];
}
def: InstRW<[SBWriteResGroup94], (instregex "ADC64mr")>;
def: InstRW<[SBWriteResGroup94], (instregex "ADC8mr")>;
def: InstRW<[SBWriteResGroup94], (instregex "SBB64mr")>;
def: InstRW<[SBWriteResGroup94], (instregex "SBB8mr")>;

Note that apart from the header, most of the X86SchedSandyBridge.td file was generated by a script.

Reviewers: zvi, chandlerc, RKSimon, m_zuckerman, craig.topper, igorb

Differential Revision:  https://reviews.llvm.org/D35019#inline-304691

llvm-svn: 307529
2017-07-10 09:53:16 +00:00
Igor Breger d8b51e134e [GlobalISel][X86] Support G_LOAD/G_STORE i1.
Summary: Support G_LOAD/G_STORE i1.

Reviewers: zvi, guyblank

Reviewed By: guyblank

Subscribers: rovka, kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D35178

llvm-svn: 307527
2017-07-10 09:26:09 +00:00
Igor Breger d48c5e4855 [GlobalISel][X86] extend G_ZEXT support.
Summary:
Mark G_ZEXT/G_SEXT i1 to i8/i16, and i8 to i16, as legal.
Support G_ZEXT i1 to i8/i16 instruction selection (C++ code).
This patch is required to support G_LOAD/G_STORE i1.

Reviewers: zvi, guyblank

Reviewed By: guyblank

Subscribers: rovka, llvm-commits, kristof.beyls

Differential Revision: https://reviews.llvm.org/D35177

llvm-svn: 307526
2017-07-10 09:07:34 +00:00
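For orientation, here is a minimal, hypothetical C++ (IRBuilder) sketch of the kind of IR that reaches GlobalISel as a G_ZEXT from s1 to s8; the helper name and arguments are illustrative and not part of the patch:

  #include "llvm/IR/IRBuilder.h"

  // Builds a zext of an i1 comparison result to i8. At the MIR level this
  // becomes G_ZEXT s1 -> s8, one of the cases the patch above selects.
  llvm::Value *zextBoolToByte(llvm::IRBuilder<> &B, llvm::Value *A, llvm::Value *C) {
    llvm::Value *Bit = B.CreateICmpEQ(A, C);  // i1 result of the compare
    return B.CreateZExt(Bit, B.getInt8Ty());  // zero-extend the i1 to i8
  }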
Hiroshi Inoue 70b1af5921 fix formatting; NFC
llvm-svn: 307523
2017-07-10 06:32:52 +00:00
Simon Pilgrim 4050c77d33 [X86] Allow GHC calling convention to use YMM and ZMM registers
GHC 8.4 will know how to use YMM and ZMM registers for calls.

Submitted on behalf of @bgamari (Ben Gamari)

Differential Revision: https://reviews.llvm.org/D34854 

llvm-svn: 307504
2017-07-09 16:57:10 +00:00
Craig Topper fde4723ebe [IR] Add Type::isIntOrIntVectorTy(unsigned) similar to the existing isIntegerTy(unsigned), but also works for vectors.
llvm-svn: 307492
2017-07-09 07:04:03 +00:00
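As a quick illustration of the new API (a hypothetical demo, assuming the fixed-width VectorType::get(EltTy, NumElts) overload available at the time): isIntOrIntVectorTy(32) is expected to be true both for i32 and for <4 x i32>, mirroring isIntegerTy(32) but also covering vector types:

  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Type.h"

  // Returns true if both the scalar i32 type and a <4 x i32> vector type are
  // recognized as 32-bit integer (or integer-vector) types by the new overload.
  bool demoIsIntOrIntVector(llvm::LLVMContext &Ctx) {
    llvm::Type *I32 = llvm::Type::getInt32Ty(Ctx);
    llvm::Type *V4I32 = llvm::VectorType::get(I32, 4);
    return I32->isIntOrIntVectorTy(32) && V4I32->isIntOrIntVectorTy(32);
  }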
Craig Topper 95d2347ae1 [IR] Make use of Type::isPtrOrPtrVectorTy/isIntOrIntVectorTy/isFPOrFPVectorTy to shorten code. NFC
llvm-svn: 307491
2017-07-09 07:04:00 +00:00
Hiroshi Inoue 713b5ba2de fix trivial typos; NFC
sucessor -> successor 

llvm-svn: 307488
2017-07-09 05:54:44 +00:00
Simon Pilgrim d362d27c27 [AMDGPU] Fix -Wimplicit-fallthrough warning. NFCI.
llvm-svn: 307485
2017-07-08 19:50:03 +00:00
Simon Pilgrim 9e90152363 [AArch64] Fix -Wimplicit-fallthrough warnings. NFCI.
Add breaks - doesn't affect results, as both GPR and FPU check for 32/64-bit sizes, so they will still default to GenericOps in the same way.

llvm-svn: 307484
2017-07-08 19:28:24 +00:00
Simon Pilgrim e2d84d953e [ARM] Fix -Wimplicit-fallthrough warning. NFCI.
llvm-svn: 307480
2017-07-08 18:42:04 +00:00
Simon Pilgrim d053634d6f Fix -Wimplicit-fallthrough warning. NFCI.
llvm-svn: 307473
2017-07-08 15:26:26 +00:00
Sanjay Patel 18ee908ca2 [x86] add SBB optimization for SETBE (ule) condition code
x86 scalar select-of-constants (Cond ? C1 : C2) combining/lowering is a mess 
with missing optimizations. We handle some patterns, but miss logical variants.

To clean that up, we should convert all select-of-constants to logic/math and 
enhance the combining for the expected patterns from that. Selecting 0 or -1 
needs extra attention to produce the optimal code as shown here.

Attempt to verify that all of these IR forms are logically equivalent:
http://rise4fun.com/Alive/plxs

Earlier steps in this series:
rL306040
rL306072
rL307404 (D34652)

As acknowledged in the earlier review, there's a possibility that some Intel
uarch would prefer to produce an xor to clear the fake register operand with
sbb %eax, %eax. This will likely need to be addressed in a separate pass.

llvm-svn: 307471
2017-07-08 14:04:48 +00:00
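As an illustration of the pattern targeted by the commit above (a hypothetical C++ source example, not taken from the patch), here is the select-of-constants shape with an unsigned less-or-equal (ule / SETBE) condition:

  // Cond ? C1 : C2 with C1 = -1 and C2 = 0, using an unsigned <= compare.
  // This is the kind of pattern the SBB optimization above targets.
  int selectUle(unsigned a, unsigned b) {
    return a <= b ? -1 : 0;
  }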
Eric Christopher 8737f0650c Remove a variable that was only used in asserts and had a duplicate copy in something we did use anyhow.
llvm-svn: 307457
2017-07-08 01:03:29 +00:00
Lei Huang 317104183d [PowerPC] NFC : Common up definitions of isIntS16Immediate and update parameter to int16_t
llvm-svn: 307442
2017-07-07 21:12:35 +00:00
Tony Jiang c260e0eb56 [PPC CodeGen] Expand the bitreverse.i32 intrinsic.
Differential Revision: https://reviews.llvm.org/D33572
Fix PR: https://bugs.llvm.org/show_bug.cgi?id=33093

llvm-svn: 307413
2017-07-07 16:41:55 +00:00
Simon Pilgrim cb07d67a5c Fix some more -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307411
2017-07-07 16:40:06 +00:00
Matthew Simpson 12eaef75ce [ARM] Implement interleaved access bug fix from r306334
r306334 fixed a bug in AArch64 dealing with wide interleaved accesses having
pointer types. The bug also exists in ARM, so this patch copies over the fix.

llvm-svn: 307409
2017-07-07 16:15:05 +00:00
Sam Kolton 10ac2fd2eb [AMDGPU] Assembler: refactor convert methods (VOP3 and MIMG)
Summary: Simplified converter methods for VOP3 and MIMG.

Reviewers: dp, artem.tamazov

Subscribers: arsenm, kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, vpykhtin, t-tye

Differential Revision: https://reviews.llvm.org/D35047

llvm-svn: 307407
2017-07-07 15:21:52 +00:00
Sanjay Patel dd36f75733 [x86] add SBB optimization for SETAE (uge) condition code
x86 scalar select-of-constants (Cond ? C1 : C2) combining/lowering is a mess 
with missing optimizations. We handle some patterns, but miss logical variants.

To clean that up, we should convert all select-of-constants to logic/math and 
enhance the combining for the expected patterns from that. DAGCombiner already 
has the foundation to allow the transforms, so we just need to fill in the holes 
for x86 math op lowering. Selecting 0 or -1 needs extra attention to produce the
optimal code as shown here.

Attempt to verify that all of these IR forms are logically equivalent:
http://rise4fun.com/Alive/plxs

Earlier steps in this series:
rL306040
rL306072

Differential Revision: https://reviews.llvm.org/D34652

llvm-svn: 307404
2017-07-07 14:56:20 +00:00
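A companion sketch for this entry (again hypothetical, not taken from the patch): the same select-of-constants shape, but with an unsigned greater-or-equal (uge / SETAE) condition:

  // Cond ? C1 : C2 with C1 = -1 and C2 = 0, using an unsigned >= compare,
  // the SETAE (uge) case handled by the commit above.
  int selectUge(unsigned a, unsigned b) {
    return a >= b ? -1 : 0;
  }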
Dmitry Preobrazhensky b2d24e23ce [AMDGPU][mc][gfx9] Added support of op_sel/op_sel_hi for V_MAD_MIX*
See https://bugs.llvm.org//show_bug.cgi?id=33595

Reviewers: vpykhtin, artem.tamazov, arsenm

Differential Revision: https://reviews.llvm.org/D35021

llvm-svn: 307402
2017-07-07 14:29:06 +00:00
Simon Pilgrim 16ee09bf72 [Lanai] Fix -Wimplicit-fallthrough warning. NFCI.
llvm-svn: 307396
2017-07-07 13:22:47 +00:00
Simon Pilgrim 087e87d595 [Hexagon] Fix some more -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307395
2017-07-07 13:21:43 +00:00
Simon Pilgrim 8b4dc53326 [AArch64] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307393
2017-07-07 13:03:28 +00:00
Florian Hahn d4550baf3b [AArch64] Use 16 bytes as preferred function alignment on Cortex-A57.
Summary:
This change gives a 0.89% speedup in execution time, a 0.94% improvement
in benchmark scores and a 0.62% increase in binary size on a Cortex-A57.
These numbers are the geomean results on a wide range of benchmarks from
the test-suite, SPEC2000, SPEC2006 and a range of proprietary suites.

The software optimization guide for the Cortex-A57 recommends 16 byte
branch alignment.

Reviewers: t.p.northover, mcrosier, javed.absar, kristof.beyls, sbaranga

Reviewed By: kristof.beyls

Subscribers: aemerson, rengolin, llvm-commits

Differential Revision: https://reviews.llvm.org/D34954

llvm-svn: 307389
2017-07-07 10:43:01 +00:00
Simon Pilgrim 6c1695c6f7 [PowerPC] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307382
2017-07-07 10:21:44 +00:00
Simon Pilgrim 0f5b35059d [AMDGPU] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307381
2017-07-07 10:18:57 +00:00
Florian Hahn e3666ec9d6 [AArch64] Use 16 bytes as preferred function alignment on Cortex-A72.
Summary:
This change gives a 0.34% speedup in execution time, a 0.61% improvement
in benchmark scores and a 0.57% increase in binary size on a Cortex-A72.
These numbers are the geomean results on a wide range of benchmarks from
the test-suite, SPEC2000, SPEC2006 and a range of proprietary suites.

The software optimization guide for the Cortex-A72 recommends 16 byte
branch alignment.


Reviewers: t.p.northover, kristof.beyls, rengolin, sbaranga, mcrosier, javed.absar

Reviewed By: kristof.beyls

Subscribers: llvm-commits, aemerson

Differential Revision: https://reviews.llvm.org/D34961

llvm-svn: 307380
2017-07-07 10:15:49 +00:00
Simon Pilgrim ecedd8d803 [Sparc] Fix -Wimplicit-fallthrough warning. NFCI.
llvm-svn: 307378
2017-07-07 10:14:46 +00:00
Simon Pilgrim 8c4069e842 [SystemZ] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307376
2017-07-07 10:07:09 +00:00
Simon Pilgrim ce1fb22c6a [Arm] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307375
2017-07-07 10:05:45 +00:00
Simon Pilgrim adb80fbaf4 [Hexagon] Fix -Wimplicit-fallthrough warnings. NFCI.
llvm-svn: 307374
2017-07-07 10:04:12 +00:00
Diana Picus 77367378ac [ARM] GlobalISel: Fixup r307365
Rename member DebugLoc -> DbgLoc (so it doesn't conflict with the class
name).

llvm-svn: 307366
2017-07-07 08:53:27 +00:00
Diana Picus 5b91653840 [ARM] GlobalISel: Select hard G_FCMP for s32
We lower to a sequence consisting of:
- MOVi 0 into a register
- VCMPS to do the actual comparison and set the VFP flags
- FMSTAT to move the flags out of the VFP unit
- MOVCCi to either use the "zero register" that we have previously set
  with the MOVi, or move 1 into the result register, based on the values
  of the flags

As was the case with soft-float, for some predicates (one, ueq) we
actually need two comparisons instead of just one. When that happens, we
generate two VCMPS-FMSTAT-MOVCCi sequences and chain them by means of
using the result of the first MOVCCi as the "zero register" for the
second one. This is a bit overkill, since one comparison followed by
two non-flag-setting conditional moves should be enough. In any case,
the backend manages to CSE one of the comparisons away so it doesn't
matter much.

Note that unlike SelectionDAG and FastISel, we always use VCMPS, and not
VCMPES. This makes the code a lot simpler, and it also seems correct
since the LLVM Lang Ref defines simple true/false returns if the
operands are QNaNs. For SNaNs, even VCMPS throws an Invalid Operand
exception, so they won't be slipping through unnoticed.

Implementation-wise, this introduces a template so we can share the same
code that we use for handling integer comparisons, since the only
differences are in the details (exact opcodes to be used etc). Hopefully
this will be easy to extend to s64 G_FCMP.

llvm-svn: 307365
2017-07-07 08:39:04 +00:00
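For context, a hypothetical C++ source example (assuming a hard-float target and s32 operands) whose single ordered-equality compare would exercise the MOVi/VCMPS/FMSTAT/MOVCCi sequence described above:

  // One ordered f32 equality compare whose i1 result is materialized into an
  // integer register; predicates like one/ueq would need two compares instead.
  int fcmpOeq(float a, float b) {
    return a == b ? 1 : 0;
  }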
Matthias Braun 1b54aa5879 LiveRegUnits: Rename accumulateBackward()->accumulate()
Contrary to the stepForward()/stepBackward() methods, accumulate() doesn't
have a direction, as defs, uses and clobbers all have the same effect.

Also improve the documentation comment.

llvm-svn: 307351
2017-07-07 03:02:17 +00:00
Sean Fertile 9cd1cdf814 Extend memcpy expansion in Transform/Utils to handle wider operand types.
Adds loop expansions for known-size and unknown-sized memcpy calls, allowing the
target to provide the operand types through TTI callbacks. The default values
for the TTI callbacks use int8 operand types and match the existing behaviour
if they aren't overridden by the target.

Differential revision: https://reviews.llvm.org/D32536

llvm-svn: 307346
2017-07-07 02:00:06 +00:00
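For intuition, a conceptual C++ sketch (not the actual expansion code; the function name is illustrative) of what the default int8 operand type amounts to for a known-size memcpy expansion, namely a byte-wise copy loop; a target overriding the TTI callbacks with a wider type would copy in chunks of that type instead:

  #include <cstddef>
  #include <cstdint>

  // One i8 load and one i8 store per iteration, matching the default
  // int8 operand type used when the target doesn't override the callbacks.
  void byteWiseCopy(std::uint8_t *Dst, const std::uint8_t *Src, std::size_t Len) {
    for (std::size_t I = 0; I != Len; ++I)
      Dst[I] = Src[I];
  }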