Commit Graph

Zlatko Buljan f034021443 [mips][microMIPS] Implement TLBINV and TLBINVF instructions
Differential Revision: http://reviews.llvm.org/D16849

llvm-svn: 261211
2016-02-18 14:10:52 +00:00
Krzysztof Parzyszek 6895b2ceb2 [Hexagon] Add support for __builtin_prefetch
llvm-svn: 261210
2016-02-18 13:58:38 +00:00
Krzysztof Parzyszek 39686cf98e [Hexagon] Update the callee-saved register set for EH-aware functions
llvm-svn: 261208
2016-02-18 13:41:05 +00:00
Simon Pilgrim 05e48b95eb [X86][SSE] Improve PSHUFB shuffle mask decoding.
In cases where the PSHUFB shuffle mask is shared, it might not be bitcast to a vXi8 byte vector. This patch adds support for decoding these wider shuffle masks from the ConstantPool.

The test case in question makes use of this to recognise that the shuffle mask is a unary UNPCKL pattern and simplifies accordingly.
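
As an aside for readers, here is a small self-contained sketch (not the LLVM code itself) of what "decoding a wider mask" means: each 32-bit constant-pool element is split back into its four per-byte PSHUFB selectors, where a set high bit zeroes the destination byte and the low nibble picks the source byte.

    // Toy C++ illustration; the constants below happen to spell out the
    // unary UNPCKLBW(a,a) byte pattern 0,0,1,1,...,7,7 mentioned above.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    std::vector<int> decodePSHUFBMask(const std::vector<uint32_t> &Elts) {
      std::vector<int> Mask;
      for (uint32_t E : Elts)
        for (int B = 0; B < 4; ++B) {
          uint8_t Sel = (E >> (8 * B)) & 0xFF;
          Mask.push_back(Sel & 0x80 ? -1 : Sel & 0x0F); // -1 == write zero
        }
      return Mask;
    }

    int main() {
      std::vector<uint32_t> Pool = {0x01010000, 0x03030202,
                                    0x05050404, 0x07070606};
      for (int I : decodePSHUFBMask(Pool))
        std::printf("%d ", I);
      std::printf("\n");
    }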

llvm-svn: 261201
2016-02-18 10:17:40 +00:00
Nikolay Haustov 10813a4efa Test commit access.
llvm-svn: 261199
2016-02-18 10:02:12 +00:00
Michael Zuckerman 724dc3b20c [AVX512][PRORQ][PRORD] Change imm8 to int
Differential Revision: http://reviews.llvm.org/D17024

llvm-svn: 261198
2016-02-18 09:52:12 +00:00
Dan Gohman d85ab7fc10 [WebAssembly] Don't use setRequiresStructuredCFG(true).
While we still do want reducible control flow, the RequiresStructuredCFG
flag imposes more strict structure constraints than WebAssembly wants.
Unsetting this flag enables critical edge splitting and tail merging.

Also, disable TailDuplication explicitly, as it doesn't support virtual
registers, and was previously only disabled by the RequiresStructuredCFG
flag.
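
A minimal sketch of the shape of this change, assuming the pass-config hooks of that era (include paths and hook names vary by LLVM version, so treat this as illustrative rather than the actual WebAssembly sources):

    #include "llvm/CodeGen/Passes.h"        // declares TargetPassConfig, TailDuplicateID
    #include "llvm/Target/TargetMachine.h"

    void configureWebAssemblyPasses(llvm::TargetMachine &TM,
                                    llvm::TargetPassConfig &PC) {
      // Previously: TM.setRequiresStructuredCFG(true); which also blocked
      // critical edge splitting and tail merging.
      PC.disablePass(&llvm::TailDuplicateID); // no virtual-register support
    }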

llvm-svn: 261190
2016-02-18 06:32:53 +00:00
Tom Stellard e1818af8c5 [AMDGPU] Disassembler: Added basic disassembler for AMDGPU target
Changes:

- Added disassembler project
- Fixed all decoding conflicts in .td files
- Added a DecoderMethod="NONE" option to Target.td that allows disabling
  decoder generation for an instruction.
- Created decoding functions for VS_32 and VReg_32 register classes.
- Added stubs for decoding all register classes.
- Added several tests for disassembler

Disassembler only supports:

- VI subtarget
- VOP1 instruction encoding
- 32-bit register operands and inline constants

[Valery]

One point that requires attention is how decoder conflicts were resolved:

- Groups of target instructions were separated by using different
  DecoderNamespaces (SICI, VI, CI), following an approach similar to
  AssemblerPredicate.

- There were conflicts in IMAGE_<> instructions, caused by two
  different issues:

1. dmask wasn’t specified for the output (fixed)
2. There are image instructions that differ only by the number of
   address components but have the same encoding per the HW spec. The
   actual number of address components is determined by the HW at runtime
   using the image resource descriptor, starting from the VGPR encoded in
   an IMAGE instruction. This means we should choose only one instruction
   from each conflicting group to be the rule for the decoder. I didn't
   find a way to disable decoder generation for an arbitrary instruction,
   so I made a one-line fix to the TableGen generator that suppresses
   decoder generation when DecoderMethod is set to "NONE". That change
   should be reviewed and submitted first; otherwise I would need to
   specify a different DecoderNamespace for every instruction in the
   conflicting group. I haven't checked yet whether DecoderMethod="NONE"
   is used in other targets.
3. IMAGE_GATHER decoder generation is disabled for now and will be
   done later.

[/Valery]
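
For readers unfamiliar with TableGen'erated disassemblers, the per-register-class decoding functions mentioned in the change list have roughly the following shape. This is a hedged, generic sketch (the helper lookupVGPR is hypothetical, and the MCDisassembler include path varies by LLVM version), not the actual AMDGPU code:

    #include "llvm/MC/MCDisassembler.h" // "llvm/MC/MCDisassembler/..." in newer trees
    #include "llvm/MC/MCInst.h"

    using namespace llvm;
    typedef MCDisassembler::DecodeStatus DecodeStatus;

    unsigned lookupVGPR(unsigned Encoding); // hypothetical field -> register mapping

    // The generated decoder calls a hook like this for operands whose register
    // class (or DecoderMethod) names it, passing the raw encoded field.
    static DecodeStatus decodeVReg32(MCInst &Inst, unsigned Imm,
                                     uint64_t /*Addr*/, const void * /*Decoder*/) {
      Inst.addOperand(MCOperand::createReg(lookupVGPR(Imm & 0xFF)));
      return MCDisassembler::Success;
    }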

Patch By: Sam Kolton

Differential Revision: http://reviews.llvm.org/D16723

llvm-svn: 261185
2016-02-18 03:42:32 +00:00
Derek Schuff 71434ff642 [WebAssembly] Disable register stackification and coloring when not optimizing
These passes are optimizations, and should be disabled when not
optimizing.
Also create an MCCodeGenInfo so the opt level is correctly plumbed to
the backend pass manager.
Also remove the command line flag for disabling register coloring;
running llc with -O0 should now be useful for debugging, so it's not
necessary.
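
A rough sketch of what "disabled when not optimizing" looks like in a TargetPassConfig subclass; the pass-creation names below are placeholders, not necessarily the real WebAssembly entry points:

    #include "llvm/CodeGen/Passes.h"

    void addWebAssemblyOptimizationPasses(llvm::TargetPassConfig &PC) {
      if (PC.getOptLevel() == llvm::CodeGenOpt::None)
        return; // -O0: keep values in locals; skip the optimization passes
      // PC.addPass(createWebAssemblyRegStackify()); // placeholder names
      // PC.addPass(createWebAssemblyRegColoring());
    }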

Differential Revision: http://reviews.llvm.org/D17327

llvm-svn: 261176
2016-02-17 23:20:43 +00:00
Tim Northover 7687bcee4a AArch64: always clear kill flags up to last eliminated copy
After r261154, we were only clearing flags if the known-zero register was
originally live-in to the basic block, but we have to do it even when it is
not, provided more than one COPY has been eliminated; otherwise the user of
the first COPY may still be marked <kill>.

E.g.

BB#N:
    %X0 = COPY %XZR
    STRXui %X0<kill>, <fi#0>
    %X0 = COPY %XZR
    STRXui %X0<kill>, <fi#1>

We can eliminate both copies, X0 is not live-in, but we must clear the kill on
the first store.
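
A hedged sketch of the safe pattern (names simplified; the real logic lives in the AArch64 redundant-copy pass): whenever a COPY of the known-zero register is deleted, drop any <kill> flags on that register so earlier surviving users stay consistent.

    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"

    void eraseZeroCopy(llvm::MachineInstr &Copy, llvm::MachineRegisterInfo &MRI) {
      unsigned DstReg = Copy.getOperand(0).getReg();
      Copy.eraseFromParent();
      // Clearing unconditionally is always safe; a stale <kill> is not.
      MRI.clearKillFlags(DstReg);
    }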

Unfortunately, I've been unable to come up with a non-fragile test for this.
I've only seen it in the wild with regalloc-created spills, and attempts to
reproduce that in a reasonable way run afoul of COPY coalescing. Even volatile
asm clobbers were moved around. Should fix the aarch64 bot though.

llvm-svn: 261175
2016-02-17 23:07:04 +00:00
Amaury Sechet 22d2878399 Move LLVMCreateTargetData and LLVMDisposeTargetData together. NFC
llvm-svn: 261172
2016-02-17 22:41:09 +00:00
Tim Northover 3f2285615a AArch64: improve redundant copy elimination.
Mostly, this fixes the bug that if the CBZ guaranteed Xn but Wn was used, we
didn't sort out the use-def chain properly.

I've also made it check more than just the last instruction for a compatible
CBZ (so it can cope without fallthroughs). I'd have liked to do that
separately, but it helps with writing the test.

Finally, I removed some custom loops in favour of MachineInstr helpers and
refactored the control flow to flatten it and avoid possibly quadratic
iterations in blocks with many copies. NFC for these, just a general tidy-up.

llvm-svn: 261154
2016-02-17 21:16:53 +00:00
Colin LeMahieu 5e552d141f [Hexagon] Replacing reference/dereference with reference cast.
llvm-svn: 261133
2016-02-17 18:50:21 +00:00
Nico Weber 32ac273a91 Remove superfluous semicolon.
llvm-svn: 261128
2016-02-17 18:48:08 +00:00
David Majnemer 7e5937b775 [WinEH] Optimize WinEH state stores
32-bit x86 Windows targets use a linked-list of nodes allocated on the
stack, referenced to via thread-local storage.  The personality routine
interprets one of the fields in the node as a 'state number' which
indicates where the personality routine should transfer control.

State transitions are possible only before call-sites which may throw
exceptions.  Our previous scheme had us update the state number before
all call-sites which may throw.

Instead, we can try to minimize the number of times we need to store by
reasoning about the nearest store which dominates the current call-site.
If the last store agrees with the current call-site, then we know that
the state-update is redundant and can be elided.

This is largely straightforward: an RPO walk of the blocks allows us to
correctly forward propagate the information when the function is a DAG.
Currently, loops are not handled optimally and may trigger superfluous
state stores.
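
A self-contained toy version of that forward propagation, assuming made-up data structures (blocks already numbered in reverse post order, one required state per call site); it only illustrates the elision rule, not the WinEH pass itself:

    #include <cstdio>
    #include <vector>

    struct Block {
      std::vector<int> Succs;         // successor block indices
      std::vector<int> CallSiteState; // state required before each call site
    };

    void elideRedundantStateStores(std::vector<Block> &Blocks) {
      const int Unknown = -1;
      std::vector<int> StateOnEntry(Blocks.size(), Unknown);
      std::vector<bool> Seen(Blocks.size(), false);
      for (int B = 0; B < (int)Blocks.size(); ++B) {
        int State = StateOnEntry[B];
        for (int Required : Blocks[B].CallSiteState) {
          if (Required == State)
            std::printf("block %d: store of state %d elided\n", B, Required);
          else
            State = Required; // keep this store and remember what it wrote
        }
        for (int S : Blocks[B].Succs) {        // meet over incoming edges
          if (!Seen[S]) { StateOnEntry[S] = State; Seen[S] = true; }
          else if (StateOnEntry[S] != State) StateOnEntry[S] = Unknown;
        }
      }
    }

    int main() {
      std::vector<Block> Blocks(2);
      Blocks[0] = {{1}, {3, 3}}; // the second store of state 3 is redundant
      Blocks[1] = {{}, {3}};     // state 3 also reaches block 1, so elided too
      elideRedundantStateStores(Blocks);
    }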

Differential Revision: http://reviews.llvm.org/D16763

llvm-svn: 261122
2016-02-17 18:37:11 +00:00
Colin LeMahieu 3d3ff650d6 [Hexagon] Loop instructions don't need special processing. Extension and fitting are performed by generic code, and the comment was incorrect: loops don't have a separate extended opcode.
llvm-svn: 261118
2016-02-17 18:14:05 +00:00
Justin Lebar f9b5add6ad [NVPTX] Annotate convergent intrinsics as convergent.
Summary:
Previously the machine instructions for bar.sync &co. were not marked as
convergent.  This resulted in some MI passes (such as TailDuplication,
fixed in an upcoming patch) doing unsafe things to these instructions.

Reviewers: jingyue

Subscribers: llvm-commits, tra, jholewinski, hfinkel

Differential Revision: http://reviews.llvm.org/D17318

llvm-svn: 261115
2016-02-17 17:46:54 +00:00
Justin Lebar d596ec93ce [NVPTX] Annotate call machine instructions as calls.
Summary:
Otherwise we'll try to do unsafe optimizations on these MIs, such as
sinking loads below calls.

(I suspect that this is not the only bug in the NVPTX instruction
tablegen files; I need to comb through them.)

Reviewers: jholewinski, tra

Subscribers: jingyue, jhen, llvm-commits

Differential Revision: http://reviews.llvm.org/D17315

llvm-svn: 261113
2016-02-17 17:46:50 +00:00
Krzysztof Parzyszek de697d4d40 [Hexagon] Fold object construction into map::insert
llvm-svn: 261096
2016-02-17 15:02:07 +00:00
Igor Breger ac02f1bb62 AVX512: Fix LowerMSCATTER() return value.
Bug description:
  The bug was discovered when a test was compiled with -O0.
  When the scatter result is the DAG root, VectorLegalizer failed (assert) because LowerMSCATTER() returned the kmask as its result.
Change LowerMSCATTER() to return the chain, as the original node does.

Differential Revision: http://reviews.llvm.org/D17331

llvm-svn: 261090
2016-02-17 14:04:33 +00:00
Scott Egerton 219fae9e36 [mips] Removed the SHF_ALLOC flag and the SHT_REL flag from the .pdr section.
This section is used for debug information and has no need to be
in memory at runtime. This patch also fixes an error when compiling
the Linux kernel. The error is that there are relocations within the
.pdr section in a VDSO. SHT_REL was removed as it is a section type
and not a section flag, therefore it does not make sense for it to
be there. With this patch, LLVM now emits the same flags as
the GNU assembler.
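
For illustration, the section request boils down to something like the following (type/flag values are an assumption here, not copied from the MIPS streamer): a debug-only section gets SHT_PROGBITS and no SHF_ALLOC, and SHT_REL never appears among the flags.

    #include "llvm/MC/MCContext.h"
    #include "llvm/MC/MCSectionELF.h"
    #include "llvm/Support/ELF.h" // llvm/BinaryFormat/ELF.h in newer trees

    llvm::MCSectionELF *getPdrSection(llvm::MCContext &Ctx) {
      // No SHF_ALLOC: the section carries debug info and is not mapped at run time.
      return Ctx.getELFSection(".pdr", llvm::ELF::SHT_PROGBITS, 0);
    }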

llvm-svn: 261083
2016-02-17 11:15:16 +00:00
Simon Pilgrim c5b5dcb985 [X86][AVX] Support bit-blend integer shuffles for 256-bit integer vectors
AVX1 doesn't support the shuffling of 256-bit integer vectors. For 32/64-bit elements we get around this by shuffling as float/double but for 8/16-bit elements (assuming they can't widen) we currently just split, shuffle as 128-bit vectors and concatenate the results back.

This patch adds the ability to lower using the bit-blend patterns before defaulting to the splitting behaviour.
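
To make "bit-blend" concrete, here is a self-contained scalar stand-in (toy code, not the lowering itself): for a per-byte blend where lane i comes from lane i of either input, build an all-ones/all-zero byte mask from the shuffle indices and select with AND/ANDN/OR, avoiding the 128-bit split. The sibling patch below uses the same trick with a plain AND mask to zero lanes.

    #include <array>
    #include <cstdint>
    #include <cstdio>

    using V16 = std::array<uint8_t, 16>;

    // Idx[i] must be either i (take A) or i + 16 (take B) for a blend.
    V16 bitBlend(const V16 &A, const V16 &B, const std::array<int, 16> &Idx) {
      V16 Mask, R;
      for (int I = 0; I < 16; ++I)
        Mask[I] = Idx[I] < 16 ? 0xFF : 0x00;
      for (int I = 0; I < 16; ++I)
        R[I] = (Mask[I] & A[I]) | (~Mask[I] & B[I]); // AND / ANDN / OR
      return R;
    }

    int main() {
      V16 A{}, B{};
      for (int I = 0; I < 16; ++I) { A[I] = I; B[I] = 100 + I; }
      std::array<int, 16> Idx;
      for (int I = 0; I < 16; ++I) Idx[I] = (I % 2) ? I + 16 : I;
      for (uint8_t V : bitBlend(A, B, Idx)) std::printf("%u ", V);
      std::printf("\n");
    }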

Part 2 of 2

Differential Revision: http://reviews.llvm.org/D17292

llvm-svn: 261082
2016-02-17 10:50:06 +00:00
Simon Pilgrim a50e8d3627 [X86][AVX] Support bit-mask integer shuffles for 256-bit integer vectors
AVX1 doesn't support the shuffling of 256-bit integer vectors. For 32/64-bit elements we get around this by shuffling as float/double but for 8/16-bit elements (assuming they can't widen) we currently just split, shuffle as 128-bit vectors and concatenate the results back.

This patch adds the ability to lower using the bit-mask patterns before defaulting to the splitting behaviour. In some cases this ends up matching what AVX2 would do anyhow or what AVX1 does on the split vectors.

Part 1 of 2

Differential Revision: http://reviews.llvm.org/D17292

llvm-svn: 261081
2016-02-17 10:37:49 +00:00
Simon Pilgrim 9904924e6b [X86][SSE] Tidyup BUILD_VECTOR operand collection. NFCI.
Avoid reuse of operand variables, keep them local to a particular lowering - the operand collection is unique to each case anyhow.

Renamed from V to Ops to more closely match their purpose.

llvm-svn: 261078
2016-02-17 10:12:30 +00:00
Benjamin Kramer 98520ca73b [Hexagon] cast<> a reference instead of referencing + dereferencing.
llvm-svn: 261077
2016-02-17 09:28:45 +00:00
Hans Wennborg 84047896b9 Revert r260979 "[X86] Enable the LEA optimization pass by default."
Asserts are still firing in Chromium builds. PR26575.

llvm-svn: 261058
2016-02-17 02:49:59 +00:00
JF Bastien a0d5347ee9 WebAssembly: update expected failures
r261050 seems to inadvertently fix the assertion failure.

llvm-svn: 261051
2016-02-17 01:59:23 +00:00
Dan Gohman 476ffcec04 [WebAssembly] Call memcpy for large byval copies.
This fixes very slow compilation on
test/CodeGen/Generic/2010-11-04-BigByval.ll. Note that MaxStoresPerMemcpy
and friends are not yet carefully tuned so the cutoff point is currently
somewhat arbitrary. However, it's important that there be a cutoff point
so that we don't emit unbounded quantities of loads and stores.
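
For context, the cutoff lives in TargetLoweringBase fields set by the target's lowering constructor; the sketch below is illustrative (simplified signature, arbitrary values, as the message itself says the numbers are not yet tuned):

    // Beyond these limits the byval copy becomes a real memcpy call instead of
    // an unbounded sequence of loads and stores.
    WebAssemblyTargetLowering::WebAssemblyTargetLowering(const TargetMachine &TM)
        : TargetLowering(TM) {
      MaxStoresPerMemcpy = 8;        // illustrative, not the tuned value
      MaxStoresPerMemcpyOptSize = 4;
    }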

llvm-svn: 261050
2016-02-17 01:43:37 +00:00
JF Bastien 188ca894c2 WebAssembly: update expected test failures
r261032 adds frame address support.

llvm-svn: 261044
2016-02-17 00:34:15 +00:00
Reid Kleckner 8de35fef3d [X86] Fix a shrink-wrapping miscompile around __chkstk
__chkstk clobbers EAX. If EAX is live across the prologue, then we have
to take extra steps to save it. We already had code to do this if EAX
was a register parameter. This change adapts it to work when shrink
wrapping is used.

llvm-svn: 261039
2016-02-17 00:17:33 +00:00
Dan Gohman 1d547bf566 [WebAssembly] Use SDValue::getConstantOperandVal. NFC.
llvm-svn: 261037
2016-02-17 00:14:03 +00:00
Dan Gohman 94c6566055 [WebAssembly] Implement __builtin_frame_address.
Differential Revision: http://reviews.llvm.org/D17307

llvm-svn: 261032
2016-02-16 23:48:04 +00:00
Ahmed Bougacha f3cccab1e0 [X86] Remove the now-unused X86ISD::PSIGN. NFC.
llvm-svn: 261025
2016-02-16 22:14:12 +00:00
Ahmed Bougacha af60a429c9 [X86] Generalize logic blend of (x, -x) combine to match (-x, x).
I suspect this is what let PR26110 lie dormant for so long.

llvm-svn: 261024
2016-02-16 22:14:07 +00:00
Ahmed Bougacha 132fbf5476 [X86] Don't turn (c?-v:v) into (c?-v:0) by blindly using PSIGN.
Currently, we sometimes miscompile this vector pattern:
    (c ? -v : v)
We lower it to (because "c" is <4 x i1>, lowered as a vector mask):
    (~c & v) | (c & -v)

When we have SSSE3, we incorrectly lower that to PSIGN, which does:
    (c < 0 ? -v : c > 0 ? v : 0)
in other words, when c is either all-ones or all-zero:
    (c ? -v : 0)
While this is an old bug, it rarely triggers because the PSIGN combine
is too sensitive to operand order. This will be improved separately.

Note that the PSIGN tests are also incorrect. Consider:
    %b.lobit = ashr <4 x i32> %b, <i32 31, i32 31, i32 31, i32 31>
    %sub = sub nsw <4 x i32> zeroinitializer, %a
    %0 = xor <4 x i32> %b.lobit, <i32 -1, i32 -1, i32 -1, i32 -1>
    %1 = and <4 x i32> %a, %0
    %2 = and <4 x i32> %b.lobit, %sub
    %cond = or <4 x i32> %1, %2
    ret <4 x i32> %cond
if %b is zero:
    %b.lobit = <4 x i32> zeroinitializer
    %sub = sub nsw <4 x i32> zeroinitializer, %a
    %0 = <4 x i32> <i32 -1, i32 -1, i32 -1, i32 -1>
    %1 = <4 x i32> %a
    %2 = <4 x i32> zeroinitializer
    %cond = or <4 x i32> %a, zeroinitializer
    ret <4 x i32> %a
whereas we currently generate:
    psignd %xmm1, %xmm0
    retq
which returns 0, as %xmm1 is 0.

Instead, use a pure logic sequence, as described in:
https://graphics.stanford.edu/~seander/bithacks.html#ConditionalNegate
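
The referenced bithack, reduced to one scalar lane with a mask that is all-ones or all-zero, is just (v ^ c) - c; a tiny self-contained check:

    #include <cstdint>
    #include <cstdio>

    int32_t condNegate(int32_t V, int32_t C /* 0 or -1 */) {
      return (V ^ C) - C; // C == -1: ~V + 1 == -V; C == 0: V unchanged
    }

    int main() {
      std::printf("%d %d\n", condNegate(7, -1), condNegate(7, 0)); // -7 7
    }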

Fixes PR26110.

Differential Revision: http://reviews.llvm.org/D17181

llvm-svn: 261023
2016-02-16 22:14:03 +00:00
Ahmed Bougacha f211f685e4 [X86] Extract PSIGN/BLENDVP combine. NFC.
llvm-svn: 261021
2016-02-16 22:13:55 +00:00
Ahmed Bougacha 7502768c6d [X86] Extract ANDNP combine. NFC.
This makes it IMO more readable and reduces indentation.

llvm-svn: 261020
2016-02-16 22:13:49 +00:00
Derek Schuff 25628d3362 [WebAssembly] Update torture test expectations
These were fixed with r260978

llvm-svn: 261017
2016-02-16 21:52:06 +00:00
Derek Schuff f8f8f093aa [WebAssembly] Don't move calls or stores past intervening loads
The register stackifier currently checks for intervening stores (and
loads that may alias them) but doesn't account for the fact that the
instruction being moved may affect intervening loads.

Differential Revision: http://reviews.llvm.org/D17298

llvm-svn: 261014
2016-02-16 21:44:19 +00:00
Colin LeMahieu ecef1d9cbc [Hexagon] Adding relocation for code size, cold path optimization allowing a 23-bit 4-byte aligned relocation to be a valid instruction encoding.
The usual way to get a 32-bit relocation is to use a constant extender which doubles the size of the instruction, 4 bytes to 8 bytes.

Another way is to put a .word32 and mix code and data within a function.  The disadvantage is that it's not a valid instruction encoding, and jumping over it causes prefetch stalls inside the hardware.

This relocation packs a 23-bit value into an "r0 = add(rX, #a)" instruction by overwriting the source register bits.  Since r0 is the return value register, if this instruction is placed after a function call that returns void, r0 will be filled with an undefined value, the prefetch won't be confused, and the callee can access the constant value by way of the link register.
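
As a toy illustration of what the new relocation does at link time (field width and position below are made up; the real Hexagon encoding scatters the immediate across several bit ranges):

    #include <cstdint>

    uint32_t applyReloc23_2(uint32_t Insn, uint32_t TargetAddr) {
      uint32_t Value = (TargetAddr >> 2) & 0x7FFFFF;  // 23 bits, 4-byte aligned
      const uint32_t FieldMask = 0x7FFFFFu << 5;      // assumed field position
      return (Insn & ~FieldMask) | (Value << 5);      // patch into the add insn
    }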

llvm-svn: 261006
2016-02-16 20:38:17 +00:00
Jun Bum Lim b389d9b9af [AArch64] Add pass to remove redundant copy after RA
Summary:
This change will add a pass to remove unnecessary zero copies in target blocks
of cbz/cbnz instructions. E.g., the copy instruction in the code below can be
removed because the cbz jumps to BB1 when x0 is zero:
  BB0:
    cbz x0, .BB1
  BB1:
    mov x0, xzr
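
A hedged sketch of the core check (much simplified; the real pass also handles multiple predecessors and register widths), assuming the usual MachineBasicBlock/MachineInstr APIs:

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineInstr.h"

    using namespace llvm;

    // True if the only way into MBB is the taken edge of a compare-and-branch
    // on Reg against zero, i.e. Reg is already zero on entry to MBB.
    static bool knownZeroOnEntry(MachineBasicBlock &MBB, unsigned Reg) {
      if (MBB.pred_size() != 1)
        return false;                                  // keep the sketch simple
      MachineBasicBlock *Pred = *MBB.pred_begin();
      MachineBasicBlock::iterator I = Pred->getLastNonDebugInstr();
      if (I == Pred->end() || !I->isConditionalBranch())
        return false;
      // Shape check only: CBZ's operands are (register, target block).
      return I->getOperand(0).isReg() && I->getOperand(0).getReg() == Reg &&
             I->getOperand(1).isMBB() && I->getOperand(1).getMBB() == &MBB;
    }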

Jun

Reviewers: gberry, jmolloy, HaoLiu, MatzeB, mcrosier

Subscribers: mcrosier, mssimpso, haicheng, bmakam, llvm-commits, aemerson, rengolin

Differential Revision: http://reviews.llvm.org/D16203

llvm-svn: 261004
2016-02-16 20:02:39 +00:00
Quentin Colombet ba2a01645b [GlobalISel] Re-apply r260922-260923 with MSVC-friendly code.
Original message:
Get rid of the ifdefs in TargetLowering.
Introduce a new API used only by GlobalISel: CallLowering.
This API will contain target hooks dedicated to call lowering.

llvm-svn: 260998
2016-02-16 19:26:02 +00:00
Derek Schuff aadc89c25d [WebAssembly] Insert COPY_LOCAL between CopyToReg and FrameIndex DAG nodes
CopyToReg nodes don't support FrameIndex operands. Other targets select
the FI to some LEA-like instruction, but since we don't have that, we
need to insert some kind of instruction that can take an FI operand and
produces a value usable by CopyToReg (i.e. in a vreg). So insert a dummy
copy_local between Op and its FI operand. This results in a redundant
copy which we should optimize away later (maybe in the post-FI-lowering
peephole pass).

Differential Revision: http://reviews.llvm.org/D17213

llvm-svn: 260987
2016-02-16 18:18:36 +00:00
Tom Stellard cc4c8718ed [AMDGPU] Rename $dst operand to $vdst for VOP instructions.
Summary: This change renames the output operand for VOP instructions from dst to vdst. This is needed to enable decoding of named operands in the disassembler.

Reviewers: vpykhtin, tstellarAMD, arsenm

Subscribers: arsenm, llvm-commits, nhaustov

Projects: #llvm-amdgpu-spb

Differential Revision: http://reviews.llvm.org/D16920

llvm-svn: 260986
2016-02-16 18:14:56 +00:00
Andrey Turetskiy eab4e68650 [X86] Enable the LEA optimization pass by default.
Differential Revision: http://reviews.llvm.org/D16877

llvm-svn: 260979
2016-02-16 16:41:38 +00:00
Dan Gohman 442bfcec00 [WebAssembly] Switch from RPO sorting to topological sorting.
WebAssembly doesn't require full RPO; topological sorting is sufficient and
can preserve more of the MachineBlockPlacement ordering. Unfortunately, this
still depends a lot on heuristics, because while we use the
MachineBlockPlacement ordering as a guide, we can't use it in places where
it isn't topologically ordered. This area will require further attention.

llvm-svn: 260978
2016-02-16 16:22:41 +00:00
Aaron Ballman fc64ef1a15 Reverting r260922-260923; they cause link failures with MSVC.
http://lab.llvm.org:8011/builders/lldb-x86-windows-msvc2015/builds/15436/steps/build/logs/stdio
http://bb.pgr.jp/builders/msbuild-llvmclang-x64-msc18-DA/builds/961/steps/build_llvm/logs/stdio

llvm-svn: 260972
2016-02-16 15:29:06 +00:00
Dan Gohman 8aa237c3ca [WebAssembly] Create new registers instead of reusing old ones in RegStackify.
This avoids some complications updating LiveIntervals to be aware of the new
register lifetimes, because we can just compute new intervals from scratch
rather than describe how the old ones have been changed.
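
The "create new registers" part is essentially the following pattern (illustrative fragment; LiveIntervals recomputation and the stackification bookkeeping are omitted):

    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"

    unsigned cloneDefIntoFreshReg(llvm::MachineInstr &Def,
                                  llvm::MachineRegisterInfo &MRI) {
      unsigned OldReg = Def.getOperand(0).getReg();
      const llvm::TargetRegisterClass *RC = MRI.getRegClass(OldReg);
      unsigned NewReg = MRI.createVirtualRegister(RC); // fresh vreg, fresh interval
      Def.getOperand(0).setReg(NewReg);
      return NewReg;
    }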

llvm-svn: 260971
2016-02-16 15:17:21 +00:00
Dan Gohman aa7429112e [WebAssembly] Implement support for custom NaN bit patterns.
llvm-svn: 260968
2016-02-16 15:14:23 +00:00
Andrey Turetskiy 1052ac2311 [X86] PR26575: Fix LEA optimization pass.
Add a missing check for the type of the address displacement operand of a load/store instruction that is a candidate for LEA substitution.
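
The missing guard amounts to a predicate of roughly this shape (illustrative; the operand indexing inside the real pass is more involved):

    #include "llvm/CodeGen/MachineOperand.h"

    // Only plain immediate displacements are safe candidates; globals,
    // constant-pool references, etc. must be left alone.
    bool isSafeLEADisplacement(const llvm::MachineOperand &Disp) {
      return Disp.isImm();
    }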

Ref: https://llvm.org/bugs/show_bug.cgi?id=26575

Differential Revision: http://reviews.llvm.org/D17261

llvm-svn: 260959
2016-02-16 12:47:45 +00:00