Commit Graph

40745 Commits

Author SHA1 Message Date
Sanjay Patel 4e1b5a53c7 [InstCombine] avoid infinite loop from shuffle-extract-insert sequence (PR30923)
Removing the limitation in visitInsertElementInst() causes several regressions
because we're not prepared to fold sequences of shuffles or inserts and extracts
separated by shuffles. Fixing that appears to be a difficult mission because we
are purposely trying to avoid creating shuffles with arbitrary shuffle masks
because some targets may choke on those.

https://llvm.org/bugs/show_bug.cgi?id=30923

llvm-svn: 286423
2016-11-10 00:15:14 +00:00
Peter Collingbourne 32ab3a817d Re-apply r286384, "X86: Introduce the "relocImm" ComplexPattern, which represents a relocatable immediate.", with a fix for 32-bit x86.
Teach X86InstrInfo::analyzeCompare() not to crash on CMP and SUB instructions
that take a global address operand.

llvm-svn: 286420
2016-11-09 23:53:43 +00:00
Dylan McKay 0d4778f841 [AVR] Add a selection of CodeGen tests
Summary: This adds all of the CodeGen tests which currently pass.

Reviewers: arsenm, kparzysz

Subscribers: japaric, wdng

Differential Revision: https://reviews.llvm.org/D26388

llvm-svn: 286418
2016-11-09 23:46:52 +00:00
Dylan McKay 3ffc449597 [AVR] Add all of the machine code test suite
Summary: This adds all of the AVR machine code tests.

Reviewers: arsenm, kparzysz

Subscribers: wdng, japaric

Differential Revision: https://reviews.llvm.org/D26387

llvm-svn: 286417
2016-11-09 23:46:25 +00:00
Tim Northover a9105be437 GlobalISel: translate invoke and landingpad instructions
Pretty bare-bones support for exception handling (no weird MSVC stuff, no SjLj
etc), but it should get things going.

llvm-svn: 286407
2016-11-09 22:39:54 +00:00
Dehao Chen 06e079a530 Update vectorization debug info unittest.
Summary:
This tests the change in r286159.
The idea behind the change: make the dbg location different between the loop header and the preheader/exit. Originally, dbg location !21 exists in 3 BBs: preheader, header, and the critical edge (exit). Update the debug location inside the loop header from !21 to !22 so that it reflects the correct location.

Reviewers: probinson

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D26428

llvm-svn: 286403
2016-11-09 22:25:19 +00:00
Sanjay Patel 600631daf3 [InstCombine] regenerate checks; NFC
llvm-svn: 286402
2016-11-09 22:21:58 +00:00
Sanjay Patel 16da6c466f [InstCombine] regenerate checks; NFC
llvm-svn: 286399
2016-11-09 21:41:34 +00:00
Krzysztof Parzyszek a540997ce4 [Hexagon] Separate Hexagon subreg indices for different register classes
For pairs of 32-bit registers: isub_lo, isub_hi.
For pairs of vector registers: vsub_lo, vsub_hi.

Add generic subreg indices: ps_sub_lo, ps_sub_hi, and a function
  HexagonRegisterInfo::getHexagonSubRegIndex(RegClass, GenericSubreg)
that returns the appropriate subreg index for RegClass.

llvm-svn: 286377
2016-11-09 16:19:08 +00:00
Krzysztof Parzyszek 601d7eb11a [Hexagon] Eliminate Insert4 pseudo-instruction, use combines instead
llvm-svn: 286368
2016-11-09 14:16:29 +00:00
Alexandros Lamprineas 0ee3ec2fe4 [ARM] Loop Strength Reduction crashes when targeting ARM or Thumb.
Scalar Evolution asserts when not all the operands of an Add Recurrence
Expression are loop invariants. Loop Strength Reduction should only
create affine Add Recurrences, so that both the start and the step of
the expression are loop invariants.

Differential Revision: https://reviews.llvm.org/D26185

llvm-svn: 286347
2016-11-09 08:53:07 +00:00
Craig Topper f334ac19ad [AVX-512] Add lowering to cvttpd2udq/cvttps2udq for fptoui v2f64/2f32 to 2i32
This patch adds support for fptoui to 2i32 from both 2f64 and 2f32, building on Simon's change for the signed version in r284459 and using AVX-512 instructions.

If we don't have VLX support we need to use a 512-bit operation for v2f64->v2i32 and extract the result.

It also recognises that cvttpd2udq zeroes the upper 64-bits of the xmm result.
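
As a hedged illustration, the basic IR pattern this lowering targets looks like the following (function name hypothetical); without VLX the conversion is widened to 512 bits and the low two elements are extracted:

define <2 x i32> @fptoui_2f64_to_2i32(<2 x double> %a) {
  %cvt = fptoui <2 x double> %a to <2 x i32>
  ret <2 x i32> %cvt
}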

Differential Revision: https://reviews.llvm.org/D26331

llvm-svn: 286345
2016-11-09 07:48:51 +00:00
Craig Topper 731bf9c5d6 [X86] Lower AVX512 and SSE intrinsics for CVTTPD2DQ to X86ISD::CVTTPD2DQ.
Summary: This allows the SSE intrinsic to use the EVEX instruction when available. It also fixes EVEX to not use a weird (v4i32 (fp_to_sint v2f64)) node and it merges some isel patterns. This also fixes some cases that weren't combining vzmovl with cvttpd2dq to remove extra moves.

Reviewers: delena, zvi, RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D26330

llvm-svn: 286344
2016-11-09 07:31:32 +00:00
Craig Topper ef1807fb73 [AVX-512] Add more varied alignments to tests for storing the lower 128-bits of a 256 or 512-bit subvector extract.
llvm-svn: 286343
2016-11-09 05:38:47 +00:00
Craig Topper 28e3dfc02b [AVX-512] Use alignedstore256 in patterns that look for stores of the lower 256-bits of a 512-bit vector to use a 256-bit aligned store.
Previously we were only checking for 16 byte alignment instead of 32 byte alignment. Fixes PR30947.

llvm-svn: 286342
2016-11-09 05:31:57 +00:00
Craig Topper abf5041537 [AVX-512] Add test cases to demonstrate PR30947. We accidentally use 32 byte aligned store instructions when the original store was only 16 byte aligned if the store is from the lower bits of a subvector extract.
llvm-svn: 286341
2016-11-09 05:31:53 +00:00
Craig Topper 5c842be9a0 [AVX-512] Make VBMI instruction set enabling imply that the BWI instruction set is also enabled.
Summary:
This is needed to make the v64i8 and v32i16 types legal for the 512-bit VBMI instructions. Fixes PR30912.

Reviewers: delena, zvi

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D26322

llvm-svn: 286339
2016-11-09 04:50:48 +00:00
Mehdi Amini b6a11a7879 Revert "[ThinLTO] Prevent exporting of locals used/defined in module level asm"
This reverts commit r286297.
Introduces a dependency from libAnalysis to libObject, which I missed
during the review.

llvm-svn: 286329
2016-11-09 01:45:13 +00:00
Dehao Chen 947dbe1254 Enable Loop Sink pass for functions that have profile data.
Summary: For functions with profile data, we are confident that the Loop Sink pass will sink code profitably.

Reviewers: davidxl, hfinkel

Subscribers: mehdi_amini, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D26155

llvm-svn: 286325
2016-11-09 00:58:19 +00:00
Peter Collingbourne 58f7f0759f Bitcode: Change the BitcodeReader to use llvm::Error internally.
Differential Revision: https://reviews.llvm.org/D26430

llvm-svn: 286323
2016-11-09 00:51:04 +00:00
Sanjay Patel e104554412 [ValueTracking] recognize obfuscated variants of umin/umax
The smallest tests that expose this are codegen tests (because SelectionDAGBuilder::visitSelect() uses matchSelectPattern
to create UMAX/UMIN nodes), but it's also possible to see the effects in IR alone with folds of min/max pairs.

If these were written as unsigned compares in IR, InstCombine canonicalizes the unsigned compares to signed compares. 
Ie, running the optimizer pessimizes the codegen for this case without this patch:

define <4 x i32> @umax_vec(<4 x i32> %x) {
  %cmp = icmp ugt <4 x i32> %x, <i32 2147483647, i32 2147483647, i32 2147483647, i32 2147483647>
  %sel = select <4 x i1> %cmp, <4 x i32> %x, <4 x i32> <i32 2147483647, i32 2147483647, i32 2147483647, i32 2147483647>
  ret <4 x i32> %sel
}

$ ./opt umax.ll -S | ./llc -o - -mattr=avx

vpmaxud LCPI0_0(%rip), %xmm0, %xmm0

$ ./opt -instcombine umax.ll -S | ./llc -o - -mattr=avx

vpxor %xmm1, %xmm1, %xmm1
vpcmpgtd  %xmm0, %xmm1, %xmm1
vmovaps LCPI0_0(%rip), %xmm2    ## xmm2 = [2147483647,2147483647,2147483647,2147483647]
vblendvps %xmm1, %xmm0, %xmm2, %xmm0

Differential Revision: https://reviews.llvm.org/D26096

llvm-svn: 286318
2016-11-09 00:24:44 +00:00
Sanjay Patel 4e9d6cd354 [InstCombine] fix profitability equation for max-of-nots transform
As the test change shows, we can increase the critical path by adding
a 'not' instruction, so make sure that we're actually removing an
instruction if we do this transform.

This transform could also cause us to miss folds of min/max pairs.
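
A rough IR sketch of the transform (value names hypothetical); it is only worthwhile when the 'not' values have no other uses, so the rewrite removes at least one instruction instead of adding the trailing 'not' to the critical path:

  %na   = xor i32 %a, -1
  %nb   = xor i32 %b, -1
  %cmp  = icmp sgt i32 %na, %nb
  %max  = select i1 %cmp, i32 %na, i32 %nb    ; smax(~a, ~b)
; becomes
  %cmp2 = icmp slt i32 %a, %b
  %min  = select i1 %cmp2, i32 %a, i32 %b     ; smin(a, b)
  %res  = xor i32 %min, -1                    ; ~smin(a, b) == smax(~a, ~b)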

llvm-svn: 286315
2016-11-09 00:13:11 +00:00
Adrian Prantl 3502f2089c Emit the DW_AT_type for a C++ static member definition
if it is more specific than the one in its DW_AT_specification.

If a static member is an array, the translation unit containing the
member definition may have a more specific type (including its length)
than TUs only seeing the class declaration. This patch adds a
DW_AT_type to the member's DW_TAG_variable in addition to the
DW_AT_specification in these cases. The member type in the
DW_AT_specification still shows the more generic type (without the
length) to avoid defeating type uniquing.

The DWARF standard discourages “duplicating” a DW_AT_type in a member
variable definition but doesn’t explicitly forbid it.  Having the more
specific type (with the array length) available is what allows the
debugger to print the contents of a static array member variable.

https://reviews.llvm.org/D26368
rdar://problem/28706946

llvm-svn: 286302
2016-11-08 22:11:38 +00:00
Teresa Johnson 6955feebf3 [ThinLTO] Prevent exporting of locals used/defined in module level asm
Summary:
This patch uses the same approach added for inline asm in r285513 to
similarly prevent promotion/renaming of locals used or defined in module
level asm.

All static global values defined in normal IR and used in module level asm
should be included on either the llvm.used or llvm.compiler.used global.
The former were already being flagged as NoRename in the summary, and
I've simply added llvm.compiler.used values to this handling.

Module level asm may also contain defs of values. We need to prevent
export of any refs to local values defined in module level asm (e.g. a
ref in normal IR), since that also requires renaming/promotion of the
local. To do that, the summary index builder looks at all values in the
module level asm string that are not marked Weak or Global, which is
exactly the set of locals that are defined. A summary is created for
each of these local defs and flagged as NoRename.

This required adding handling to the BitcodeWriter to look at GV
declarations to see if they have a summary (rather than skipping them
all).

Finally, added an assert to IRObjectFile::CollectAsmUndefinedRefs to
ensure that an MCAsmParser is available, otherwise the module asm parse
would silently fail. Initialized the asm parser in the opt tool for use
in testing this fix.

Fixes PR30610.

Reviewers: mehdi_amini

Subscribers: johanengelen, krasin, llvm-commits

Differential Revision: https://reviews.llvm.org/D26146

llvm-svn: 286297
2016-11-08 21:53:35 +00:00
Kuba Brecka a49dcbb743 [asan] Speed up compilation of large C++ stringmaps (tons of allocas) with ASan
This addresses PR30746, <https://llvm.org/bugs/show_bug.cgi?id=30746>. The ASan pass iterates over entry-block instructions and checks each alloca whether it's in NonInstrumentedStaticAllocaVec, which is apparently slow. This patch gathers the instructions to move during visitAllocaInst.

Differential Revision: https://reviews.llvm.org/D26380

llvm-svn: 286296
2016-11-08 21:30:41 +00:00
Andrew Kaylor 9604f34996 [BasicAA] Teach BasicAA to handle the inaccessiblememonly and inaccessiblemem_or_argmemonly attributes
Differential Revision: https://reviews.llvm.org/D26382

llvm-svn: 286294
2016-11-08 21:07:42 +00:00
Ulrich Weigand 05effca2d8 [SystemZ] Add missing FP extension instructions
This completes assembler / disassembler support for all BFP
instructions provided by the floating-point extensions facility.
The instructions added here are not currently used for codegen.

llvm-svn: 286285
2016-11-08 20:18:41 +00:00
Ulrich Weigand 4006e09d1d [SystemZ] Add program mask and addressing mode instructions
Add several instructions that operate on the program mask
or the addressing mode.  These are not really needed for
code generation under Linux, but are provided for completeness
for the assembler/disassembler.

llvm-svn: 286284
2016-11-08 20:17:02 +00:00
Ulrich Weigand fffc7110d6 [SystemZ] Model access registers as LLVM registers
Add the 16 access registers as LLVM registers.  This allows removing
a lot of special cases in the assembler and disassembler where we
were handling access registers; this can all just use the generic
register code now.

Also add a bunch of instructions to operate on access registers,
for assembler/disassembler use only.  No change in code generation
intended.

llvm-svn: 286283
2016-11-08 20:15:26 +00:00
Dan Gohman e81021a5cb [WebAssembly] Convert stackified IMPLICIT_DEF into constant 0.
Since IMPLICIT_DEF instructions are omitted in the output, when the output
of an IMPLICIT_DEF instruction is stackified, the resulting register lacks
an explicit push, leading to a push/pop mismatch. Fix this by converting
such IMPLICIT_DEFs into CONST_I32 0 instructions so that they have explicit
pushes.

llvm-svn: 286274
2016-11-08 19:40:38 +00:00
Davide Italiano 1e77aaca8a [LibcallsShrinkWrap] This pass doesn't preserve the CFG.
For example, it invalidates the domtree, causing assertions
in later passes which need dominator infos. Make it preserve
GlobalsAA, as suggested by Eli.

Differential Revision:  https://reviews.llvm.org/D26381

llvm-svn: 286271
2016-11-08 19:18:20 +00:00
Nirav Dave e833c6c61a [MC][AArch64] Cleanup end-of-line parsing in AArch64 AsmParser.
Reviewers: t.p.northover, rengolin

Subscribers: llvm-commits, aemerson

Differential Revision: https://reviews.llvm.org/D26309

llvm-svn: 286265
2016-11-08 18:31:04 +00:00
Ulrich Weigand d2148caffc [SystemZ] Refactor branch and conditional instruction patterns
Rework patterns for branches, call & return instructions,
compare-and-branch, compare-and-trap, and conditional move
instructions.

In particular, simplify creation of patterns for the extended
opcodes of instructions that take a CC mask.

Also, use semantical instruction classes for all the instructions
instead of open-coding them in SystemZInstrInfo.td.

Adds a couple of the basic branch instructions (that are unused
for codegen) for the assembler/disassembler.

llvm-svn: 286263
2016-11-08 18:30:50 +00:00
Sanjay Patel 8625c43662 [InstCombine] move min/max tests to min/max test file; NFC
llvm-svn: 286256
2016-11-08 18:12:19 +00:00
Sanjay Patel 686cf49f7a [InstCombine] update checks; NFC
llvm-svn: 286255
2016-11-08 18:06:14 +00:00
Tim Northover 5f7dea85c2 GlobalISel: support selecting fpext/fptrunc instructions on AArch64.
llvm-svn: 286253
2016-11-08 17:44:07 +00:00
Anton Korobeynikov 243a4700ce Fix PR27500: on MSP430 the branch destination offset is measured in words, not bytes.
Summary: In addition, the branch instructions will now have proper BB destinations rather than the offsets used before.

Reviewers: asl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23718

llvm-svn: 286252
2016-11-08 17:19:59 +00:00
Simon Pilgrim bdb3c38157 [X86][SSE] Regenerate test (just adds missing header)
llvm-svn: 286241
2016-11-08 15:42:49 +00:00
Simon Pilgrim 778596bf59 [TargetLowering] Fix undef vector element issue with true/false result handling
Fixed an issue with vector usage of TargetLowering::isConstTrueVal / TargetLowering::isConstFalseVal boolean result matching.

The comment said we shouldn't handle constant splat vectors with undef elements. But the actual code was returning false if the build vector contained no undef elements.

This patch now ignores the number of undefs (getConstantSplatNode will return null if the build vector is all undefs).

The change has also unearthed a couple of missed opportunities in AVX512 comparison code that will need to be addressed.

Differential Revision: https://reviews.llvm.org/D26031

llvm-svn: 286238
2016-11-08 15:07:01 +00:00
Pablo Barrio 9f45254138 [JumpThreading] Unfold selects that depend on the same condition
Summary:
These are good candidates for jump threading. This enables later opts
(such as InstCombine) to combine instructions from the selects with
instructions out of the selects. SimplifyCFG will fold the select
again if unfolding wasn't worth it.
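
A hedged sketch of the kind of unfolding described (IR and names hypothetical, not the pass's exact output): two selects sharing one condition become an explicit branch plus phis, which jump threading and later passes can then work with:

define i32 @before(i1 %cond, i32 %a, i32 %b, i32 %c, i32 %d) {
  %s1 = select i1 %cond, i32 %a, i32 %b
  %s2 = select i1 %cond, i32 %c, i32 %d
  %r  = add i32 %s1, %s2
  ret i32 %r
}

define i32 @after(i1 %cond, i32 %a, i32 %b, i32 %c, i32 %d) {
entry:
  br i1 %cond, label %then, label %merge
then:
  br label %merge
merge:
  %s1 = phi i32 [ %a, %then ], [ %b, %entry ]
  %s2 = phi i32 [ %c, %then ], [ %d, %entry ]
  %r  = add i32 %s1, %s2
  ret i32 %r
}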

Patch by James Molloy and Pablo Barrio.

Reviewers: rengolin, haicheng, sebpop

Subscribers: jojo, jmolloy, llvm-commits

Differential Revision: https://reviews.llvm.org/D26391

llvm-svn: 286236
2016-11-08 14:53:30 +00:00
Simon Pilgrim d02c55204b [VectorLegalizer] Expansion of CTLZ using CTPOP when possible
This patch avoids scalarization of CTLZ by instead expanding to use CTPOP (ref: "Hacker's Delight") when the necessary operations are available.
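
A hedged IR sketch of the CTPOP-based expansion for a v4i32 input (the classic "smear the leading bit, then popcount the complement" recipe; value names hypothetical):

define <4 x i32> @ctlz_via_ctpop(<4 x i32> %x) {
  %s1  = lshr <4 x i32> %x, <i32 1, i32 1, i32 1, i32 1>
  %o1  = or <4 x i32> %x, %s1
  %s2  = lshr <4 x i32> %o1, <i32 2, i32 2, i32 2, i32 2>
  %o2  = or <4 x i32> %o1, %s2
  %s4  = lshr <4 x i32> %o2, <i32 4, i32 4, i32 4, i32 4>
  %o4  = or <4 x i32> %o2, %s4
  %s8  = lshr <4 x i32> %o4, <i32 8, i32 8, i32 8, i32 8>
  %o8  = or <4 x i32> %o4, %s8
  %s16 = lshr <4 x i32> %o8, <i32 16, i32 16, i32 16, i32 16>
  %all = or <4 x i32> %o8, %s16
  %inv = xor <4 x i32> %all, <i32 -1, i32 -1, i32 -1, i32 -1>
  %res = call <4 x i32> @llvm.ctpop.v4i32(<4 x i32> %inv)    ; ctlz(x) == ctpop(~smear(x))
  ret <4 x i32> %res
}
declare <4 x i32> @llvm.ctpop.v4i32(<4 x i32>)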

This also adds the necessary cost models for X86 SSE2 targets (the main beneficiary) to ensure vectorization only happens when it's useful.

Differential Revision: https://reviews.llvm.org/D25910

llvm-svn: 286233
2016-11-08 14:10:28 +00:00
Roger Ferrer Ibanez 80c0f33c29 [AArch64] Fix incorrect CSEL node created
Under -enable-unsafe-fp-math, SELECT_CC lowering in AArch64
transforms floating point comparisons of the form "a == 0.0 ? 0.0 : x" to
"a == 0.0 ? a : x". But it incorrectly assumes that 'x' and 'a' have
the same type which can lead to a wrong CSEL node that crashes later
due to nonsensical copies.

Differential Revision: https://reviews.llvm.org/D26394

llvm-svn: 286231
2016-11-08 13:34:41 +00:00
Simon Dardis e7cc54058d [mips] Re-enable small data section test.
llvm-svn: 286230
2016-11-08 13:03:45 +00:00
Craig Topper c6a0339fb0 [AVX-512] Add an avx512f without avx512vl command line to vec_fp_to_int.ll and regenerate. This will make a change in a future patch easier to see. NFC
llvm-svn: 286216
2016-11-08 06:58:53 +00:00
Tim Northover 9ac0eba672 GlobalISel: support selecting G_SELECT on AArch64.
llvm-svn: 286185
2016-11-08 00:45:29 +00:00
Tim Northover 7d88da6a46 GlobalISel: constrain PHI registers on AArch64.
Self-referencing PHI nodes need their destination operands to be constrained
because nothing else is likely to do so. For now we just pick a register class
naively.

Patch mostly by Ahmed again.

llvm-svn: 286183
2016-11-08 00:34:06 +00:00
Chad Rosier 583a307e17 [AArch64] Remove dead check prefixes after r286110. NFC.
llvm-svn: 286174
2016-11-07 23:13:59 +00:00
Chad Rosier d8447a7d30 [AArch64] Rename test to reflect changes after r286110. NFC.
llvm-svn: 286173
2016-11-07 23:13:55 +00:00
Stanislav Mekhanoshin 92e01ee90b [AMDGPU] Allow hoisting of comparisons out of a loop and eliminate condition copies
Codegen prepare sinks comparisons close to their users if we have only one register
for conditions. For AMDGPU we have many SGPRs capable of holding vector conditions.
Changed the backend to report that we have many condition registers. That way the
IR LICM pass will hoist an invariant comparison out of a loop and codegen prepare
will not sink it.

With that done, a condition is calculated in one block and used in another.
The previous behavior was to store a workitem's condition in a VGPR using v_cndmask
and then restore it with yet another v_cmp instruction from that v_cndmask's
result. To mitigate this, forward propagation of a v_cmp 64-bit result to its
users is implemented. An additional side effect is that we may consume fewer
VGPRs at the cost of more SGPRs when multiple conditions need to be held, which
is a clear win in most cases.

llvm-svn: 286171
2016-11-07 23:04:50 +00:00
Adam Nemet b103fc52d3 [OptDiag, opt-viewer] Save callee's location and display as link
With this we get a new field in the YAML record if the value being
streamed out has a debug location.  For examples, please see the changes
to the tests.

This is then used in opt-viewer to display a link for the callee
function in the inlining remarks.

Differential Revision: https://reviews.llvm.org/D26366

llvm-svn: 286169
2016-11-07 22:41:13 +00:00
Sanjin Sijaric 6f020d91a1 [AArch64] Transfer memory operands when lowering vector load/store intrinsics
Summary:
Some vector loads and stores generated from AArch64 intrinsics alias each other
unnecessarily, preventing better scheduling.  We just need to transfer memory
operands during lowering.

Reviewers: mcrosier, t.p.northover, jmolloy

Subscribers: aemerson, rengolin, llvm-commits

Differential Revision: https://reviews.llvm.org/D26313

llvm-svn: 286168
2016-11-07 22:39:02 +00:00
Derek Schuff 0d41b7b3f3 [WebAssembly] Emit a BasePointer when we have overly-aligned stack objects
Because we shift the stack pointer by an unknown amount, we need an
additional pointer. In the case where we have variable-size objects
as well, we can't reuse the frame pointer, thus three pointers.

Patch by Jacob Gravelle

Differential Revision: https://reviews.llvm.org/D26263

llvm-svn: 286160
2016-11-07 22:00:48 +00:00
Sanjoy Das e06ef141fc Avoid tail recursion elimination across calls with operand bundles
Summary:
In some specific scenarios with well understood operand bundle types
(like `"deopt"`) it may be possible to go ahead and convert recursion to
iteration, but TailRecursionElimination does not have that logic today
so avoid doing the right thing for now.

I need some input on whether `"funclet"` operand bundles should also
block tail recursion elimination.  If not, I'll allow TRE across calls
with `"funclet"` operand bundles and add a test case.

Reviewers: rnk, majnemer, nlewycky, ahatanak

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D26270

llvm-svn: 286147
2016-11-07 21:01:49 +00:00
Kuba Brecka 44e875ad5b [tsan] Cast floating-point types correctly when instrumenting atomic accesses, LLVM part
Although rare, atomic accesses to floating-point types seem to be valid, i.e. `%a = load atomic float ...`. The TSan instrumentation pass, however, tries to emit an inttoptr, which is incorrect; we should use a bitcast here. Anyway, IRBuilder already has a convenient helper function for this.
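
A hedged IR sketch of the intended instrumentation (the runtime call and its signature are approximate): the float pointer gets a pointer bitcast instead of a ptrtoint/inttoptr round trip, and the loaded integer is bitcast back to float:

  ; original access
  %a  = load atomic float, float* %fp seq_cst, align 4
  ; instrumented form (sketch): reinterpret the pointer, call the runtime, reinterpret the value
  %ip = bitcast float* %fp to i32*
  %iv = call i32 @__tsan_atomic32_load(i32* %ip, i32 5)   ; 5 ~ seq_cst memory order
  %af = bitcast i32 %iv to float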

Differential Revision: https://reviews.llvm.org/D26266

llvm-svn: 286135
2016-11-07 19:09:56 +00:00
Matt Arsenault f530e8b3f0 AMDGPU: Remove unnecessary and on conditional branch
The comment explaining why this was necessary is incorrect
in its description of v_cmp's behavior for inactive workitems.

llvm-svn: 286134
2016-11-07 19:09:33 +00:00
Matt Arsenault 52f14ec596 AMDGPU: Preserve vcc undef flags when inverting branch
If the branch was on a read-undef of vcc, passes that used
analyzeBranch to invert the branch condition wouldn't preserve
the undef flag resulting in a verifier error.

Fixes verifier failures in a future commit.

Also fix verifier error when inserting copy for vccz
corruption bug.

llvm-svn: 286133
2016-11-07 19:09:27 +00:00
Benjamin Kramer 1697d39eef [MemCpyOpt] Don't emit IR in an unspecified order
Argument evaluation order is one of the edge cases where Clang differs
from GCC, yielding different IR depending on which compiler LLVM was
built with. Make the order deterministic and tune the test to actually
verify the order instead of trying to hide it.

llvm-svn: 286126
2016-11-07 17:47:28 +00:00
Richard Smith 857efb0880 Add -O0 support for @llvm.invariant.group.barrier by discarding it if it gets to ISel.
Differential Revision: https://reviews.llvm.org/D26292

llvm-svn: 286119
2016-11-07 16:47:20 +00:00
Sanjay Patel 86408a8048 [InstCombine] allow splat vector folds in adjustMinMax() (retry r285732)
This was reverted at r285866 because there was a crash handling a scalar
select of vectors. I added a check for that pattern and a test case based
on the example provided in the post-commit thread for r285732.
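
The crashing shape was, roughly, a select with a scalar i1 condition but vector operands (a hedged sketch, names hypothetical):

  ; scalar condition selecting whole vectors -- previously tripped the splat-fold code
  %sel = select i1 %cond, <8 x i16> %v1, <8 x i16> %v2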

llvm-svn: 286113
2016-11-07 15:52:45 +00:00
Amara Emerson 614b44bbe9 This patch adds support for 16 bit floating point registers to the inline asm register selection on AArch64.
Without this patch, register allocation for the example below fails.

define half @test(half %a1, half %a2) #0 {
entry:
  %0 = tail call half asm "sqrshl ${0:h}, ${1:h}, ${2:h}", "=w,w,w" (half %a1, half %a2) #1
  ret half %0
}

Patch by Florian Hahn.

Differential Revision: https://reviews.llvm.org/D25080

llvm-svn: 286111
2016-11-07 15:42:12 +00:00
Chad Rosier d6daac4746 [AArch64] Removed the narrow load merging code in the ld/st optimizer.
This feature has been disabled for some time now, so remove cruft.

Differential Revision: https://reviews.llvm.org/D26248

llvm-svn: 286110
2016-11-07 15:27:22 +00:00
Chad Rosier 8f348017b0 [AliasSetTracker] Make AST smarter about assume intrinsics that don't actually affect memory.
Differential Revision: https://reviews.llvm.org/D26252

llvm-svn: 286108
2016-11-07 14:11:45 +00:00
James Molloy b03e0879fc [Thumb1] Move padding earlier when synthesizing TBBs off of the PC
When the base register (register pointing to the jump table) is the PC, we expect the jump table to directly follow the jump sequence with no intervening padding.

If there is intervening padding, the calculated offsets will not be correct. One solution would be to account for any padding in the emitted LDRB instruction, but at the moment we don't support emitting MCExprs for the load offset.

In the meantime, it's correct and only a slight amount worse to just move the padding up, from just before the jump table to just before the jump instruction sequence. We can do that by emitting code alignment before the jump sequence, as we know the number of instructions in the sequence is always 4.

llvm-svn: 286107
2016-11-07 13:38:21 +00:00
Simon Pilgrim b56c731f18 [X86][AVX512] Add AVX512VL/AVX512BWVL vector truncation tests
llvm-svn: 286105
2016-11-07 13:34:29 +00:00
Simon Pilgrim 02666ac9c3 [X86][SSE] Drop unnecessary -mcpu argument from trunc tests
cpu/triple duplication

llvm-svn: 286104
2016-11-07 13:28:20 +00:00
Craig Topper b110e04851 [AVX-512] Remove masked pmovzx/pmovsx builtins and autoupgrade them to selects and native zext/sext.
This mostly reuses earlier autoupgrade support for the sse and avx equivalents. Just needed to add the code to add the select.

llvm-svn: 286092
2016-11-07 02:12:57 +00:00
Craig Topper 7e545335d6 [AVX-512] Remove 128/256 masked pshufb intrinsics. Autoupgrade them to legacy intrinsics and a select.
llvm-svn: 286089
2016-11-07 00:13:39 +00:00
Saleem Abdulrasool 804e12eeb5 ARM: lower fpowi appropriately for Windows ARM
This handles the last case of the builtin function calls that we would
generate code which differed from Microsoft's ABI.  Rather than
generating a call to `__pow{d,s}i2` we now promote the parameter to a
float or double and invoke `powf` or `pow` instead.
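
A hedged IR-level sketch of the new lowering (names hypothetical):

  %r  = call double @llvm.powi.f64(double %x, i32 %n)
  ; is now expanded for Windows on ARM roughly as:
  %nd = sitofp i32 %n to double
  %r2 = call double @pow(double %x, double %nd)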

Addresses PR30825!

llvm-svn: 286082
2016-11-06 19:46:54 +00:00
Simon Pilgrim 39df78e384 [SelectionDAG] Add support for vector demandedelts in XOR opcodes
llvm-svn: 286075
2016-11-06 16:49:19 +00:00
Simon Pilgrim 3ac353cb51 [X86] Add knownbits vector xor test
In preparation for demandedelts support

llvm-svn: 286074
2016-11-06 16:36:29 +00:00
Craig Topper 46de41330c [AVX-512] Remove intrinsics for 128/256-bit masked variable shift. Instead upgrade them to a select and the older AVX2 intrinsic.
llvm-svn: 286073
2016-11-06 16:29:19 +00:00
Craig Topper af9b3fe752 [AVX-512] Remove intrinsics for 128/256-bit masked shift by immediate. Instead upgrade them to a select and the older SSE/AVX2 intrinsic.
llvm-svn: 286072
2016-11-06 16:29:14 +00:00
Simon Pilgrim dd4809a603 [SelectionDAG] Add support for vector demandedelts in OR opcodes
llvm-svn: 286071
2016-11-06 16:29:09 +00:00
Craig Topper c9467ed31e [AVX-512] Remove intrinsics for 128/256-bit masked shift by single element in xmm. Instead upgrade them to a select and the older SSE/AVX2 intrinsic.
llvm-svn: 286070
2016-11-06 16:29:08 +00:00
Craig Topper 1b468b4e3a [AVX-512] Remove a 512-bit test cases from the avx512vl test file. It already exists in the avx512f test file.
llvm-svn: 286069
2016-11-06 16:29:03 +00:00
Simon Pilgrim c104185580 [X86] Add knownbits vector or test
In preparation for demandedelts support

llvm-svn: 286068
2016-11-06 16:05:59 +00:00
Craig Topper 6b3e7b47d8 [X86] Add a few more fptoui test cases to the vec_fp_to_int.ll. The codegen for these test cases will be improved for AVX512 in a future commit.
llvm-svn: 286063
2016-11-06 07:50:25 +00:00
Craig Topper 5471fc29e4 [AVX-512] Add missing EVEX version of pattern for (v2f64 (extloadv2f32 addr:)) -> VCVTPS2PDZ128rm
llvm-svn: 286059
2016-11-06 04:12:52 +00:00
Craig Topper bd156195b0 [AVX-512] Add avx512vl command line to the fpext test and add -show-mc-encoding to show where we aren't using EVEX instructions.
llvm-svn: 286058
2016-11-06 04:12:49 +00:00
Craig Topper 1162857ec4 [AVX-512] Lower AVX cvtpd2ps intrinsic to ISD::FP_ROUND so it can use EVEX instruction when available.
llvm-svn: 286057
2016-11-06 04:12:46 +00:00
Craig Topper 9a4a3af5dd [AVX-512] Lower SSE/AVX cvtdq2ps intrinsics directly to ISD::SINT_TO_FP so they can use EVEX instructions when available.
llvm-svn: 286056
2016-11-06 04:12:42 +00:00
Craig Topper a4a51f1afe [AVX-512] Add -show-mc-encoding to legacy vector intrinsic tests so we can see when VEX or EVEX encoded instructions are being emitted. Make sure the tests all have an avx2 command line and an skx command line.
llvm-svn: 286055
2016-11-06 02:03:58 +00:00
Justin Lebar 54b0be048e [LoopStrengthReduce] Don't use a DenseSet<int64_t> when we might add any valid int64_t to the set.
Summary:
SmallSetVector uses DenseSet, but that means we need to reserve some
values for the empty and tombstone keys.

It seems to me we should have a general way to let us store full-range
ints inside of DenseSets, and furthermore that we probably shouldn't
silently let you add ints into DenseSets without explicitly promising
that they're in range.  But that's a battle for another day; for now,
just fix this code, since we currently do something Very Bad when
compiling ffmpeg.

Fixes PR30914.

Reviewers: jeremyhu

Subscribers: llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D26323

llvm-svn: 286038
2016-11-05 16:47:25 +00:00
Krzysztof Parzyszek b7eb7fc892 [Hexagon] Account for <def,read-undef> when validating moves for predication
llvm-svn: 286009
2016-11-04 20:41:03 +00:00
Weiming Zhao 6100118a52 Fix 24560: assembler does not share constant pool for same constants
Summary: This patch returns the same label if a CP entry with the same value has already been created.

Reviewers: eli.friedman, rengolin, jmolloy

Subscribers: majnemer, jmolloy, llvm-commits

Differential Revision: https://reviews.llvm.org/D25804

llvm-svn: 286006
2016-11-04 19:17:32 +00:00
NAKAMURA Takumi b4eef1fa4a llvm/test/Transforms/DCE/calls-errno.ll: Suppress checking @pow(+0,-1).
It depends on the host's pow(3), and mingw's pow doesn't raise any errors; it just returns +INF.

llvm-svn: 286005
2016-11-04 18:50:45 +00:00
Zvi Rackover 85bc64c734 [X86] Broadcast from memory instructions aren't unfoldable
Broadcast from memory instructions should be treated as moves. They can't be unfolded.

Fixes pr30693.

llvm-svn: 285998
2016-11-04 15:15:19 +00:00
Zvi Rackover 1522b33195 Add bugpoint-reduced reproducer for pr30693
llvm-svn: 285997
2016-11-04 14:53:22 +00:00
Tom Stellard 2d2d33f1dc Revert "AMDGPU: Add VI i16 support"
This reverts commit r285939 and r285948.  These broke some conformance tests.

llvm-svn: 285995
2016-11-04 13:06:34 +00:00
Weiming Zhao 962eaaea9c [Cortex-M0] Atomic lowering
Summary: ARMv6-M supports fence instructions such as dmb, but not ldrex/strex. So for some atomic loads/stores, LLVM should emit inline instructions instead of lowering to __sync_ calls.

Reviewers: rengolin, efriedma, t.p.northover, jmolloy

Subscribers: efriedma, aemerson, llvm-commits

Differential Revision: https://reviews.llvm.org/D26120

llvm-svn: 285969
2016-11-03 21:49:08 +00:00
Kevin Enderby 7747cb55dc Add support for the ARM_THREAD_STATE64 and
in llvm-objdump for Mach-O files add the printing of the
ARM_THREAD_STATE64 in the same format as
otool-classic(1) on darwin.

To do this the 64-bit ARM general thread state
needed to be defined in include/llvm/Support/MachO.h .

rdar://28985800

llvm-svn: 285967
2016-11-03 20:51:28 +00:00
Adrian Prantl dbfda63695 Add DWARF debug info support for C++11 inline namespaces.
This implements the DWARF 5 DW_AT_export_symbols feature:
http://dwarfstd.org/ShowIssue.php?issue=141212.1

<rdar://problem/18616046>

llvm-svn: 285959
2016-11-03 19:42:02 +00:00
Rafael Espindola ed1395a792 Add error handling to getEntry.
Issue found by inspection.

llvm-svn: 285951
2016-11-03 18:05:33 +00:00
Rafael Espindola 6a4949756a Replace a report_fatal_error with an ErrorOr.
llvm-svn: 285942
2016-11-03 17:28:33 +00:00
Tom Stellard 2b3379cdff AMDGPU: Add VI i16 support
Patch By: Wei Ding

Differential Revision: https://reviews.llvm.org/D18049

llvm-svn: 285939
2016-11-03 17:13:50 +00:00
Davide Italiano a1f241d1c3 Make this test Windows-only (try to placate buildbots).
llvm-svn: 285931
2016-11-03 16:43:10 +00:00
Alexander Timofeev f867a40bf6 [AMDGPU][CodeGen] To improve CGEMM performance: combine LDS reads.
The change exploits the fact that LDS reads may be reordered even if they
access the same location.

Prior to the change, the algorithm stopped as soon as any memory access was
encountered between loads that were expected to be merged together, even
though a read-after-read conflict cannot affect execution correctness.

Improves the performance of hcBLAS CGEMM manually loop-unrolled kernels by 44%.
An improvement is also expected for any long sequence of reads from LDS.

Differential Revision: https://reviews.llvm.org/D25944

llvm-svn: 285919
2016-11-03 14:37:13 +00:00
James Molloy e7d97368f2 Revert "[Thumb] Teach ISel how to lower compares of AND bitmasks efficiently"
This reverts commit r285893. It caused (probably) http://lab.llvm.org:8011/builders/clang-cmake-thumbv7-a15-full-sh/builds/83 .

llvm-svn: 285912
2016-11-03 14:08:01 +00:00
Rafael Espindola 7b2750afa5 replace a report_fatal_error with an ErrorOr.
llvm-svn: 285910
2016-11-03 13:58:15 +00:00
James Molloy b60d8b1987 [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently
This recommits r281323, which was backed out for two reasons. One, a selfhost failure, and two, it apparently caused Chromium failures. Actually, the latter was a red herring. The log has expired from the former, but I suspect that was a red herring too (actually caused by another problematic patch of mine). Therefore reapplying, and will watch the bots like a hawk.

For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some more efficient instruction selection if the bitmask is one consecutive sequence of set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).

1) If the bitmask touches the LSB, then we can remove all the upper bits and set the flags by doing one LSLS.
2) If the bitmask touches the MSB, then we can remove all the lower bits and set the flags with one LSRS.
3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit into the sign bit with one LSLS and change the condition query from NE/EQ to MI/PL (we could also implement this by shifting into the carry bit and branching on BCC/BCS).
4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower zero bits of the mask.

1-3 require only one 16-bit instruction and can elide the CMP. 4 requires two 16-bit instructions but can elide the CMP and doesn't require materializing a complex immediate, so is also a win.

llvm-svn: 285893
2016-11-03 10:18:20 +00:00
Craig Topper 7b9cc1474f [AVX-512] Use 'vnot' instead of 'not' in patterns involving vXi1 vectors.
This fixes selection of KANDN instructions and allows us to remove an extra set of patterns for KNOT and KXNOR.

Reviewers: delena, igorb

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D26134

llvm-svn: 285878
2016-11-03 06:04:28 +00:00
Elena Demikhovsky caaceef4b3 Expandload and Compressstore intrinsics
2 new intrinsics covering AVX-512 compress/expand functionality.
This implementation includes syntax, DAG builder, operation lowering and tests.
Does not include: handling of illegal data types, codegen prepare pass and the cost model.
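
A hedged IR sketch of how the two intrinsics are used (signatures approximate):

declare <8 x i32> @llvm.masked.expandload.v8i32(i32*, <8 x i1>, <8 x i32>)
declare void @llvm.masked.compressstore.v8i32(<8 x i32>, i32*, <8 x i1>)

  ; gather consecutive elements from %p into the lanes enabled by %mask
  %v = call <8 x i32> @llvm.masked.expandload.v8i32(i32* %p, <8 x i1> %mask, <8 x i32> %passthru)
  ; store only the enabled lanes of %v to consecutive locations starting at %p
  call void @llvm.masked.compressstore.v8i32(<8 x i32> %v, i32* %p, <8 x i1> %mask)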

llvm-svn: 285876
2016-11-03 03:23:55 +00:00
Teresa Johnson 0515fb8d4b [ThinLTO] Handle distributed backend case when doing renaming
Summary:
The recent change I made to consult the summary when deciding whether to
rename (to handle inline asm) in r285513 broke the distributed build
case. In a distributed backend we will only have a portion of the
combined index, specifically for imported modules we only have the
summaries for any imported definitions. When renaming on import we were
asserting because no summary entry was found for a local reference being
linked in (def wasn't imported).

We only need to consult the summary for a renaming decision for the
exporting module. For imports, we would have prevented importing any
references to NoRename values already.

Reviewers: mehdi_amini

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D26250

llvm-svn: 285871
2016-11-03 01:07:16 +00:00
Greg Bedwell 5fc6f94591 Revert "[InstCombine] allow splat vector folds in adjustMinMax()"
This reverts commit r285732.

This change introduced a new assertion failure in the following
testcase at -O2:

typedef short __v8hi __attribute__((__vector_size__(16)));
__v8hi foo(__v8hi &V1, __v8hi &V2, unsigned mask) {
  __v8hi Result = V1;
  if (mask & 0x80)
    Result[0] = V2[0];
  return Result;
}

llvm-svn: 285866
2016-11-02 23:17:05 +00:00
Adrian McCarthy 4333daab1c Emit S_COMPILE3 record once per TU rather than once per function
This has some ripple effects in several tests.

llvm-svn: 285862
2016-11-02 21:30:35 +00:00
Kevin Enderby fbebe1632a Add the rest of the additional error checks for invalid Mach-O files when
the offsets and sizes of an element of the Mach-O file overlap with
another element in the Mach-O file.

Some other tests for malformed Mach-O files now run into these
checks so their tests were also adjusted.

llvm-svn: 285860
2016-11-02 21:08:39 +00:00
Davide Italiano 807c699bbb [RuntimeDyld] Move an X86 only test to the correct directory.
This is an attempt to placate the bots after r285841.

llvm-svn: 285859
2016-11-02 21:05:42 +00:00
Eli Friedman b6befc3bc4 DCE math library calls with a constant operand.
On platforms which use -fmath-errno, math libcalls without any uses
require some extra checks to figure out if they are actually dead.
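
A hedged IR illustration (constants hypothetical): both calls below are otherwise dead, but under -fmath-errno only the first can be deleted, because pow(+0, -1) sets errno:

  %dead = call double @pow(double 2.0, double 0.5)    ; result unused, cannot set errno -> removable
  %keep = call double @pow(double 0.0, double -1.0)   ; result unused, but sets errno -> must stay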

Fixes https://llvm.org/bugs/show_bug.cgi?id=30464 .

Differential Revision: https://reviews.llvm.org/D25970

llvm-svn: 285857
2016-11-02 20:48:11 +00:00
Vedant Kumar 5a0e92b04c [llvm-cov] Turn line numbers in html reports into clickable links
llvm-svn: 285853
2016-11-02 19:44:13 +00:00
Krzysztof Parzyszek ead77016d8 [Hexagon] Remove registers coalesced in expand-condsets from live intervals
llvm-svn: 285846
2016-11-02 17:59:54 +00:00
Artem Tamazov e8bb4bcafc [AMDGPU][mc] Improve test of special asm symbols.
Test simplified. Coverage extended.

Differential Revision: https://reviews.llvm.org/D26198

llvm-svn: 285844
2016-11-02 17:45:58 +00:00
Davide Italiano 6b2bba14a9 [lli/COFF] Set the correct alignment for common symbols
Otherwise we always set it to zero, which is not correct,
and we assert inside alignTo (Assertion failed:
Align != 0u && "Align can't be 0.").

Differential Revision:  https://reviews.llvm.org/D26173

llvm-svn: 285841
2016-11-02 17:32:19 +00:00
Matt Arsenault bf9ee26aea AMDGPU: Cleanup some xfailed tests
Some of these are already fixed or tested somewhere else.

llvm-svn: 285840
2016-11-02 17:24:54 +00:00
Zachary Turner 7251ede7c5 Add CodeViewRecordIO for reading and writing.
Using a pattern similar to that of YamlIO, this allows
us to have a single codepath for translating codeview
records to and from serialized byte streams.  The
current patch only hooks this up to the reading of
CodeView type records.  A subsequent patch will hook
it up for writing of CodeView type records, and then a
third patch will hook up the reading and writing of
CodeView symbols.

Differential Revision: https://reviews.llvm.org/D26040

llvm-svn: 285836
2016-11-02 17:05:19 +00:00
Nicolai Haehnle 368972c3b3 AMDGPU: Allow additional implicit operands on MOVRELS instructions
Summary:
The post-RA scheduler occasionally uses additional implicit operands when
the vector implicit operand as a whole is killed, but some subregisters
are still live because they are directly referenced later. Unfortunately,
this seems incredibly subtle to reproduce.

Fixes piglit spec/glsl-110/execution/variable-indexing/vs-temp-array-mat2-index-wr.shader_test
and others.

Reviewers: arsenm, tstellarAMD

Subscribers: kzhuravl, wdng, yaxunl, tony-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D25656

llvm-svn: 285835
2016-11-02 17:03:11 +00:00
Nirav Dave 0a392a8e7f [ARM][MC] Cleanup ARM Target Assembly Parser
Summary:
Correctly parse end-of-statement tokens and handle preprocessor
end-of-line comments in ARM assembly processor.

Reviewers: rnk, majnemer

Subscribers: aemerson, rengolin, llvm-commits

Differential Revision: https://reviews.llvm.org/D26152

llvm-svn: 285830
2016-11-02 16:22:51 +00:00
Matt Arsenault 44deb7914e BranchRelaxation: Fix computing indirect branch block size
llvm-svn: 285828
2016-11-02 16:18:29 +00:00
Adrian Prantl 56527dc804 Emit DW_OP_piece also if the previous value was a constant.
This fixes a bug in the DWARF backend.

llvm-svn: 285826
2016-11-02 16:12:16 +00:00
Rafael Espindola 25be8c8856 Avoid a report_fatal_error in sections().
Have it return an ErrorOr<Range> and delete section_begin and
section_end.

llvm-svn: 285807
2016-11-02 14:10:57 +00:00
Bjorn Pettersson 7424c8ccd1 [Reassociate] Skip analysis of dead code to avoid infinite loop.
Summary:
It was detected that the reassociate pass could enter an infinite
loop when analysing dead code. Simply skipping the analysis of basic
blocks that are dead avoids such problems (and as a side effect
we avoid spending time on optimising dead code).

The solution is using the same Reverse Post Order ordering of the
basic blocks when doing the optimisations, as when building the
precalculated rank map. A nice side-effect of this solution is
that we now know that we only try to do optimisations for blocks
with ranked instructions.

Fixes https://llvm.org/bugs/show_bug.cgi?id=30818

Reviewers: llvm-commits, davide, eli.friedman, mehdi_amini

Subscribers: dberlin

Differential Revision: https://reviews.llvm.org/D26154

llvm-svn: 285793
2016-11-02 08:55:19 +00:00
Peter Collingbourne ff2c2ec6b2 Bitcode: Check file size before reading bitcode header.
Should unbreak ocaml binding tests.

Also added an llvm-dis test that checks for the same thing.

llvm-svn: 285777
2016-11-02 00:39:11 +00:00
Peter Collingbourne 028eb5a3f8 Bitcode: Change reader interface to take memory buffers.
As proposed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2016-October/106595.html

This change also fixes an API oddity where BitstreamCursor::Read() would
return zero for the first read past the end of the bitstream, but would
report_fatal_error for subsequent reads. Now we always report_fatal_error
for all reads past the end. Updated clients to check for the end of the
bitstream before reading from it.

I also needed to add padding to the invalid bitcode tests in
test/Bitcode/. This is because the streaming interface was not checking that
the file size is a multiple of 4.

Differential Revision: https://reviews.llvm.org/D26219

llvm-svn: 285773
2016-11-02 00:08:19 +00:00
Matt Arsenault 663ab8c119 AMDGPU: Use brev for materializing SGPR constants
This is already done with VGPR immediates and saves 4 bytes.

llvm-svn: 285765
2016-11-01 23:14:20 +00:00
Matt Arsenault 3d463193a9 AMDGPU: Default to using scalar mov to materialize immediate
This is the conservatively correct way because it's easy to
move or replace a scalar immediate. This was incorrect in the case
when the register class wasn't known from the static instruction
definition, but still needed to be an SGPR. The main example of this
is inline asm with an SGPR constraint.

Also start verifying the register classes of inlineasm operands.

llvm-svn: 285762
2016-11-01 22:55:07 +00:00
Rafael Espindola 7909e22c7c Don't compute DotShstrtab eagerly.
This saves a field that is not always used. It also avoids failing a
program that doesn't need the section names.

llvm-svn: 285753
2016-11-01 21:33:55 +00:00
Rafael Espindola 120dca3b63 Use the existing std::error_code out parameter.
This avoids calling exit with a partially constructed object.

llvm-svn: 285738
2016-11-01 20:24:22 +00:00
Sanjay Patel c3d89842ad [InstCombine] allow splat vector folds in adjustMinMax()
llvm-svn: 285732
2016-11-01 20:08:02 +00:00
Sanjay Patel c0339c77ef [InstCombine] Fold nuw left-shifts in `ugt`/`ule` comparisons.
This transforms

%a = shl nuw %x, c1
%b = icmp {ugt|ule} %a, c0

into

%b = icmp {ugt|ule} %x, (c0 >> c1)

z3:

(declare-const x (_ BitVec 64))
(declare-const c0 (_ BitVec 64))
(declare-const c1 (_ BitVec 64))

(push)
(assert (= x (bvlshr (bvshl x c1) c1)))  ; nuw
(assert (not (= (bvugt (bvshl x c1) c0)
                (bvugt x
                       (bvlshr c0 c1)))))
(check-sat)
(get-model)
(pop)

(push)
(assert (= x (bvlshr (bvshl x c1) c1)))  ; nuw
(assert (not (= (bvule (bvshl x c1) c0)
                (bvule x
                       (bvlshr c0 c1)))))
(check-sat)
(get-model)
(pop)

Patch by bryant!

Differential Revision: https://reviews.llvm.org/D25913

llvm-svn: 285729
2016-11-01 19:19:29 +00:00
Konstantin Zhuravlyov d971a1123f [AMDGPU] Check if type transforms to i16 (VI+) when getting AMDGPUISD::FFBH_U32
This will prevent the following regressions when enabling i16 support (D18049):

test/CodeGen/AMDGPU/ctlz.ll
test/CodeGen/AMDGPU/ctlz_zero_undef.ll

Differential Revision: https://reviews.llvm.org/D25802

llvm-svn: 285716
2016-11-01 17:49:33 +00:00
Sanjay Patel 700193ae05 [InstCombine] add vector tests for ext+adjust min/max
llvm-svn: 285713
2016-11-01 17:34:29 +00:00
Sanjay Patel 586aeee341 [InstCombine] move/fix tests for adjusted min/max
I think the former 'test50' had a typo making it functionally equivalent
to the former 'test49'; changed the predicate to provide more coverage.

llvm-svn: 285706
2016-11-01 16:39:30 +00:00
Tom Stellard 94c21bc088 AMDGPU: Implement expansion of f16 = FP_TO_FP16 f64
I wanted to implement this as a target-independent expansion; however, when
targets say they want to expand FP_TO_FP16, what they actually want is
the unsafe-math expansion when possible and expansion to a libcall in all
other cases.

The only way to make this work as a target-independent expansion would be to
add logic to each target's TargetLowering construction to mark these nodes as
Expand when LegalizeDAG can use the unsafe expansion and mark them as LibCall
when it cannot.  I think this would be possible, but I think it would be too
fragile and complex as it would require targets to keep their expansion logic
up to date with the code in LegalizeDAG.

Reviewers: bogner, ab, t.p.northover, arsenm

Subscribers: wdng, llvm-commits, nhaehnle

Differential Revision: https://reviews.llvm.org/D25999

llvm-svn: 285704
2016-11-01 16:31:48 +00:00
Sjoerd Meijer b187f5d988 This is a 1 character fix for an ARM build attribute test (r284571): the
purpose of the test was to have 2 different function attribute sets, but due
to a typo there was only one; both used number #0.

llvm-svn: 285701
2016-11-01 15:59:37 +00:00
Sanjay Patel 04ef115ff7 [InstCombine] fix tests for adjusted min/max
1. Delete identical tests
2. Rename tests to reflect actual functionality
3. Add comments
4. Add unsigned variants
5. Add vector variants with FIXME comments
6. Rename test file

llvm-svn: 285699
2016-11-01 15:48:30 +00:00
Simon Pilgrim 6dd8fab443 [InstCombine] Folding of shifts by the sum of positive values
This patch introduces the combine:

(C1 shift (A add C2)) -> ((C1 shift C2) shift A)
iff A and C2 are both positive

If both A and C2 are known to be positive then we can safely split into 2 shifts, permitting the folding of the inner shift.

Fix for the spec benchmark case mentioned by @nadav on PR15141 (assuming we can prove that the inputs are positive).
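
A small worked example in IR (constants hypothetical): with C1 = 16 and C2 = 3, 16 << (A + 3) == (16 << 3) << A == 128 << A when A and C2 are non-negative and the total shift amount stays in range:

  %sum  = add nuw nsw i32 %a, 3
  %shl  = shl i32 16, %sum
  ; becomes
  %shl2 = shl i32 128, %a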

Differential Revision: https://reviews.llvm.org/D26000

llvm-svn: 285696
2016-11-01 15:40:30 +00:00
Sanjay Patel 69587324e8 [InstCombine] auto-generate better checks
llvm-svn: 285693
2016-11-01 14:38:30 +00:00
Chris Dewhurst 30de1de144 [Sparc][LEON] Test for FixFDIVSQRT erratum fix.
Note: Test is per differential review, but the other changed code in the review was for an optimisation that didn't quite work. Nevertheless, the test is valid for the unoptimised version of the fix.

Differential Review: https://reviews.llvm.org/D24658

llvm-svn: 285692
2016-11-01 14:23:37 +00:00
James Molloy 70a3d6df52 [Thumb-1] Synthesize TBB/TBH instructions to make use of compressed jump tables
[Reapplying r284580 and r285917 with fix and testing to ensure emitted jump tables for Thumb-1 have 4-byte alignment]

The TBB and TBH instructions in Thumb-2 allow jump tables to be compressed into sequences of bytes or shorts respectively. These instructions do not exist in Thumb-1, however it is possible to synthesize them out of a sequence of other instructions.

It turns out this sequence is so short that it's almost never a lose for performance and is ALWAYS a significant win for code size.

TBB example:
Before: lsls r0, r0, #2    After: add  r0, pc
        adr  r1, .LJTI0_0         ldrb r0, [r0, #6]
        ldr  r0, [r0, r1]         lsls r0, r0, #1
        mov  pc, r0               add  pc, r0
  => No change in prologue code size or dynamic instruction count. Jump table shrunk by a factor of 4.

The only case that can increase dynamic instruction count is the TBH case:

Before: lsls r0, r4, #2    After: lsls r4, r4, #1
        adr  r1, .LJTI0_0         add  r4, pc
        ldr  r0, [r0, r1]         ldrh r4, [r4, #6]
        mov  pc, r0               lsls r4, r4, #1
                                  add  pc, r4
  => 1 more instruction in prologue. Jump table shrunk by a factor of 2.

So there is an argument that this should be disabled when optimizing for performance (and a TBH needs to be generated). I'm not so sure about that in practice, because on small cores with Thumb-1 performance is often tied to code size. But I'm willing to turn it off when optimizing for performance if people want (also note that TBHs are fairly rare in practice!)

llvm-svn: 285690
2016-11-01 13:37:41 +00:00
Valery Pykhtin 8a89d3662a [AMDGPU] Expand vector mulhu/mulhs
Differential revision: https://reviews.llvm.org/D26077

llvm-svn: 285684
2016-11-01 10:26:48 +00:00
Nemanja Ivanovic e70fa63390 [PowerPC] Implement vector shift builtins - llvm portion
This patch corresponds to review https://reviews.llvm.org/D26095.
Committing on behalf of Tony Jiang.

llvm-svn: 285681
2016-11-01 09:42:32 +00:00
Sanjay Patel 70c5f02d25 [DAG] disable nsw/nuw for add/sub/mul when simplifying based on demanded bits (PR30841)
This bug was exposed by using nsw/nuw for more aggressive folds in:
https://reviews.llvm.org/rL284844

The changes mimic the IR demanded bits logic in InstCombiner::SimplifyDemandedUseBits(),
but we can't just flip flag bits in the DAG; we have to create a new node that has the
bits cleared.

This should fix:
https://llvm.org/bugs/show_bug.cgi?id=30841 

llvm-svn: 285656
2016-10-31 23:28:45 +00:00
Saleem Abdulrasool e1aa782bd0 CodeGen: further loosen -O0 CG for WoA division
Generate the slowest possible codepath for noopt CodeGen.  Even trying to be
clever with the negated jump can cause out-of-range jumps.  Use a wide branch
instead. Although the code is modelled simplistically, the later optimizations
would recombine the branching into `cbz` if possible.  This re-enables the
previous optimization as well as hopefully gives us working code in all cases.

Addresses PR30356!

llvm-svn: 285649
2016-10-31 22:12:37 +00:00
Teresa Johnson 002af9bbce [ThinLTO] Disable importing and other cross-module optis at -O0
Summary:
There is no point to importing at -O0, since we won't inline. We should
also disable other cross-module optimizations.

(Plan to backport this fix to the 3.9 branch to fix PR30774)

Reviewers: pcc

Subscribers: johanengelen, mehdi_amini

Differential Revision: https://reviews.llvm.org/D25918

llvm-svn: 285648
2016-10-31 22:12:21 +00:00
Justin Lebar ed1e312f05 [NVPTX] Remove NVPTXFavorNonGenericAddrSpaces pass.
Summary:
This has been replaced by the NVPTXInferAddressSpaces pass.  We've had
the new one as the default with the old one accessible via a flag for
some months now, and we've had no problems.

Reviewers: tra

Subscribers: llvm-commits, jholewinski, jingyue, mgorny

Differential Revision: https://reviews.llvm.org/D26165

llvm-svn: 285642
2016-10-31 21:51:42 +00:00
Kevin Enderby d503940e8f More additional error checks for invalid Mach-O files when
the offsets and sizes of an element of the file overlap with
another element in the Mach-O file.

This shows the approach to this testing for three elements
and contains tests for their overlap.  Checking for all the
remaining elements will be added next.

llvm-svn: 285632
2016-10-31 20:29:48 +00:00
Nemanja Ivanovic 60bdfe5a7c [PPC] add absolute difference altivec instructions and matching intrinsics
This patch corresponds to review https://reviews.llvm.org/D26072.
Committing on behalf of Sean Fertile.

llvm-svn: 285627
2016-10-31 19:47:52 +00:00
Victor Leschuk e1156c2eb0 DebugInfo: make DW_TAG_atomic_type valid
DW_TAG_atomic_type was already included in Dwarf.defs and emitted correctly,
however Verifier didn't recognize it as valid.
Thus we introduce the following changes:

  * Make DW_TAG_atomic_type valid tag for IR and DWARF (enabled only with -gdwarf-5)
  * Add it to related docs
  * Add DebugInfo tests

Differential Revision: https://reviews.llvm.org/D26144

llvm-svn: 285624
2016-10-31 19:09:38 +00:00
Kuba Brecka a28c9e8f09 [asan] Move instrumented null-terminated strings to a special section, LLVM part
On Darwin, simple C null-terminated constant strings normally end up in the __TEXT,__cstring section of the resulting Mach-O binary. When instrumented with ASan, these strings are transformed in a way that they cannot be in __cstring (the linker unifies the content of this section and strips extra NUL bytes, which would break instrumentation), and are put into a generic __const section. This breaks some of the tools that we have: Some tools need to scan all C null-terminated strings in Mach-O binaries, and scanning all the contents of __const has a large performance penalty. This patch instead introduces a special section, __asan_cstring which will now hold the instrumented null-terminated strings.

Differential Revision: https://reviews.llvm.org/D25026

llvm-svn: 285619
2016-10-31 18:51:58 +00:00
Nirav Dave a9395af51d [MC] Make llvm-mc fail cleanly on invalid output asm variant.
Fixes PR28488.

Reviewers: rnk, majnemer

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D25834

llvm-svn: 285616
2016-10-31 18:36:31 +00:00
Tim Northover 037af52c8b GlobalISel: allow truncating pointer casts on AArch64.
llvm-svn: 285615
2016-10-31 18:31:09 +00:00