Commit Graph

96047 Commits

Benjamin Kramer 1e425c9f24 [InterleavedAccessPass] Remove global variable.
This is a threading hazard that tsan rightfully complains about. No
functionality change.

llvm-svn: 284515
2016-10-18 18:59:58 +00:00
Kostya Serebryany bb59ef77ca [libFuzzer] detect leaks after every run when executing fixed inputs (./fuzzer -runs=1000000 my-file)
llvm-svn: 284514
2016-10-18 18:38:08 +00:00
Sanjay Patel 19601fa587 revert r284495: [Target] remove TargetRecip class
There's something wrong with the StringRef usage while parsing the attribute string.

llvm-svn: 284513
2016-10-18 18:36:49 +00:00
Kostya Serebryany 8dfed45cd4 [libFuzzer] reshuffle the code for -exit_on_src_pos and -exit_on_item
llvm-svn: 284508
2016-10-18 18:06:05 +00:00
Vitaly Buka 8e1906ea7e [asan] Make -asan-experimental-poisoning the only behavior
Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D25735

llvm-svn: 284505
2016-10-18 18:04:59 +00:00
Kevin Enderby 6f69582e9b Next set of additional error checks for invalid Mach-O files for the
load commands that use the MachO::routines_command and
MachO::routines_command_64 types, which are not used in the llvm
libObject code but are used in llvm tool code.

This includes the LC_ROUTINES and LC_ROUTINES_64
load commands.

llvm-svn: 284504
2016-10-18 17:54:17 +00:00
Sanjoy Das 507dd40a4a [SCEV] Make CompareValueComplexity a little bit smarter
This helps canonicalization in some cases.

Thanks to Pankaj Chawla for the investigation and the test case!

llvm-svn: 284501
2016-10-18 17:45:16 +00:00
Sanjoy Das 9cd877a25a [SCEV] Extract out a helper function; NFC
llvm-svn: 284500
2016-10-18 17:45:13 +00:00
Sanjay Patel 08fff9ca81 [Target] remove TargetRecip class; move reciprocal estimate isel functionality to TargetLowering
This is a follow-up to D24816 - where we changed reciprocal estimates to be function attributes
rather than TargetOptions.

This patch is intended to be a structural, but not functional change. By moving all of the
TargetRecip functionality into TargetLowering, we can remove all of the reciprocal estimate
state, shield the callers from the string format implementation, and simplify/localize the
logic needed for a target to enable this.

If a function has a "reciprocal-estimates" attribute, those settings may override the target's
default reciprocal preferences for whatever operation and data type we're trying to optimize.
If there's no attribute string or specific setting for the op/type pair, just use the target
default settings.

As noted earlier, a better solution would be to move the reciprocal estimate settings to IR
instructions and SDNodes rather than function attributes, but that's a multi-step job that
requires infrastructure improvements. I intend to work on that, but it's not clear how long
it will take to get all the pieces in place.

Differential Revision: https://reviews.llvm.org/D25440

llvm-svn: 284495
2016-10-18 17:05:05 +00:00
Simon Pilgrim 25e9628978 [DAGCombiner] Add splatted vector support to (udiv x, (shl pow2, y)) -> x >>u (log2(pow2)+y)
llvm-svn: 284491
2016-10-18 16:36:00 +00:00
Simon Pilgrim ca3072ac58 [X86][AVX512] Add mask/maskz writemask support to constant pool shuffle decode comments
llvm-svn: 284488
2016-10-18 15:45:37 +00:00
Simon Dardis 858915f054 [mips][ias] Handle more complicated expressions for memory operands
This patch teaches ias for mips to handle expressions such as
(8*4)+(8*31)($sp). Such expressions typically occur from the expansion
of multiple macro definitions.

This partially resolves PR/30383.

Thanks to Sean Bruno for reporting the issue!

Reviewers: zoran.jovanovic, vkalintiris

Differential Revision: https://reviews.llvm.org/D24667

llvm-svn: 284485
2016-10-18 15:17:17 +00:00
Simon Dardis c4463c942c [mips] Fix sync instruction definition
The 'sync' instruction for MIPS was defined in MIPS-II as taking no operands.
MIPS32 extended the definition of 'sync' to take an optional 5-bit unsigned
immediate.

This patch corrects the definition of sync so that it is accepted with an
operand of 0 or no operand for MIPS-II to MIPS-V, and with a 5-bit unsigned
immediate for MIPS32 and later revisions.

Additionally, a clear error is given when the MIPS32 form of sync is
used when targeting a pre-MIPS32 ISA.

This partially resolves PR/30714.

Thanks to Daniel Sanders for reporting this issue!

Reviewers: vkalintiris

Differential Revision: https://reviews.llvm.org/D25672

llvm-svn: 284483
2016-10-18 14:42:13 +00:00
Victor Leschuk 197aa3192d DebugInfo: change alignment type from uint64_t to uint32_t to save space.
In further patches we shall add an alignment field to the DIVariable family,
and switching from uint64_t to uint32_t will save 4 bytes per variable.

Differential Revision: https://reviews.llvm.org/D25620

llvm-svn: 284482
2016-10-18 14:31:22 +00:00
Simon Dardis aff4d141b9 [mips] Macro expansion for ld, sd for O32
ld and sd when assembled for the O32 ABI expand to a pair of 32 bit word loads
or stores using the specified source or destination register and the next
register.

This patch does not add support for the cases where the offset is greater than
a 16 bit signed immediate as that would lead to a wrong/misleading error
message as the assembler would report "instruction requires a CPU feature
not currently enabled" for ld & sd for MIPS64 when their offset is not a signed
16 bit number.

This fixes PR/29159.

Thanks to Sean Bruno for reporting this issue!

Reviewers: vkalintiris, seanbruno, zoran.jovanovic

Differential Review: https://reviews.llvm.org/D24556

llvm-svn: 284481
2016-10-18 14:28:00 +00:00
Michael Zuckerman 1bee6340ef [x86][inline-asm][avx512] allow swapping of '{k<num>}' & '{z}' marks
Committing on behalf of Coby Tayree: After check-all and LGTM

Desc:

AVX512 allows the dest operand to be followed by an op-mask register specifier ('{k<num>}'), which in turn may be followed by a merging/zeroing specifier ('{z}').
 Currently, the following forms are allowed:
 {k<num>}
 {k<num>}{z}

This patch allows the following forms:
 {z}{k<num>}

and ignores the next form:
 {z}

The justification is quite simple: compatibility with GCC.

Differential Revision: http://reviews.llvm.org/D25013

llvm-svn: 284479
2016-10-18 13:52:39 +00:00
Simon Pilgrim 65e0c73875 Strip trailing whitespace (NFCI)
llvm-svn: 284478
2016-10-18 13:44:00 +00:00
Vasileios Kalintiris 3955b75ba9 [mips][FastISel] Instantiate the MipsFastISel class only for targets that support FastISel.
Summary:
Instead of instantiating the MipsFastISel class and checking if the
target is supported in the overriden methods, we should perform that
check before creating the class. This allows us to enable FastISel *only*
for targets that truly support it, ie. MIPS32 to MIPS32R5.

Reviewers: sdardis

Subscribers: ehostunreach, llvm-commits

Differential Revision: https://reviews.llvm.org/D24824

llvm-svn: 284475
2016-10-18 13:05:42 +00:00
John Brawn ecf79300dd [SCEV] More accurate calculation of max backedge count of some less-than loops
In loops that look something like
 i = n;
 do {
  ...
 } while(i++ < n+k);
where k is a constant, the maximum backedge count is k (in fact the backedge
count will be either 0 or k, depending on whether n+k wraps). More generally
for LHS < RHS if RHS-(LHS of first comparison) is a constant then the loop will
iterate either 0 or that constant number of times.
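
A concrete instance of this, as a plain C++ sketch with k assumed to be 3 (the backedge is taken either 0 or 3 times, depending on whether n + 3 wraps):

    unsigned tripCount(unsigned n) {
      // Backedge count is 3 unless n + 3 wraps, in which case it is 0.
      unsigned count = 0, i = n;
      do {
        ++count;
      } while (i++ < n + 3);
      return count; // 4 in the common case; 1 when n + 3 wraps
    }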

This allows for more loop unrolling with the recent upper bound loop unrolling
changes, and I'm working on a patch that will let loop unrolling additionally
make use of the loop being executed either 0 or k times (we need to retain the
loop comparison only on the first unrolled iteration).

Differential Revision: https://reviews.llvm.org/D25607

llvm-svn: 284465
2016-10-18 10:10:53 +00:00
Renato Golin 9ce5074d29 Revert "Resubmit "Add support for advanced number formatting.""
This reverts commits 284436 and 284437 because they still break AArch64 bots:

Value of: format_number(-10, IntegerStyle::Integer, 1)
  Actual: "-0"
  Expected: "-10"

llvm-svn: 284462
2016-10-18 09:30:18 +00:00
Javed Absar e7c338081a [ARM] Assign cost of scaling for Cortex-R52
This patch assigns cost of the scaling used in addressing for Cortex-R52.

On Cortex-R52 a negated register offset takes longer than a non-negated
register offset, in a register-offset addressing mode.

Differential Revision: http://reviews.llvm.org/D25670

Reviewer: jmolloy
llvm-svn: 284460
2016-10-18 09:08:54 +00:00
Simon Pilgrim 4ddc92b6cd [X86][SSE] Add lowering to cvttpd2dq/cvttps2dq for fptosi v2f64/2f32 to 2i32
As discussed on PR28461 we currently miss the chance to lower "fptosi <2 x double> %arg to <2 x i32>" to cvttpd2dq due to its use of illegal types.

This patch adds support for fptosi to 2i32 from both 2f64 and 2f32.

It also recognises that cvttpd2dq zeroes the upper 64-bits of the xmm result (similar to D23797) - we still don't do this for the cvttpd2dq/cvttps2dq intrinsics - this can be done in a future patch.
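
A source-level sketch of the pattern in question, assuming Clang's vector extensions and __builtin_convertvector; the conversion below is what produces the "fptosi <2 x double> to <2 x i32>" IR mentioned above:

    typedef double v2f64 __attribute__((vector_size(16)));
    typedef int    v2i32 __attribute__((vector_size(8)));

    // fptosi <2 x double> %v to <2 x i32>; may now lower to cvttpd2dq.
    v2i32 truncToInt(v2f64 v) { return __builtin_convertvector(v, v2i32); }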

Differential Revision: https://reviews.llvm.org/D23808

llvm-svn: 284459
2016-10-18 07:42:15 +00:00
Dean Michael Berris 156f6cafc2 [XRay] Support for tail calls for ARM no-Thumb
This patch adds simplified support for tail calls on ARM with XRay instrumentation.

Known issue: compiled with generic flags: `-O3 -g -fxray-instrument -Wall
-std=c++14  -ffunction-sections -fdata-sections` (this list doesn't include my
specific flags like --target=armv7-linux-gnueabihf etc.), the following program

    #include <cstdio>
    #include <cassert>
    #include <xray/xray_interface.h>

    [[clang::xray_always_instrument]] void __attribute__ ((noinline)) fC() {
      std::printf("In fC()\n");
    }

    [[clang::xray_always_instrument]] void __attribute__ ((noinline)) fB() {
      std::printf("In fB()\n");
      fC();
    }

    [[clang::xray_always_instrument]] void __attribute__ ((noinline)) fA() {
      std::printf("In fA()\n");
      fB();
    }

    // Avoid infinite recursion in case the logging function is instrumented (so calls logging
    //   function again).
    [[clang::xray_never_instrument]] void simplyPrint(int32_t functionId, XRayEntryType xret)
    {
      printf("XRay: functionId=%d type=%d.\n", int(functionId), int(xret));
    }

    int main(int argc, char* argv[]) {
      __xray_set_handler(simplyPrint);

      printf("Patching...\n");
      __xray_patch();
      fA();

      printf("Unpatching...\n");
      __xray_unpatch();
      fA();

      return 0;
    }

gives the following output:

    Patching...
    XRay: functionId=3 type=0.
    In fA()
    XRay: functionId=3 type=1.
    XRay: functionId=2 type=0.
    In fB()
    XRay: functionId=2 type=1.
    XRay: functionId=1 type=0.
    XRay: functionId=1 type=1.
    In fC()
    Unpatching...
    In fA()
    In fB()
    In fC()

So for function fC() the exit sled seems to be called too early, before the
function actually exits: before printing In fC().

Debugging shows that the above happens because printf from fC is also called as
a tail call. So first the exit sled of fC is executed, and only then printf is
jumped into. So it seems we can't do anything about this with the current
approach (i.e. within the simplification described in
https://reviews.llvm.org/D23988 ).

Differential Revision: https://reviews.llvm.org/D25030

llvm-svn: 284456
2016-10-18 05:54:15 +00:00
Justin Bogner b7c9deb587 Object: Add a missing return in ObjectFile::createObjectFile
When Error was threaded through these APIs back in r265606 the
"return" was missed here, which triggers a warning if/when I add
LLVM_NODISCARD to the Error type.

llvm-svn: 284454
2016-10-18 05:17:23 +00:00
Craig Topper 448358b5f1 [X86] Fix DecodeVPERMVMask to handle cases where the constant pool entry has a different type than the shuffle itself.
This is especially important for 32-bit targets with 64-bit shuffle elements.

llvm-svn: 284453
2016-10-18 04:48:33 +00:00
Craig Topper 7268bf99ab [AVX-512] Fix DecodeVPERMV3Mask to handle cases where the constant pool entry has a different type than the shuffle itself.
Summary: This is especially important for 32-bit targets with 64-bit shuffle elements. This is similar to how PSHUFB and VPERMIL handle the same problem.

Reviewers: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D25666

llvm-svn: 284451
2016-10-18 04:00:32 +00:00
Craig Topper 175a415e78 [AVX-512] Add support for decoding shuffle mask from constant pool for masked VPERMILPS/PD.
llvm-svn: 284450
2016-10-18 03:36:52 +00:00
Mandeep Singh Grang e82678a657 Fix differences in codegen between Linux and Windows toolchains
Summary:
    There are differences in codegen between Linux and Windows due to:
    1. Using std::sort, which uses quicksort, a non-stable sort.

    2. Iterating over a Set data structure, where the iteration order is
       non-deterministic.

Reviewers: arsenm, grosbach, junbuml, zinob, MatzeB

Subscribers: MatzeB, wdng

Differential Revision: https://reviews.llvm.org/D25695

llvm-svn: 284441
2016-10-18 00:11:19 +00:00
Zachary Turner 0d31d9c012 Rename HexStyle -> HexFormatStyle, and remove a constexpr.
This should fix the remaining broken builds.

llvm-svn: 284437
2016-10-17 23:08:47 +00:00
Zachary Turner 7cd0745c95 Resubmit "Add support for advanced number formatting."
This resubmits commits r284425 and r284428, which were reverted
in r284429 due to some infinite recursion caused by an incorrect
selection of function overloads.  Reproduced the failure on Linux
using GCC 4.8.4, and confirmed that with the new patch the tests
pass on GCC as well as MSVC.  So hopefully this fixes everything.

llvm-svn: 284436
2016-10-17 22:49:24 +00:00
Konstantin Zhuravlyov 98a3ac7106 [AMDGPU] Mark .note section SHF_ALLOC so lld creates a segment for it
Differential Revision: https://reviews.llvm.org/D25694

llvm-svn: 284435
2016-10-17 22:40:15 +00:00
Justin Lebar ee34a7343d [ADT] Move CachedHashString to its own header in ADT, and rename to CachedHashStringRef.
Summary:
Reclaiming the name 'CachedHashString' will let us add a type with that
name that owns its value.

Reviewers: timshen

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D25644

llvm-svn: 284434
2016-10-17 22:24:36 +00:00
Kevin Enderby 2490de06f7 Next set of additional error checks for invalid Mach-O files for the
load commands that use the MachO::sub_framework_command,
MachO::sub_umbrella_command, MachO::sub_library_command
and MachO::sub_client_command types, which are not used in the llvm
libObject code but are used in llvm tool code.

This includes the LC_SUB_FRAMEWORK, LC_SUB_UMBRELLA,
LC_SUB_LIBRARY and LC_SUB_CLIENT load commands.

llvm-svn: 284431
2016-10-17 22:09:25 +00:00
Zachary Turner 9d58e362d2 Revert formatting changes.
This reverts r284425 and r284428 as they are causing test crashes
on some systems.

llvm-svn: 284429
2016-10-17 21:25:41 +00:00
Zachary Turner 47e2c0a9cb Try to fix build after invalid pointer conversion.
llvm-svn: 284428
2016-10-17 21:14:27 +00:00
Zachary Turner 99eef2d736 [Support] Add support for "advanced" number formatting.
raw_ostream has not afforded a lot of flexibility in terms of
how to format numbers when outputting.  Wrap this all up into
a set of low level helper functions that can be used to output
numbers with arbitrary precision, alignment, format, etc and
then update raw_ostream to use these functions.

This will be useful for upcoming improvements to llvm's string
formatting libraries, but the helpers are also useful independently.

Differential Revision: https://reviews.llvm.org/D25497

llvm-svn: 284425
2016-10-17 20:57:45 +00:00
Sanjay Patel 523cd8290a [DAG] use isConstOrConstSplat in ComputeNumSignBits to optimize SRA
The scalar version of this pattern was noted in:
https://reviews.llvm.org/D25485

and fixed with:
https://reviews.llvm.org/rL284395

More refactoring of the constant/splat helpers is needed and will happen in follow-up patches.

Differential Revision: https://reviews.llvm.org/D25685

llvm-svn: 284424
2016-10-17 20:41:39 +00:00
Sanjay Patel a7cab58055 [DAG] make isConstOrConstSplat and isConstOrConstSplatFP more accessible; NFC
As noted in:
https://reviews.llvm.org/D25685

This is the next-to-smallest step needed to enable the ComputeNumSignBits fix in that patch. 
In a minor attempt to keep some structure, we're pulling the FP helper over along with its
integer sibling, but clearly we can and should do more refactoring of the similar helper
functions in DAGCombiner and SelectionDAG to simplify and not duplicate functionality.

llvm-svn: 284421
2016-10-17 20:26:46 +00:00
Davide Italiano 84bd58e915 [opt] Strip coverage if debug info is not present.
If -coverage is passed, but -g is not, clang populates the PassManager
pipeline with StripSymbols(debugOnly = true).
The StripSymbols pass therefore scans the list of named metadata,
drops !llvm.dbg.cu, but leaves !llvm.gcov and !0 (the compile unit MD)
around. The verifier then runs, finds a CU that is not listed in
!llvm.dbg.cu (as it was previously dropped), and crashes.
So, when we strip debug info, also check whether there is coverage data
and strip it as well, to avoid leaving dangling metadata around.

Differential Revision:  https://reviews.llvm.org/D25689

llvm-svn: 284418
2016-10-17 20:05:35 +00:00
Dehao Chen 018a3afa99 Ignore debug info when making optimization decisions in SimplifyCFG.
Summary: Debug info should *not* affect code generation. This patch properly handles debug info to make sure the generated code is the same with or without debug info.

Reviewers: davidxl, mzolotukhin, jmolloy

Subscribers: aprantl, llvm-commits

Differential Revision: https://reviews.llvm.org/D25286

llvm-svn: 284415
2016-10-17 19:28:44 +00:00
Michael LeMay 14153552fd Test commit.
llvm-svn: 284411
2016-10-17 19:09:19 +00:00
Walter Erquinigo b58d6a5655 Handle relocations to thumb functions when dynamic linking COFF modules
Summary:
This adds the necessary logic to support relocations to thumb functions in the COFF dynamic linker.
The jumps to function addresses are mostly blx, which requires the ISA selection bit when jumping to a thumb function.

Note: I'm determining if the relocation requires the ISA bit when creating the relocation entries and not when resolving the relocation. I have to do that because I need the ObjectFile and the actual Symbol, which are available only when creating the entries. It would require a gross refactor if I do it otherwise, but I'm okay with doing it if you think it's better.

Reviewers: peter.smith, compnerd

Subscribers: rengolin, sas

Differential Revision: https://reviews.llvm.org/D25151

llvm-svn: 284410
2016-10-17 18:56:18 +00:00
Tim Northover 020d104496 GlobalISel: support wider range of load/store sizes in AArch64.
llvm-svn: 284406
2016-10-17 18:36:53 +00:00
Tom Stellard 083f1626f5 AMDGPU/SI: LowerParameter() should be computing align based on memory type
Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, tony-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D25203

llvm-svn: 284398
2016-10-17 16:56:19 +00:00
Tom Stellard bc6c523cce AMDGPU/SI: Fix LowerParameter() for i16 arguments
Summary:
If we are loading an i16 value from a 32-bit memory location, then
we need to be able to truncate the loaded value to i16.

Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, tony-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D25198

llvm-svn: 284397
2016-10-17 16:21:45 +00:00
Sanjay Patel 2cf6bfaf73 [DAG] optimize away an arithmetic-right-shift of a 0 or -1 value
This came up as part of:
https://reviews.llvm.org/D25485

Note that the vector case is missed because ComputeNumSignBits() is deficient for vectors.

llvm-svn: 284395
2016-10-17 15:58:28 +00:00
Teresa Johnson c0ef9e4316 Rename interface for querying physical hardware concurrency
Based on post-commit review for D25585/r284180, rename
hardware_physical_concurrency to heavyweight_hardware_concurrency,
to better reflect what type of tasks it should be used for and
to enable other systems to map this to something other than the
number of physical cores.

llvm-svn: 284390
2016-10-17 14:56:53 +00:00
Benjamin Kramer 937dd7af2d [Support] remove_dots: Remove .. from absolute paths.
/../foo is still a proper path after removing the dotdot. This should
now finally match https://9p.io/sys/doc/lexnames.html [Cleaning names].
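
A usage sketch, assuming the llvm::sys::path::remove_dots interface in llvm/Support/Path.h: a leading ".." is dropped from an absolute path, while a leading "../" in a relative path is preserved (see r284129 further down in this log).

    #include "llvm/ADT/SmallString.h"
    #include "llvm/Support/Path.h"

    void demo() {
      llvm::SmallString<64> Abs("/../foo");
      llvm::sys::path::remove_dots(Abs, /*remove_dot_dot=*/true);
      // Abs is now "/foo": /../foo and /foo name the same file.

      llvm::SmallString<64> Rel("../foo");
      llvm::sys::path::remove_dots(Rel, /*remove_dot_dot=*/true);
      // Rel stays "../foo": the leading .. cannot be resolved.
    }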

llvm-svn: 284384
2016-10-17 13:28:21 +00:00
James Molloy aa79b19a3e [SDAG] Use ABI type alignment for constant pools when optimizing for size
SelectionDAG::getConstantPool will automatically determine an appropriate alignment if one is not specified. It does this by querying the type's preferred alignment. This can end up creating quite a lot of padding when the preferred alignment for vectors is 128.

In optimize-for-size mode, it makes sense to instead query the ABI type alignment which is often smaller and causes less padding.
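
A hedged sketch of the choice being described; the actual logic lives in SelectionDAG::getConstantPool, and the function and parameter names here are illustrative only:

    #include "llvm/IR/DataLayout.h"
    #include "llvm/IR/Type.h"

    // Pick a constant-pool alignment: ABI alignment when optimizing for size,
    // otherwise the (possibly larger) preferred alignment.
    unsigned chooseConstantPoolAlign(const llvm::DataLayout &DL, llvm::Type *Ty,
                                     bool OptForSize) {
      return OptForSize ? DL.getABITypeAlignment(Ty)
                        : DL.getPrefTypeAlignment(Ty);
    }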

llvm-svn: 284381
2016-10-17 12:54:07 +00:00
Oliver Stannard fe4432b105 [SimplifyCFG] Don't lower complex ConstantExprs to lookup tables
Not all ConstantExprs can be represented by a global variable, for example most
pointer arithmetic other than addition of a constant, so we can't convert these
values from switch statements to lookup tables.

Differential Revision: https://reviews.llvm.org/D25550

llvm-svn: 284379
2016-10-17 12:00:24 +00:00
Tobias Grosser 2bbec0ee7f [SCEV] Consider delinearization pattern with extension with identity factor
Summary: The delinearization algorithm did not consider terms which had an extension without a multiply factor, i.e. an identity factor. We were losing cases where the size is of char type, where there will be no multiply factor.

Reviewers: sanjoy, grosser

Subscribers: mzolotukhin, Eugene.Zelenko, llvm-commits, mssimpso, sanjoy, grosser

Differential Revision: https://reviews.llvm.org/D16492

llvm-svn: 284378
2016-10-17 11:56:26 +00:00
Andrea Di Biagio fa90c692db [CodeGenPrepare] When moving a zext near to its associated load, do not retain the original debug location.
CodeGenPrepare knows how to move a zext of a load into the same basic block
where the load lives. The goal is to help ISel match a zero-extending load
instead of two separated instructions.

CGP attempts to move a zext computation even if it lives in a basic block that
does not post-dominate the load's basic block. That means, the hoisted zext may
be speculated. Preserving the zext location would hurt the debugging experience
and the quality of sample pgo.
With this patch, when moving a zext near to its associated load, CGP no longer
propagates the zext's debug location. Instead, CGP conservatively reuses the
same debug location for the load and the zext.
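
A minimal sketch of the new behavior (illustrative helper; the real change is inside CodeGenPrepare):

    #include "llvm/IR/Instructions.h"

    // After hoisting the zext next to its load, conservatively reuse the
    // load's debug location rather than keeping the zext's original one.
    void reuseLoadLocation(llvm::ZExtInst *ZExt, llvm::LoadInst *Load) {
      ZExt->setDebugLoc(Load->getDebugLoc());
    }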

An alternative approach would be to assign an artificial line-0 location to the
zext. However we don't want to over-use the 'line-0' for this particular case
because it would have a size cost in the line-table section for no additional
benefit.

Differential Revision: https://reviews.llvm.org/D25611

llvm-svn: 284377
2016-10-17 11:32:26 +00:00
Craig Topper 1f5178ff9f [X86] Fix shuffle decoding assertions to print the right number of required operands. Update the checks themselves to be >= to the same number instead of > one less than the required number.
llvm-svn: 284365
2016-10-17 06:41:18 +00:00
Craig Topper 5b24cd31f5 [AVX-512] Add shuffle combining support for vpermi2var shuffles derived from existing support for vpermt2var.
llvm-svn: 284357
2016-10-17 04:26:47 +00:00
Craig Topper 715ad7fef5 [AVX-512] Add support for turning a 256-bit load that goes to both halves of an insert_subvector into a subvector broadcast.
Differential Revision: https://reviews.llvm.org/D25650

llvm-svn: 284353
2016-10-16 23:29:51 +00:00
Justin Bogner 1674261886 Support: Return void from Scanner::scan_ns_uri_char, no one uses the result
Simplify this a little bit since the result is never used. It can be
added back easily enough if that changes.

llvm-svn: 284348
2016-10-16 22:01:22 +00:00
Richard Smith e3a9a676f9 PR30711: Fix incorrect profiling of 'long long' in FoldingSet, then use it to
fix TBAA violation in profiling of pointers.

llvm-svn: 284336
2016-10-16 17:49:09 +00:00
Craig Topper aa1370ac57 [AVX-512] Fix the operand order for vpermi2var_qi intrinsics to match the other vpermi2var intrinsics.
llvm-svn: 284329
2016-10-16 04:54:35 +00:00
Craig Topper 4729fe8bb6 [AVX-512] Correct execution domain for VPERMT2PS and VPERMI2PS.
llvm-svn: 284328
2016-10-16 04:54:31 +00:00
Craig Topper f18b9201f5 [AVX-512] Move (v4i64 (X86SubVBroadcast (v2i64))) alternate patterns under a HasVLX predicate. Similar for floating point.
llvm-svn: 284327
2016-10-16 04:54:26 +00:00
Davide Italiano e9cdb24f67 [ArmFastISel] Kill dead code. NFCI.
llvm-svn: 284320
2016-10-16 01:09:39 +00:00
Konstantin Zhuravlyov 8ea0246e93 [MachineMemOperand] Move synchronization scope and atomic orderings from SDNode to MachineMemOperand, and remove redundant getAtomic* member functions from SelectionDAG.
Differential Revision: https://reviews.llvm.org/D24577

llvm-svn: 284312
2016-10-15 22:01:18 +00:00
Davide Italiano 590ad7037e [GVN/PRE] Hoist global values outside of loops.
In theory this could be generalized to move anything where
we prove the operands are available, but that would require
rewriting PRE. As NewGVN will hopefully come soon, and we're
trying to rewrite PRE in terms of NewGVN+MemorySSA, it's probably
not worth spending too much time on it. Fix provided by
Daniel Berlin!

llvm-svn: 284311
2016-10-15 21:35:23 +00:00
Li Huang 755f75f765 Test commit. (NFC)
llvm-svn: 284307
2016-10-15 19:00:04 +00:00
Craig Topper dde865afb5 [AVX-512] Add shuffle comments for vbroadcast instructions.
llvm-svn: 284305
2016-10-15 16:26:07 +00:00
Craig Topper 51e052f741 [AVX-512] Rename VPBROADCASTI32X2 and VPBROADCASTF32X2 instruction classes to match the mnemonic which does not include a 'P'.
llvm-svn: 284304
2016-10-15 16:26:02 +00:00
Benjamin Kramer d8b079708d [SimplifyCFG] Use the error checking provided by getPrevNode.
BasicBlock::size is O(insts), making this loop O(blocks*insts), which
can be really slow on generated code. getPrevNode already checks if
we're at the beginning of the block and returns nullptr if so, just use
that instead. No functionality change intended.
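
A sketch of the pattern (illustrative helper): getPrevNode returns nullptr at the start of a block, so no O(insts) size or position check is needed.

    #include "llvm/IR/Instruction.h"

    // True when I is the first instruction in its basic block.
    bool isFirstInBlock(const llvm::Instruction *I) {
      return I->getPrevNode() == nullptr;
    }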

llvm-svn: 284303
2016-10-15 13:15:05 +00:00
Kostya Serebryany 9a4b10a56f [libFuzzer] swap bytes in integers when handling CMP traces
llvm-svn: 284301
2016-10-15 04:00:07 +00:00
Kostya Serebryany f9b8e8b117 [libFuzzer] better algorithm for -minimize_crash
llvm-svn: 284299
2016-10-15 01:00:24 +00:00
Tom Stellard 961811c906 AMDGPU/SI: Handle s_getreg hazard in GCNHazardRecognizer
Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D25526

llvm-svn: 284298
2016-10-15 00:58:14 +00:00
Evgeny Astigeevich 48fd87e4aa [NFC] Loop Versioning for LICM code clean up
- Removed unused class members.
- Made class internal data private.
- Made class scoped data function scoped where it's possible.
- Replace naked new/delete with unique_ptr.
- Made resources guaranteed to be freed.

Differential Revision: https://reviews.llvm.org/D25464

llvm-svn: 284290
2016-10-14 23:00:36 +00:00
Tim Northover 69fa84a6e9 GlobalISel: rename legalizer components to match others.
The previous names were both misleading (the MachineLegalizer actually
contained the info tables) and inconsistent with the selector & translator (in
having a "Machine") prefix. This should make everything sensible again.

The only functional change is the name of a couple of command-line options.

llvm-svn: 284287
2016-10-14 22:18:18 +00:00
Mehdi Amini f8fd20402a hardware_physical_concurrency() should return 1 when LLVM is built with LLVM_ENABLE_THREADS=OFF
llvm-svn: 284283
2016-10-14 21:32:35 +00:00
Guozhi Wei 0cd65429be [PPC] Shorter sequence to load 64bit constant with same hi/lo words
This is a patch to implement pr30640.

When a 64-bit constant has the same hi/lo words, we can use rldimi to copy the low word into the high word of the same register.

This optimization caused a failure in test case bperm.ll because of a non-optimal heuristic in function SelectAndParts64. That function chooses AND or ROTATE to extract bit groups from a register, and ORs them together. This optimization lowers the cost of loading the 64-bit constant mask used in the AND method, and so produces a different code sequence. But the ROTATE method is actually better in this test case: with ROTATE the final OR operation can be avoided, since rldimi can insert the rotated bits into the target register directly. So this patch also enhances SelectAndParts64 to prefer the ROTATE method when the two methods have the same cost and there are multiple bit groups that need to be ORed together.
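
A sketch of the precondition in plain C++ (illustrative helper): the rldimi trick applies when the two 32-bit halves of the constant are identical.

    #include <cstdint>

    bool hasSameHiLoWords(uint64_t Imm) {
      return (Imm >> 32) == (Imm & 0xFFFFFFFFu);
    }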

Differential Revision: https://reviews.llvm.org/D25521

llvm-svn: 284276
2016-10-14 20:41:50 +00:00
Kostya Serebryany e450e40741 [libFuzzer] remove subdir fuzzer-test-suite as it is now superseded with https://github.com/google/fuzzer-test-suite
llvm-svn: 284275
2016-10-14 20:26:40 +00:00
Kostya Serebryany a5f94fb6c9 [libFuzzer] add -trace_cmp=1 (guiding mutations based on the observed CMP instructions). This is a reincarnation of the previously deleted -use_traces, but using a different approach for collecting traces. Still a toy, but at least it scales well. Also fix -merge in trace-pc-guard mode
llvm-svn: 284273
2016-10-14 20:20:33 +00:00
Sanjay Patel 72b5ff646d [DAG] avoid creating illegal node when transforming negated shifted sign bit
Eli noted this potential bug in the post-commit thread for:
https://reviews.llvm.org/rL284239
...but I'm not sure how to trigger it, so there's no test case yet.

llvm-svn: 284268
2016-10-14 19:46:31 +00:00
Tom Stellard 09c2bd6bd4 AMDGPU/SI: Use new SimplifyDemandedBits helper for multi-use operations
Summary:
We are using this helper for our 24-bit arithmetic combines, so we are now able to eliminate multi-use operations that mask the high-bits of 24-bit inputs (e.g. and x, 0xffffff)

Reviewers: arsenm, nhaehnle

Subscribers: tony-tye, arsenm, kzhuravl, wdng, nhaehnle, llvm-commits, yaxunl

Differential Revision: https://reviews.llvm.org/D24672

llvm-svn: 284267
2016-10-14 19:14:29 +00:00
Tom Stellard ab61007914 TargetLowering: Add SimplifyDemandedBits() helper to TargetLoweringOpt
Summary:
The main purpose of this new helper is to enable simplifying operations that
have multiple uses.  SimplifyDemandedBits does not handle multiple uses
currently, and this new function makes it possible to optimize:

and v1, v0, 0xffffff
mul24 v2, v1, v1      ; Multiply ignoring high 8-bits.

To:

mul24 v2, v0, v0

Where before this would not be optimized, because v1 has multiple uses.

Reviewers: bogner, arsenm

Subscribers: nhaehnle, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D24964

llvm-svn: 284266
2016-10-14 19:14:26 +00:00
Krzysztof Parzyszek 775a20913d The real fix for post-r284255 failures
llvm-svn: 284264
2016-10-14 19:06:25 +00:00
Krzysztof Parzyszek a22daa0fa6 Workaround to eliminate check-llvm failures after r284255
llvm-svn: 284262
2016-10-14 18:36:42 +00:00
David L Kreitzer 01a057a0c4 Add a pass to optimize patterns of vectorized interleaved memory accesses for
X86. The pass optimizes as a unit the entire wide load + shuffles pattern
produced by interleaved vectorization. This initial patch optimizes one pattern
(64-bit elements interleaved by a factor of 4). Future patches will generalize
to additional patterns.

Patch by Farhana Aleen

Differential revision: http://reviews.llvm.org/D24681

llvm-svn: 284260
2016-10-14 18:20:41 +00:00
Tom Stellard 64a9d0876c AMDGPU/SI: Don't allow unaligned scratch access
Summary: The hardware doesn't support this.

Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D25523

llvm-svn: 284257
2016-10-14 18:10:39 +00:00
Krzysztof Parzyszek 445bd12621 [RDF] Switch RegisterRef to be a pair (Register, LaneMask)
Use PackedRegisterRef to store the register information in the graph nodes.

This commit also removes support for virtual registers. It has never been
tested or used. It will be possible to add it back if there is a need.

llvm-svn: 284255
2016-10-14 17:57:55 +00:00
David L Kreitzer d5c6755d83 [safestack] Use non-thread-local unsafe stack pointer for Contiki OS
Patch by Michael LeMay

Differential revision: http://reviews.llvm.org/D19852

llvm-svn: 284254
2016-10-14 17:56:00 +00:00
Eric Christopher c39f8b0a3a Revert "In preparation for removing getNameWithPrefix off of
TargetMachine," as it's causing sanitizer/memory issues until I
can track down this set.

This reverts commit r284203

llvm-svn: 284252
2016-10-14 17:28:23 +00:00
Vedant Kumar 743574b8e3 [Coverage] Support loading multiple binaries into a CoverageMapping
Add support for loading multiple coverage readers into a single
CoverageMapping instance. This should make it easier to prepare a
unified coverage report for multiple binaries.

Differential Revision: https://reviews.llvm.org/D25535

llvm-svn: 284251
2016-10-14 17:16:53 +00:00
Rafael Espindola 4a2ef91143 Move alignTo computation inside the if.
This is an improvement when compiling with llvm. llvm doesn't inline
the call to insert, so the alignment computation is always executed and
shows up in the profile.

With gcc the call to insert is inlined and the alignment computation is
moved and done only if needed.

With this patch we explicitly only compute it if it is needed.
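
An illustrative sketch of the pattern with hypothetical names (the real call site differs): the aligned value is now computed only on the branch that needs it.

    #include "llvm/Support/MathExtras.h"
    #include <cstdint>
    #include <set>

    // Hypothetical sketch: only pay for alignTo when the insert succeeds.
    uint64_t addIfNew(std::set<uint64_t> &Seen, uint64_t Key, uint64_t Offset,
                      uint64_t Size, uint64_t Align) {
      if (Seen.insert(Key).second)
        Offset += llvm::alignTo(Size, Align);
      return Offset;
    }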

In the two tests with debug info, the speedup was

scylla
  master 3.008959365
  patch  2.932080942 1.02621974786x faster

firefox
  master 6.709823604
  patch  6.592387227 1.01781393795x faster

In all others the difference was in the noise.

llvm-svn: 284249
2016-10-14 17:01:39 +00:00
Pierre Gousseau b6d652adb5 [X86] Take advantage of the lzcnt instruction on btver2 architectures when ORing comparisons to zero.
This change adds transformations such as:
  zext(or(setcc(eq, (cmp x, 0)), setcc(eq, (cmp y, 0))))
  To:
  srl(or(ctlz(x), ctlz(y)), log2(bitsize(x)))
This optimisation is beneficial on Jaguar architecture only, where lzcnt has a good reciprocal throughput.
Other architectures such as Intel's Haswell/Broadwell or AMD's Bulldozer/PileDriver do not benefit from it.
For this reason the change also adds a "HasFastLZCNT" feature which gets enabled for Jaguar.
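
A source-level sketch of the kind of code this targets (plain C++; whether lzcnt is actually used depends on the HasFastLZCNT subtarget feature described above):

    // OR of two compare-with-zero results; on Jaguar this can become
    // srl(or(lzcnt(x), lzcnt(y)), log2(bitsize(x))).
    bool eitherIsZero(unsigned x, unsigned y) {
      return (x == 0) | (y == 0);
    }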

Differential Revision: https://reviews.llvm.org/D23446

llvm-svn: 284248
2016-10-14 16:41:38 +00:00
Sanjay Patel 6d6eca5cdc [InstCombine] use m_APInt to allow sub with constant folds for splat vectors
llvm-svn: 284247
2016-10-14 16:31:54 +00:00
Sanjay Patel c6c5965a42 [InstCombine] sub X, sext(bool Y) -> add X, zext(bool Y)
Prefer add/zext because they are better supported in terms of value-tracking.

Note that the backend should be prepared for this IR canonicalization 
(including vector types) after:
https://reviews.llvm.org/rL284015
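
A small equivalence sketch in plain C++ (sext of a bool is 0 or -1, zext is 0 or 1):

    // X - sext(b)  ==  X + zext(b)  for a boolean b.
    int subSextForm(int X, bool b) { return X - (b ? -1 : 0); } // original
    int addZextForm(int X, bool b) { return X + (b ? 1 : 0); }  // canonical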

Differential Revision: https://reviews.llvm.org/D25135

llvm-svn: 284241
2016-10-14 15:24:31 +00:00
David L Kreitzer 3155abfb57 Define "contiki" OS specifier.
Patch by Michael LeMay

Differential revision: http://reviews.llvm.org/D24897

llvm-svn: 284240
2016-10-14 14:41:46 +00:00
Sanjay Patel 00fc7a6159 [DAG] add folds for negated shifted sign bit
The same folds exist in InstCombine already.

This came up as part of:
https://reviews.llvm.org/D25485

llvm-svn: 284239
2016-10-14 14:26:47 +00:00
Nicolai Haehnle 67624af0cc AMDGPU: Select 64-bit {ADD,SUB}{C,E} nodes
Summary:
This will be used for 64-bit MULHU, which is in turn used for the 64-bit
divide-by-constant optimization (see D24822).

Reviewers: arsenm, tstellarAMD

Subscribers: kzhuravl, wdng, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D25289

llvm-svn: 284224
2016-10-14 10:30:00 +00:00
Nicolai Haehnle 86e72d98dd Fix use-after-frees
Extracted from D25313, as suggested by Justin Bogner.

llvm-svn: 284220
2016-10-14 09:49:51 +00:00
Simon Dardis b3fd189cb5 [mips] Fix aui/daui/dahi/dati for MIPSR6
For compatiblity with binutils, define these instructions to take
two registers with a 16bit unsigned immediate. Both of the registers
have to be same for dahi and dati.

Reviewers: dsanders, zoran.jovanovic

Differential Review: https://reviews.llvm.org/D21473

llvm-svn: 284218
2016-10-14 09:31:42 +00:00
Nicolai Haehnle bd15c3267f AMDGPU: Fix use-after-frees
Reviewers: arsenm, tstellarAMD

Subscribers: kzhuravl, wdng, yaxunl, tony-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D25312

llvm-svn: 284215
2016-10-14 09:03:04 +00:00
Michael Zuckerman 174d2e784b [x86][ms-inline-asm] use of "jmp short" in asm is not supported
Committing in the name of Ziv Izhar: After check-all and LGTM.

The following patch is for compatibility with Microsoft.
Microsoft ignores the keyword "short" when used after a jmp, for example:
__asm {
      jmp short label
      label:
      }

A test for that patch will be added in another patch, since it's located in clang's codegen tests. Link will be added shortly.
link to test: https://reviews.llvm.org/D24958

Differential Revision: https://reviews.llvm.org/D24957

llvm-svn: 284211
2016-10-14 08:09:40 +00:00
Craig Topper 40feb7f157 [DAGCombiner] Teach createBuildVecShuffle to handle cases where input vectors are less than half of the output vector size.
This will be needed by a future commit to support sign/zero extending from v8i8 to v8i64, which requires a sign/zero_extend_vector_inreg to be created, which in turn requires v8i8 to be concatenated up to v64i8 and to go through this code.

llvm-svn: 284204
2016-10-14 06:00:42 +00:00
Eric Christopher 2bd52b5d91 In preparation for removing getNameWithPrefix off of TargetMachine,
sink the current behavior into the callers and sink
TargetMachine::getNameWithPrefix into TargetMachine::getSymbol.

llvm-svn: 284203
2016-10-14 05:47:41 +00:00
Eric Christopher 445c952bd0 Tidy the calls to getCurrentSection().first -> getCurrentSectionOnly to help
readability a bit.

llvm-svn: 284202
2016-10-14 05:47:37 +00:00
Konstantin Zhuravlyov c96b5d7073 [AMDGPU] Emit 32-bit lo/hi got and pc relative variant kinds for external and global address space variables
Differential Revision: https://reviews.llvm.org/D25562

llvm-svn: 284196
2016-10-14 04:37:34 +00:00
Konstantin Zhuravlyov 2a2ac37c2c [AMDGPU] Add 32-bit lo/hi got and pc relative variant kinds and emit appropriate relocations
Differential Revision: https://reviews.llvm.org/D25548

llvm-svn: 284195
2016-10-14 04:21:32 +00:00
Matthias Braun ac28703111 Timer: Fix doxygen comments, use member initializer; NFC
llvm-svn: 284181
2016-10-14 00:17:19 +00:00
Teresa Johnson 2bd812c5dc Add interface for querying physical hardware concurrency
Summary:
This will be used by ThinLTO to set the amount of backend
parallelism, which performs better when restricted to the number
of physical cores (on X86 at least, where getHostNumPhysicalCores is
currently defined). If not available this falls back to
thread::hardware_concurrency.

Note I didn't add to the thread class since that is a typedef to
std::thread where available.

Reviewers: mehdi_amini

Subscribers: beanz, llvm-commits, mgorny

Differential Revision: https://reviews.llvm.org/D25585

llvm-svn: 284180
2016-10-14 00:13:59 +00:00
Saleem Abdulrasool 7705c4f1be CodeGen: use MSVC division on windows itanium
Windows itanium is identical to MSVC when dealing with everything but C++.
Lower the math routines into msvcrt rather than compiler-rt.

llvm-svn: 284175
2016-10-13 23:00:11 +00:00
Saleem Abdulrasool 06383dd272 CodeGen: adjust floating point operations in Windows itanium
Windows itanium is equivalent to MSVC except in C++ mode.  Ensure that we
promote the 32-bit floating point operations to their 64-bit equivalents.

llvm-svn: 284173
2016-10-13 22:38:15 +00:00
Sanjay Patel 98d0ea64ca [DAG] hoist DL(N) and fix formatting; NFC
llvm-svn: 284170
2016-10-13 22:27:10 +00:00
Kostya Serebryany 0381374f96 [libFuzzer] more detailed message for disabled leak detection
llvm-svn: 284169
2016-10-13 22:24:10 +00:00
Tom Stellard f80c1875a3 LegalizeDAG: Implement PROMOTE for ISD::BITREVERSE
Summary:
This operation is promoted the same way as ISD::BSWAP.  This will
prevent a regression in test/Target/AMDGPU/bitreverse.ll when i16
support is implemented.

Reviewers: bogner, hfinkel

Subscribers: hfinkel, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D25202

llvm-svn: 284163
2016-10-13 21:03:49 +00:00
David L Kreitzer 9015316df4 [safestack] Reapply r283248 after moving X86-targeted SafeStack tests into
the X86 subdirectory. Original commit message:

Requires a valid TargetMachine to be passed to the SafeStack pass.

Patch by Michael LeMay

Differential revision: http://reviews.llvm.org/D24896

llvm-svn: 284161
2016-10-13 20:57:51 +00:00
Sriraman Tallam f29fa586e1 New llc option pie-copy-relocations to optimize access to extern globals.
This option indicates copy relocations support is available from the linker
when building as PIE and allows accesses to extern globals to avoid the GOT.

Differential Revision: https://reviews.llvm.org/D24849

llvm-svn: 284160
2016-10-13 20:54:39 +00:00
Nirav Dave a81682aad4 Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."
This reverts commit r284151, which appears to be triggering LTO
failures on Hexagon.

llvm-svn: 284157
2016-10-13 20:23:25 +00:00
Quentin Colombet 52ffa6711b [RAGreedy] Empty live-ranges always succeed in last chance recoloring.
Relax the constraint for empty live-ranges while doing last chance
recoloring. Indeed, those live-ranges do not need an actual color to be
found for the recoloring to work.
Empty live-ranges may happen as a result of splitting/spilling.

Unfortunately no test case for in-tree targets.

llvm-svn: 284152
2016-10-13 19:27:48 +00:00
Nirav Dave 4b36957243 In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
Retrying after upstream changes.

   Simplify Consecutive Merge Store Candidate Search

   Now that address aliasing is much less conservative, push through
   a simplified store-merging search which only checks for parallel stores
   through the chain subgraph. This is cleaner, as it separates the
   handling of non-interfering loads/stores from the store-merging logic.

   When merging stores, we search up the chain through a single load, and
   find all possible stores by looking down through a load and a
   TokenFactor to all stores visited. This improves the quality of the
   output SelectionDAG and generally the output CodeGen (with some
   exceptions).

   Additional Minor Changes:

       1. Finishes removing unused AliasLoad code
       2. Unifies the chain aggregation in the merged stores across
       code paths
       3. Re-add the Store node to the worklist after calling
       SimplifyDemandedBits.
       4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
       arbitrary, but seemed sufficient to not cause regressions in
       tests.

   This finishes the change Matt Arsenault started in r246307 and
   jyknight's original patch.

   Many tests required some changes as memory operations are now
   reorderable. Some tests relying on the order were changed to use
   volatile memory operations.

   Noteworthy tests:

    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.

    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -

      The backend now generates *worse* code due to store merging
      succeeding, as we do not do a 16-byte constant-zero store efficiently.

    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?

    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.

    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores

    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls

    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and
      merges two stores

    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.

    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But, looks like the code got better due to the memory
      operations being recognized as non-aliasing.

    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.

    CodeGen/AMDGPU/vgpr-spill-emergency-stack-slot-compute.ll -
      This test appears to work but no longer exhibits the spill behavior.

Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle

Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel

Differential Revision: https://reviews.llvm.org/D14834

llvm-svn: 284151
2016-10-13 19:20:16 +00:00
Kostya Serebryany a17d23eaa7 [libFuzzer] add -trace_malloc= flag
llvm-svn: 284149
2016-10-13 19:06:46 +00:00
Quentin Colombet b3f5a8c644 [AArch64][RegisterBankInfo] Switch to fully static opds mapping for G_BITCAST.
NFC.

llvm-svn: 284146
2016-10-13 18:46:38 +00:00
Teresa Johnson 7943fecee8 Add interface to compute number of physical cores on host system
Summary:
For now I have only added support for x86_64 Linux, but other systems
can be added incrementally.

This is to be used for setting the default parallelism for ThinLTO
backends (instead of thread::hardware_concurrency which includes
hyperthreading and is too aggressive). I'll send this as a follow-on
patch, and it will fall back to hardware_concurrency when the new
getHostNumPhysicalCores returns -1 (when not supported for a given
host system).
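
A hedged usage sketch of the fallback described above, assuming the new interface is llvm::sys::getHostNumPhysicalCores in llvm/Support/Host.h:

    #include "llvm/Support/Host.h"
    #include <thread>

    unsigned backendParallelism() {
      int PhysCores = llvm::sys::getHostNumPhysicalCores(); // -1 if unsupported
      return PhysCores > 0 ? unsigned(PhysCores)
                           : std::thread::hardware_concurrency();
    }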

I also added an interface to MemoryBuffer to force reading a file
as a stream - this is required for /proc/cpuinfo which is a special
file that looks like a normal file but appears to have 0 size.
The existing readers of this file in Host.cpp are reading the first
1024 or so bytes from it, because the necessary info is near the top.
But for the new functionality we need to be able to read the entire
file. I can go back and change the other readers to use the new
getFileAsStream as a follow-on patch since it seems much more robust.

Added a unittest.

Reviewers: mehdi_amini

Subscribers: beanz, mgorny, llvm-commits, modocache

Differential Revision: https://reviews.llvm.org/D25564

llvm-svn: 284138
2016-10-13 17:43:20 +00:00
Reid Kleckner edfc9dcf42 Truncate long names in type records
In the MS ABI, the frontend is supposed to MD5 such pathologically long
names. LLVM should still defend itself from long names, though.

Fixes part of PR29098.

llvm-svn: 284136
2016-10-13 17:33:22 +00:00
Igor Breger 8409c356ad [X86][AVX512] Fix sext v32i1 -> v32i8 lowering.
Fix PR30600.

Differential Revision: https://reviews.llvm.org/D25554

llvm-svn: 284134
2016-10-13 17:20:38 +00:00
Kostya Serebryany 17d176e16b [libFuzzer] reapply r283946: refactoring to speed things up, NFC. Now with a fix for gcc build
llvm-svn: 284132
2016-10-13 16:19:09 +00:00
Reid Kleckner 468e793fea Fix for PR30687. Avoid dereferencing MBB.end().
We don't need to return a MachineInstr* from these stack probe insertion
calls anyway. If we ever need to add it back, we can return an iterator
instead.

Based on a patch by David Kreitzer

This bug is a consequence of

r279314 | dexonsmith | 2016-08-19 13:40:12 -0700 (Fri, 19 Aug 2016) | 110 lines

We hit the "Assertion `!NodePtr->isKnownSentinel()' failed" assertion,
but only when inserting a stack probe call at the end of an MBB, which
isn't necessarily a common situation.

Differential Revision: https://reviews.llvm.org/D25566

llvm-svn: 284130
2016-10-13 15:48:48 +00:00
Eric Liu af5a28fe87 Do not delete leading ../ in remove_dots.
Reviewers: bkramer

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D25561

llvm-svn: 284129
2016-10-13 15:07:14 +00:00
Javed Absar 85874a9360 [ARM]: Assign cost of scaling used in addressing mode for ARM cores
This patch assigns cost of the scaling used in addressing.
On many ARM cores, a negated register offset takes longer than a
non-negated register offset, in a register-offset addressing mode.

For instance:

(1) LDR R0, [R1, R2 LSL #2]
(2) LDR R0, [R1, -R2 LSL #2]

Above, (1) takes fewer cycles than (2).

By assigning appropriate scaling factor cost, we enable the LLVM
to make the right trade-offs in the optimization and code-selection phase.

Differential Revision: http://reviews.llvm.org/D24857

Reviewers: jmolloy, rengolin
llvm-svn: 284127
2016-10-13 14:57:43 +00:00
Matthew Simpson 1d4b163fc0 [LV] Account for predicated stores in instruction costs
This patch ensures that we scale the estimated cost of predicated stores by
block probability. This is a follow-on patch for r284123.

llvm-svn: 284126
2016-10-13 14:54:31 +00:00
Matthew Simpson 6cdb5a6f96 [LV] Avoid rounding errors for predicated instruction costs
This patch modifies the cost calculation of predicated instructions (div and
rem) to avoid the accumulation of rounding errors due to multiple truncating
integer divisions. The calculation for predicated stores will be addressed in a
follow-on patch since we currently don't scale the cost of predicated stores by
block probability.

Differential Revision: https://reviews.llvm.org/D25333

llvm-svn: 284123
2016-10-13 14:19:48 +00:00
Simon Pilgrim cb59b5257c [DAGCombiner] Add vector support to (mul (shl X, Y), Z) -> (shl (mul X, Z), Y) style combines
llvm-svn: 284122
2016-10-13 14:04:35 +00:00
Matt Arsenault 253640e18d AMDGPU: Assume spilling will occur at -O0
Because everything live is spilled at the end of a
block by fast regalloc, assume this will happen and
avoid the copies of the resource descriptor.

llvm-svn: 284119
2016-10-13 13:10:00 +00:00
Simon Pilgrim fa8fadc0e5 [DAGCombiner] Add vector support to C2-(A+C1) -> (C2-C1)-A folding
llvm-svn: 284117
2016-10-13 12:49:31 +00:00
Matt Arsenault dac31db12f AMDGPU: Fix truncate to bool warnings
llvm-svn: 284116
2016-10-13 12:45:16 +00:00
Simon Dardis 515e8699f4 [mips] Add IAS support for dvp, evp
These instructions were only defined for microMIPSR6 previously. Add
definitions for MIPSR6, correct definitions for microMIPSR6, flag these
instructions as having unmodelled side effects (they disable/enable
virtual processors) and add missing disassembler tests for microMIPSR6.

Reviewers: vkalintiris

Differential Review: https://reviews.llvm.org/D24291

llvm-svn: 284115
2016-10-13 12:12:56 +00:00
Simon Pilgrim 833b8a2071 [DAGCombiner] Add vector support to (sub -1, x) -> (xor x, -1) canonicalization
Improves commutation potential
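
A quick identity check in plain C++ (two's complement: -1 - x equals ~x, i.e. x ^ -1):

    int subForm(int x) { return -1 - x; }  // (sub -1, x)
    int xorForm(int x) { return x ^ -1; }  // (xor x, -1), i.e. ~x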

llvm-svn: 284113
2016-10-13 12:05:20 +00:00
Oren Ben Simhon 92ccbf20ff [X86] Basic additions to support RegCall Calling Convention.
The Register Calling Convention (RegCall) was introduced by Intel to optimize parameter transfer on function call.
This calling convention ensures that as many values as possible are passed or returned in registers.
This commit presents the basic additions to LLVM CodeGen in order to support RegCall in X86.

Differential Revision: http://reviews.llvm.org/D25022

llvm-svn: 284108
2016-10-13 07:53:43 +00:00
Daniel Jasper bee9dea306 Silence unused warning in non-assert builds.
llvm-svn: 284107
2016-10-13 06:39:44 +00:00
Craig Topper ff23af4299 [AVX-512] Teach shuffle lowering to recognize 512-bit zero extends.
llvm-svn: 284105
2016-10-13 05:29:41 +00:00
Craig Topper 8cb2efa58a [X86] Simplify the lowering code for extracting and inserting subvectors.
We don't need to check if AVX is enabled. It's implied by the operation action being set to Custom.
We don't need to check both the input and output type widths. We only need to check the type that's being inserted or extracted. The other type is known to be a legal type and we can assume it's a different width.

llvm-svn: 284102
2016-10-13 04:14:47 +00:00
Sebastian Pop 5068d7a338 Memory-SSA: strengthen defClobbersUseOrDef interface
As Danny pointed out, defClobbersUseOrDef should use MemoryLocOrCall to make
sure fences are properly handled.

llvm-svn: 284099
2016-10-13 03:23:33 +00:00
Sebastian Pop 5ba9f24ed7 commit back "GVN-hoist: fix store past load dependence analysis (PR30216, PR30499)"
This is with an extra change to avoid calling MemoryLocation::get() on a call instruction.

Differential Revision: https://reviews.llvm.org/D25542

llvm-svn: 284098
2016-10-13 01:39:10 +00:00
Quentin Colombet 6b87a3109c [AArch64][RegisterBankInfo] Provide alternative mappings for 64-bit load
This allows RegBankSelect in greedy mode to get rid some of the cross
register bank copies when loads are involved in the chain of
computation.

llvm-svn: 284097
2016-10-13 01:01:23 +00:00
Reid Kleckner 741d8a21d3 Correct PrivateLinkage for COFF
- Use storage class C_STAT for 'PrivateLinkage'. The storage class for
  PrivateLinkage should be equal to Internal Linkage.

- Set 'PrivateGlobalPrefix' from "L" to ".L" for MM_WinCOFF (includes
  x86_64). MM_WinCOFF has an empty GlobalPrefix '\0', so the PrivateGlobalPrefix
  "L" may conflict with normal symbol names starting with 'L'.

Based on a patch by Han Sangjin! Manually updated test cases.

llvm-svn: 284096
2016-10-13 00:55:24 +00:00
Quentin Colombet cd80e97e88 [AArch64][RegisterBankInfo] Provide alternative mappings for G_BITCASTs.
Thanks to this patch, RegBankSelect is able to get rid of some register
bank copies as demonstrated in the test case.

llvm-svn: 284094
2016-10-13 00:34:48 +00:00
Reid Kleckner 8958f6a529 Revert "GVN-hoist: fix store past load dependence analysis (PR30216, PR30499)"
This CL didn't actually address the test case in PR30499, and clang
still crashes.

Also revert dependent change "Memory-SSA cleanup of clobbers interface, NFC"

Reverts r283965 and r283967.

llvm-svn: 284093
2016-10-13 00:18:26 +00:00
Quentin Colombet 45c9c1432f [AArch64][RegisterBankInfo] Describe cross regbank copies statically.
NFC.

llvm-svn: 284091
2016-10-13 00:12:06 +00:00
Quentin Colombet 9e64919b7c [AArch64][RegisterBankInfo] Use static mapping for same bank G_BITCAST.
NFC.

llvm-svn: 284090
2016-10-13 00:12:04 +00:00
Quentin Colombet db643d9091 [AArch64][MachineLegalizer] Mark more G_BITCAST as legal.
Basically, any vector type that fits in a 32-bit register is also valid
as far as copies are concerned.

llvm-svn: 284089
2016-10-13 00:12:01 +00:00
Quentin Colombet f760799c40 [AArch64][RegisterBankInfo] Bump the cost of vector loads.
This does not change anything yet, because we do not offer any
alternative mapping.

llvm-svn: 284088
2016-10-13 00:11:59 +00:00
Quentin Colombet f35a8c5bdc [AArch64][RegisterBankInfo] Use a proper cost for cross regbank G_BITCASTs.
This does not change anything yet, because we do not offer any
alternative mapping.

llvm-svn: 284087
2016-10-13 00:11:57 +00:00
Quentin Colombet 27b40356f7 [AArch64][RegisterBankInfo] Provide more realistic copy costs.
llvm-svn: 284086
2016-10-13 00:11:55 +00:00
Krzysztof Parzyszek abc0662f04 Handle lane masks in LivePhysRegs when adding live-ins
Differential Revision: https://reviews.llvm.org/D25533

llvm-svn: 284076
2016-10-12 22:53:41 +00:00
Tim Northover fb8d989818 GlobalISel: support G_TRUNC selection on AArch64.
Ahmed's patch again.

llvm-svn: 284075
2016-10-12 22:49:15 +00:00