Commit Graph

60324 Commits

Author SHA1 Message Date
Sanjay Patel 08b5e68ef6 [InstCombine] add/adjust test for NaN checks; NFC
llvm-svn: 356383
2019-03-18 17:37:05 +00:00
Nirav Dave 55c921f4bf [DAG] Cleanup unused node in SimplifySelectCC.
Delete the temporarily constructed node used for analysis after its use,
holding onto the original input nodes. Ideally this would be rewritten
without making nodes, but that appears relatively complex.

Reviewers: spatel, RKSimon, craig.topper

Subscribers: jdoerfert, hiraditya, deadalnix, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D57921

llvm-svn: 356382
2019-03-18 17:02:38 +00:00
Neil Henning 523dab0788 [AMDGPU] Add an experimental buffer fat pointer address space.
Add an experimental buffer fat pointer address space that is currently
unhandled in the backend. This commit reserves address space 7 as a
non-integral pointer representing the 160-bit fat pointer (128-bit buffer
descriptor + 32-bit offset) that is heavily used in graphics workloads
using the AMDGPU backend.

Differential Revision: https://reviews.llvm.org/D58957

llvm-svn: 356373
2019-03-18 14:44:28 +00:00
Sanjay Patel 6063393536 [InstCombine] allow general vector constants for funnel shift to shift transforms
Follow-up to:
rL356338
rL356369

We can calculate an arbitrary vector constant minus the bitwidth, so there's
no need to limit this transform to scalars and splats.

llvm-svn: 356372
2019-03-18 14:27:51 +00:00
George Rimar faf308b11a [llvm-objcopy] - Calculate the string table section sizes correctly.
This fixes https://bugs.llvm.org/show_bug.cgi?id=40980.

Previously if string optimization occurred as a result of
StringTableBuilder's finalize() method, the size wasn't updated.

This hopefully also makes the interaction between sections during the
finalization process a bit clearer.

Differential revision: https://reviews.llvm.org/D59488

llvm-svn: 356371
2019-03-18 14:27:41 +00:00
Sanjay Patel 84de8a30a0 [InstCombine] extend rotate-left-by-constant canonicalization to funnel shift
Follow-up to:
rL356338

Rotates are a special case of funnel shift where the 2 input operands
are the same value, but that does not need to be a restriction for the
canonicalization when the shift amount is a constant.

llvm-svn: 356369
2019-03-18 14:10:11 +00:00
Simon Pilgrim f9ab4f5f4e [SystemZ] Remove icmp undef from reduced tests
Pre-commit for D59363 (Add icmp UNDEF handling to SelectionDAG::FoldSetCC)

Approved by @uweigand (Ulrich Weigand)

llvm-svn: 356368
2019-03-18 13:55:28 +00:00
Sanjay Patel d7f1539322 [InstCombine] add funnel shift tests with arbitrary constants; NFC
llvm-svn: 356367
2019-03-18 13:35:51 +00:00
David Stenberg 8a2e4af7e7 [DebugInfo] Ignore bitcasts when lowering stack arg dbg.values
Summary:
Look past bitcasts when looking for parameter debug values that are
described by frame-index loads in `EmitFuncArgumentDbgValue()`.

In the attached test case we would be left with an undef `DBG_VALUE`
for the parameter without this patch.

A similar fix was done for parameters passed in registers in D13005.

This fixes PR40777.

Reviewers: aprantl, vsk, jmorse

Reviewed By: aprantl

Subscribers: bjope, javed.absar, jdoerfert, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D58831

llvm-svn: 356363
2019-03-18 11:27:32 +00:00
Christof Douma 8cfd91dcc7 [AArch64] Fix bug 35094 atomicrmw on Armv8.1-A+lse
Fixes https://bugs.llvm.org/show_bug.cgi?id=35094

The Dead register definition pass should leave alone the atomicrmw
instructions on AArch64 (LSE extension). The reason is the following
statement in the Arm ARM:

"The ST<OP> instructions, and LD<OP> instructions where the destination
register is WZR or XZR, are not regarded as doing a read for the purpose
of a DMB LD barrier."

A good example was given in the gcc thread by Will Deacon (linked in the
bugzilla ticket 35094):

    P0 (atomic_int* y,atomic_int* x) {
      atomic_store_explicit(x,1,memory_order_relaxed);
      atomic_thread_fence(memory_order_release);
      atomic_store_explicit(y,1,memory_order_relaxed);
    }

    P1 (atomic_int* y,atomic_int* x) {
      atomic_fetch_add_explicit(y,1,memory_order_relaxed);  // STADD
      atomic_thread_fence(memory_order_acquire);
      int r0 = atomic_load_explicit(x,memory_order_relaxed);
    }

    P2 (atomic_int* y) {
      int r1 = atomic_load_explicit(y,memory_order_relaxed);
    }

    My understanding is that it is forbidden for r0 == 0 and r1 == 2 after
    this test has executed. However, if the relaxed add in P1 compiles to
    STADD and the subsequent acquire fence is compiled as DMB LD, then we
    don't have any ordering guarantees in P1 and the forbidden result could
    be observed.

Change-Id: I419f9f9df947716932038e1100c18d10a96408d0
llvm-svn: 356360
2019-03-18 09:21:06 +00:00
Matt Arsenault 4873056ced Remove immarg from llvm.expect
The LangRef claimed this was required to be a constant, but this
appears to be wrong.

Fixes bug 41079.

llvm-svn: 356353
2019-03-17 23:16:18 +00:00
Tim Renouf c4e128e221 [CodeGen] Defined MVTs v3i32, v3f32, v5i32, v5f32
AMDGPU would like to use these MVTs.

Differential Revision: https://reviews.llvm.org/D58901

Change-Id: I6125fea810d7cc62a4b4de3d9904255a1233ae4e
llvm-svn: 356351
2019-03-17 22:56:38 +00:00
David Green baa94ef03b [ARM] Check that CPSR does not have other uses
Fix up rL356335 by checking that CPSR is not read between
the compare and the branch.

llvm-svn: 356349
2019-03-17 21:36:15 +00:00
Matt Arsenault e0c1f9e76d AMDGPU: Partially fix default device for HSA
There are a few different issues, mostly stemming from using
generation based checks for anything instead of subtarget
features. Stop adding flat-address-space as a feature for HSA, as it
should only be a device property. This was incorrectly allowing flat
instructions to select for SI.

Increase the default generation for HSA to avoid the encoding error
when emitting objects. This has some other side effects from various
checks which probably should be separate subtarget features (in the
cost model and for dealing with the DS offset folding issue).

Partial fix for bug 41070. It should probably be an error to try using
amdhsa without flat support.

llvm-svn: 356347
2019-03-17 21:31:35 +00:00
Craig Topper affead9ad0 [X86] Remove the _alt forms of AVX512 VPCMP instructions. Use a combination of custom printing and custom parsing to achieve the same result and more
Similar to the previous patch for VPCOM.

Differential Revision: https://reviews.llvm.org/D59398

llvm-svn: 356344
2019-03-17 21:21:40 +00:00
Craig Topper 12509d87f3 [X86] Remove the _alt forms of XOP VPCOM instructions. Use a combination of custom printing and custom parsing to achieve the same result and more
Previously we had a regular form of the instruction used when the immediate was 0-7, and an _alt form that allowed the full 8 bit immediate. Codegen would always use the 0-7 form since the immediate was always checked to be in range. Assembly parsing would use the 0-7 form when a mnemonic like vpcomtrueb was used. If the immediate was specified directly, the _alt form was used. The disassembler would prefer the 0-7 form when the immediate was in range and the _alt form otherwise. This way disassembly would print the most readable form when possible.

The assembly parsing for things like vpcomtrueb relied on splitting the mnemonic into 3 pieces. A "vpcom" prefix, an immediate representing the "true", and a suffix of "b". The tablegenerated printing code would similarly print a "vpcom" prefix, decode the immediate into a string, and then print "b".

The _alt form on the other hand parsed and printed like any other instruction with no specialness.

With this patch we drop to one form and solve the disassembly printing issue by doing custom printing when the immediate is 0-7. The parsing code has been tweaked to turn "vpcomtrueb" into "vpcomb" and then the immediate for the "true" is inserted either before or after the other operands depending on at&t or intel syntax.

I'd rather not do the custom printing, but I tried using an InstAlias for each possible mnemonic for all 8 immediates for all 16 combinations of element size, signedness, and memory/register. The code emitted into printAliasInstr ended up checking the number of operands, the register class of each operand, and the immediate for all 256 aliases. This was repeated for both the at&t and intel printer. Despite a lot of common checks between all of the aliases, when compiled with clang at least this commonality was not well optimized. Nor do all the checks seem necessary. Since I want to do a similar thing for vcmpps/pd/ss/sd, which have 32 immediate values, 3 encoding flavors, 3 register sizes, etc., this didn't seem to scale well for clang binary size. So custom printing seemed a better trade-off.

I also considered just using the InstAlias for the matching and not the printing. But that seemed like it would add a lot of extra rows to the matcher table. Especially given that the 32 immediates for vpcmpps have 46 strings associated with them.

Differential Revision: https://reviews.llvm.org/D59398

llvm-svn: 356343
2019-03-17 21:21:37 +00:00
Tim Renouf e30aa6a136 [AMDGPU] Prepare for introduction of v3 and v5 MVTs
AMDGPU would like to have MVTs for v3i32, v3f32, v5i32, v5f32. This
commit does not add them, but makes preparatory changes:

* Fixed assumptions of power-of-2 vector type in kernel arg handling,
  and added v5 kernel arg tests and v3/v5 shader arg tests.

* Added v5 tests for cost analysis.

* Added vec3/vec5 arg test cases.

Some of this patch is from Matt Arsenault, also of AMD.

Differential Revision: https://reviews.llvm.org/D58928

Change-Id: I7279d6b4841464d2080eb255ef3c589e268eabcd
llvm-svn: 356342
2019-03-17 21:04:16 +00:00
Simon Pilgrim 10ba65cc48 [AMDGPU] Regenerate some f16/i16 tests.
Prep work for D51589

llvm-svn: 356340
2019-03-17 20:36:12 +00:00
Sanjay Patel b3bcd95771 [InstCombine] canonicalize rotate right by constant to rotate left
This was noted as a backend problem:
https://bugs.llvm.org/show_bug.cgi?id=41057
...and subsequently fixed for x86:
rL356121
But we should canonicalize these in IR for the benefit of all targets
and improve IR analysis such as CSE.
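
A hedged scalar illustration of the identity behind this canonicalization (plain C++ on i8, not the InstCombine code itself): a rotate right by a constant C is the same as a rotate left by (BitWidth - C) modulo the bitwidth.

```cpp
#include <cassert>
#include <cstdint>

// Reference rotates on i8; shift amounts are interpreted modulo the bitwidth,
// matching funnel-shift semantics.
static uint8_t rotl8(uint8_t x, unsigned k) {
  k &= 7;
  return (uint8_t)((x << k) | (x >> ((8 - k) & 7)));
}
static uint8_t rotr8(uint8_t x, unsigned k) {
  k &= 7;
  return (uint8_t)((x >> k) | (x << ((8 - k) & 7)));
}

int main() {
  for (unsigned x = 0; x < 256; ++x)
    for (unsigned c = 0; c < 8; ++c)
      // rotate-right by c == rotate-left by (8 - c) mod 8
      assert(rotr8((uint8_t)x, c) == rotl8((uint8_t)x, (8 - c) & 7));
}
```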

llvm-svn: 356338
2019-03-17 19:08:00 +00:00
Sanjay Patel a3a2f9424e [InstCombine] add tests for rotate by constant using funnel intrinsics; NFC
llvm-svn: 356337
2019-03-17 18:50:39 +00:00
David Green e0b48a8015 [ARM] Search backwards for CMP when combining into CBZ
The constant island pass currently only looks at the instruction immediately
before a branch for a CMP to fold into a CBZ/CBNZ. This extends it to search
backwards for the instruction that defines CPSR. We need to ensure that the
register is not overridden between the CMP and the branch.

Differential Revision: https://reviews.llvm.org/D59317

llvm-svn: 356336
2019-03-17 16:11:22 +00:00
David Green 30673299d4 [ARM] Add some CBZ constant island tests. NFC
llvm-svn: 356335
2019-03-17 16:00:21 +00:00
Nikita Popov 9a4453592b [DAGCombine] Fold (x & ~y) | y patterns
Fold (x & ~y) | y and its four commuted variants to x | y. This pattern
can in particular appear when a vselect c, x, -1 is expanded to
(x & ~c) | (-1 & c) and combined to (x & ~c) | c.

This change has some overlap with D59066, which avoids creating a
vselect of this form in the first place during uaddsat expansion.
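
A hedged scalar sketch of the boolean identity being folded (plain C++, not the DAG combine itself); it exhaustively checks the pattern and its commuted variants on 8-bit values.

```cpp
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned x = 0; x < 256; ++x)
    for (unsigned y = 0; y < 256; ++y) {
      uint8_t a = (uint8_t)x, b = (uint8_t)y;
      // (x & ~y) | y and its commuted variants all simplify to x | y.
      assert((uint8_t)((a & ~b) | b) == (uint8_t)(a | b));
      assert((uint8_t)(b | (a & ~b)) == (uint8_t)(a | b));
      assert((uint8_t)((~b & a) | b) == (uint8_t)(a | b));
      assert((uint8_t)(b | (~b & a)) == (uint8_t)(a | b));
    }
}
```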

Differential Revision: https://reviews.llvm.org/D59174

llvm-svn: 356333
2019-03-17 15:45:38 +00:00
Sanjay Patel 6a6e808b69 [TargetLowering] improve the default expansion of uaddsat/usubsat
This is a subset of what was proposed in:
D59006
...and may overlap with test changes from:
D59174
...but it seems like a good general optimization to turn selects
into bitwise-logic when possible because we never know exactly
what can happen at this stage of DAG combining depending on how
the target has defined things.
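
A hedged scalar illustration of the select-to-bitwise-logic idea mentioned above (plain C++ on i8, not the actual legalization code): unsigned saturating add written with a select versus with an OR against an all-ones mask derived from the overflow bit.

```cpp
#include <cassert>
#include <cstdint>

static uint8_t uaddsat_select(uint8_t x, uint8_t y) {
  uint8_t sum = (uint8_t)(x + y);
  return sum < x ? (uint8_t)0xFF : sum;          // select form
}

static uint8_t uaddsat_bitwise(uint8_t x, uint8_t y) {
  uint8_t sum = (uint8_t)(x + y);
  uint8_t mask = (uint8_t)-(uint8_t)(sum < x);   // 0x00 or 0xFF
  return (uint8_t)(sum | mask);                  // bitwise-logic form
}

int main() {
  for (unsigned x = 0; x < 256; ++x)
    for (unsigned y = 0; y < 256; ++y)
      assert(uaddsat_select((uint8_t)x, (uint8_t)y) ==
             uaddsat_bitwise((uint8_t)x, (uint8_t)y));
}
```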

Differential Revision: https://reviews.llvm.org/D59066

llvm-svn: 356332
2019-03-17 14:57:40 +00:00
Alex Bradbury b18e314a7c [RISCV] Fix RISCVAsmParser::ParseRegister and add tests
RISCVAsmParser::ParseRegister is called from AsmParser::parseRegisterOrNumber,
which in turn is called when processing CFI directives. The RISC-V
implementation wasn't setting RegNo, and so was incorrect. This patch addresses
that and adds cfi directive tests that demonstrate the fix. A follow-up patch
will factor out the register parsing logic shared between ParseRegister and
parseRegister.

llvm-svn: 356329
2019-03-17 12:00:58 +00:00
Simon Pilgrim 3b0a6c69ee [DAGCombine] combineShuffleOfScalars - handle non-zero SCALAR_TO_VECTOR indices (PR41097)
rL356292 reduces the size of scalar_to_vector if we know the upper bits are undef - which means that shuffles may find they are suddenly referencing scalar_to_vector elements other than zero - so make sure we handle this as undef.

llvm-svn: 356327
2019-03-16 17:36:26 +00:00
Yonghong Song 6db6b56a5c [BPF] Add BTF Var and DataSec Support
Two new kinds, BTF_KIND_VAR and BTF_KIND_DATASEC, are added.

BTF_KIND_VAR has the following specification:
   btf_type.name: var name
   btf_type.info: type kind
   btf_type.type: var type
   // btf_type is followed by one u32
   u32: varinfo (currently, only 0 - static, 1 - global allocated in elf sections)

Not all globals are supported in this patch. The following globals are supported:
  . static variables with or without section attributes
  . global variables with section attributes

The inclusion of globals with section attributes
is for future potential extraction of key/value
type ids from map definitions.

BTF_KIND_DATASEC has the following specification:
  btf_type.name: section name associated with variable or
                 one of .data/.bss/.readonly
  btf_type.info: type kind and vlen for # of variables
  btf_type.size: 0
  #vlen number of the following:
    u32: id of corresponding BTF_KIND_VAR
    u32: in-section offset of the var
    u32: the size of memory var occupied

At the time of debug info emission, the data section
size is unknown, so the btf_type.size = 0 for
BTF_KIND_DATASEC. The loader can patch it during
loading time.

The in-section offset of the var is only available
for static variables. For global variables, the
loader needs to assign the global variable's symbol value in the
symbol table to the in-section offset.

The size of memory is used to specify the amount of
memory a variable occupies. Typically, it equals
the type size, but for certain structures, e.g.,
  struct tt {
    int a;
    int b;
    char c[];
   };
   static volatile struct tt s2 = {3, 4, "abcdefghi"};
The static variable s2 has a size of 20.

Note that for the BTF_KIND_DATASEC name, the section name
does not contain the object name. The compiler does have the
input module name. For example, consider the two cases below:
   . clang -target bpf -O2 -g -c test.c
     The compiler knows the input file (module) is test.c
     and can generate sec name like test.data/test.bss etc.
   . clang -target bpf -O2 -g -emit-llvm -c test.c -o - |
     llc -march=bpf -filetype=obj -o test.o
     The llc compiler has the input file as stdin, and
     would generate something like stdin.data/stdin.bss etc.
     which does not really make sense.

For any user-specified section name, e.g.,
  static volatile int a __attribute__((section("id1")));
  static volatile const int b __attribute__((section("id2")));
The DataSecs with names "id1" and "id2" do not contain
information about whether the section is read-only or not.
The loader needs to check the corresponding elf section
flags for such information.

A simple example:
  -bash-4.4$ cat t.c
  int g1;
  int g2 = 3;
  const int g3 = 4;
  static volatile int s1;
  struct tt {
   int a;
   int b;
   char c[];
  };
  static volatile struct tt s2 = {3, 4, "abcdefghi"};
  static volatile const int s3 = 4;
  int m __attribute__((section("maps"), used)) = 4;
  int test() { return g1 + g2 + g3 + s1 + s2.a + s3 + m; }
  -bash-4.4$ clang -target bpf -O2 -g -S t.c
Checking t.s, 4 BTF_KIND_VAR's are generated (s1, s2, s3 and m).
4 BTF_KIND_DATASEC's are generated with names
".data", ".bss", ".rodata" and "maps".

Signed-off-by: Yonghong Song <yhs@fb.com>

Differential Revision: https://reviews.llvm.org/D59441

llvm-svn: 356326
2019-03-16 15:36:31 +00:00
Simon Pilgrim f2c53b5d6c [X86][SSE] Constant fold PEXTRB/PEXTRW/EXTRACT_VECTOR_ELT nodes.
Replaces existing i1-only fold.

llvm-svn: 356325
2019-03-16 15:02:00 +00:00
Simon Pilgrim 0f472e1d01 [X86] Add SimplifyDemandedBitsForTargetNode support for PEXTRB/PEXTRW
Improved constant folding for PEXTRB/PEXTRW will be added in a future commit

llvm-svn: 356324
2019-03-16 14:29:50 +00:00
Heejin Ahn 66ce419468 [WebAssembly] Make rethrow take an except_ref type argument
Summary:
In the new wasm EH proposal, `rethrow` takes an `except_ref` argument.
This change was missing in r352598.

This patch adds the `llvm.wasm.rethrow.in.catch` intrinsic. This
intrinsic will eventually be lowered to the wasm `rethrow`
instruction, but it can appear only within a catchpad or a
cleanuppad scope. Also, this intrinsic needs to be invokable - otherwise
the EH pad successor for it will not be correctly generated in clang.

This also adds lowering logic for this intrinsic in
`SelectionDAGBuilder::visitInvoke`. This routine is basically a
specialized and simplified version of
`SelectionDAGBuilder::visitTargetIntrinsic`, but we can't use it
because it is only for `CallInst`s.

This deletes the previous `llvm.wasm.rethrow` intrinsic and related
tests, which was meant to be used within a `__cxa_rethrow` library
function. It turned out this needs some more logic, so the intrinsic for
this purpose will be added later.

LateEHPrepare takes a result value of `catch` and inserts it into
matching `rethrow` as an argument.

`RETHROW_IN_CATCH` is a pseudo instruction that serves as a link between
`llvm.wasm.rethrow.in.catch` and the real wasm `rethrow` instruction. To
generate a `rethrow` instruction, we need an `except_ref` argument,
which is generated from a `catch` instruction. But `catch` instructions are
added in the LateEHPrepare pass, so we use `RETHROW_IN_CATCH`, which takes
no argument, until we are able to correctly lower it to `rethrow` in
LateEHPrepare.

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59352

llvm-svn: 356316
2019-03-16 05:38:57 +00:00
Heejin Ahn a41250c7be [WebAssembly] Irreducible control flow rewrite
Summary:
Rewrite WebAssemblyFixIrreducibleControlFlow to a simpler and cleaner
design, which directly computes reachability and other properties
itself. This avoids previous complexity and bugs. (The new graph
analyses are very similar to how the Relooper algorithm would find loop
entries and so forth.)

This fixes a few bugs, including where we had a false positive and
thought fannkuch was irreducible when it was not, which made us much
larger and slower there, and a reverse bug where we missed
irreducibility. On fannkuch, we used to be 44% slower than asm2wasm and
are now 4% faster.

Reviewers: aheejin

Subscribers: jdoerfert, mgrang, dschuff, sbc100, jgravelle-google, sunfish, llvm-commits

Differential Revision: https://reviews.llvm.org/D58919

Patch by Alon Zakai (kripken)

llvm-svn: 356313
2019-03-16 03:00:19 +00:00
Fedor Sergeev 6a9c2f4f98 [TimePasses] allow -time-passes reporting into a custom stream
The TimePassesHandler object (the implementation of time-passes for the new pass manager)
gains the ability to report into a stream customizable per instance (per pipeline).

The intended use is to specify a separate time-passes output stream per compilation,
by setting up the TimePasses member of StandardInstrumentation during PassBuilder setup.
That allows getting independent, non-overlapping pass-timing reports for parallel
independent compilations (in JIT-like setups).

By default it still puts timing reports into the info-output-file stream
(created by CreateInfoOutputFile every time a report is requested).

A unit test was added for the non-default case, and it also uncovered that print() did not work
as declared - it did not reset the timers, leading to yet another report being printed into the default stream.
print() was fixed to actually reset the timers, as was already declared in its comments.

Reviewed By: philip.pfaffe
Differential Revision: https://reviews.llvm.org/D59366

llvm-svn: 356305
2019-03-15 22:15:23 +00:00
Eli Friedman 68d9a60573 [ARM] Add MachineVerifier logic for some Thumb1 instructions.
tMOVr and tPUSH/tPOP/tPOP_RET have register constraints which can't be
expressed in TableGen, so check them explicitly. I've unfortunately run
into issues with both of these recently; hopefully this saves some time
for someone else in the future.

Differential Revision: https://reviews.llvm.org/D59383

llvm-svn: 356303
2019-03-15 21:44:49 +00:00
Roman Lebedev 9f37790608 [X86] X86ISelLowering::combineSextInRegCmov(): also handle i8 CMOV's
Summary:
As noted by @andreadb in https://reviews.llvm.org/D59035#inline-525780

If we have `sext (trunc (cmov C0, C1) to i8)`,
we can instead do `cmov (sext (trunc C0 to i8)), (sext (trunc C1 to i8))`
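
A hedged scalar illustration of the equivalence behind this combine (plain C++ standing in for the DAG nodes; the `before`/`after` names are hypothetical): sign-extending the truncation of a select gives the same value as selecting between the sign-extended truncations of the inputs.

```cpp
#include <cassert>
#include <cstdint>

// sext(trunc(cmov C0, C1)) modeled on scalars. Narrowing int32_t -> int8_t
// wraps modulo 2^8 (guaranteed since C++20, universal in practice).
static int32_t before(bool c, int32_t a, int32_t b) {
  return (int32_t)(int8_t)(c ? a : b);
}
// cmov(sext(trunc C0), sext(trunc C1))
static int32_t after(bool c, int32_t a, int32_t b) {
  return c ? (int32_t)(int8_t)a : (int32_t)(int8_t)b;
}

int main() {
  for (int32_t v = -300; v <= 300; ++v) {
    assert(before(true, v, 7) == after(true, v, 7));
    assert(before(false, 7, v) == after(false, 7, v));
  }
}
```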

Reviewers: craig.topper, andreadb, RKSimon

Reviewed By: craig.topper

Subscribers: llvm-commits, andreadb

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59412

llvm-svn: 356301
2019-03-15 21:18:05 +00:00
Roman Lebedev b6e376ddfa [X86] Promote i8 CMOV's (PR40965)
Summary:
@mclow.lists brought this issue up in IRC; it came up during the
implementation of libc++'s `std::midpoint()` (D59099)
https://godbolt.org/z/oLrHBP

Currently the LLVM X86 backend only promotes an i8 CMOV if it came from 2x `trunc`.
This differential proposes to always promote i8 CMOV.

There are several concerns here:
* Is this actually more performant, or is it just the ASM that looks cuter?
* Does this result in partial register stalls?
* What about branch predictor?

# Indeed, performance should be the main point here.
Let's look at a simple microbenchmark: {F8412076}
```
#include "benchmark/benchmark.h"

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iterator>
#include <limits>
#include <random>
#include <type_traits>
#include <utility>
#include <vector>

// Future preliminary libc++ code, from Marshall Clow.
namespace std {
template <class _Tp>
__inline _Tp midpoint(_Tp __a, _Tp __b) noexcept {
  using _Up = typename std::make_unsigned<typename remove_cv<_Tp>::type>::type;

  int __sign = 1;
  _Up __m = __a;
  _Up __M = __b;
  if (__a > __b) {
    __sign = -1;
    __m = __b;
    __M = __a;
  }
  return __a + __sign * _Tp(_Up(__M - __m) >> 1);
}
}  // namespace std

template <typename T>
std::vector<T> getVectorOfRandomNumbers(size_t count) {
  std::random_device rd;
  std::mt19937 gen(rd());
  std::uniform_int_distribution<T> dis(std::numeric_limits<T>::min(),
                                       std::numeric_limits<T>::max());
  std::vector<T> v;
  v.reserve(count);
  std::generate_n(std::back_inserter(v), count,
                  [&dis, &gen]() { return dis(gen); });
  assert(v.size() == count);
  return v;
}

struct RandRand {
  template <typename T>
  static std::pair<std::vector<T>, std::vector<T>> Gen(size_t count) {
    return std::make_pair(getVectorOfRandomNumbers<T>(count),
                          getVectorOfRandomNumbers<T>(count));
  }
};
struct ZeroRand {
  template <typename T>
  static std::pair<std::vector<T>, std::vector<T>> Gen(size_t count) {
    return std::make_pair(std::vector<T>(count, T(0)),
                          getVectorOfRandomNumbers<T>(count));
  }
};

template <class T, class Gen>
void BM_StdMidpoint(benchmark::State& state) {
  const size_t Length = state.range(0);

  const std::pair<std::vector<T>, std::vector<T>> Data =
      Gen::template Gen<T>(Length);
  const std::vector<T>& a = Data.first;
  const std::vector<T>& b = Data.second;
  assert(a.size() == Length && b.size() == a.size());

  benchmark::ClobberMemory();
  benchmark::DoNotOptimize(a);
  benchmark::DoNotOptimize(a.data());
  benchmark::DoNotOptimize(b);
  benchmark::DoNotOptimize(b.data());

  for (auto _ : state) {
    for (size_t i = 0; i < Length; i++) {
      const auto calculated = std::midpoint(a[i], b[i]);
      benchmark::DoNotOptimize(calculated);
    }
  }
  state.SetComplexityN(Length);
  state.counters["midpoints"] =
      benchmark::Counter(Length, benchmark::Counter::kIsIterationInvariant);
  state.counters["midpoints/sec"] =
      benchmark::Counter(Length, benchmark::Counter::kIsIterationInvariantRate);
  const size_t BytesRead = 2 * sizeof(T) * Length;
  state.counters["bytes_read/iteration"] =
      benchmark::Counter(BytesRead, benchmark::Counter::kDefaults,
                         benchmark::Counter::OneK::kIs1024);
  state.counters["bytes_read/sec"] = benchmark::Counter(
      BytesRead, benchmark::Counter::kIsIterationInvariantRate,
      benchmark::Counter::OneK::kIs1024);
}

template <typename T>
static void CustomArguments(benchmark::internal::Benchmark* b) {
  const size_t L2SizeBytes = 2 * 1024 * 1024;
  // What is the largest range we can check to always fit within given L2 cache?
  const size_t MaxLen = L2SizeBytes / /*total bufs*/ 2 /
                        /*maximal elt size*/ sizeof(T) / /*safety margin*/ 2;
  b->RangeMultiplier(2)->Range(1, MaxLen)->Complexity(benchmark::oN);
}

// Both of the values are random.
// The comparison is unpredictable.
BENCHMARK_TEMPLATE(BM_StdMidpoint, int32_t, RandRand)
    ->Apply(CustomArguments<int32_t>);
BENCHMARK_TEMPLATE(BM_StdMidpoint, uint32_t, RandRand)
    ->Apply(CustomArguments<uint32_t>);
BENCHMARK_TEMPLATE(BM_StdMidpoint, int64_t, RandRand)
    ->Apply(CustomArguments<int64_t>);
BENCHMARK_TEMPLATE(BM_StdMidpoint, uint64_t, RandRand)
    ->Apply(CustomArguments<uint64_t>);
BENCHMARK_TEMPLATE(BM_StdMidpoint, int16_t, RandRand)
    ->Apply(CustomArguments<int16_t>);
BENCHMARK_TEMPLATE(BM_StdMidpoint, uint16_t, RandRand)
    ->Apply(CustomArguments<uint16_t>);
BENCHMARK_TEMPLATE(BM_StdMidpoint, int8_t, RandRand)
    ->Apply(CustomArguments<int8_t>);
BENCHMARK_TEMPLATE(BM_StdMidpoint, uint8_t, RandRand)
    ->Apply(CustomArguments<uint8_t>);

// One value is always zero, and another is bigger or equal than zero.
// The comparison is predictable.
BENCHMARK_TEMPLATE(BM_StdMidpoint, uint32_t, ZeroRand)
    ->Apply(CustomArguments<uint32_t>);
BENCHMARK_TEMPLATE(BM_StdMidpoint, uint64_t, ZeroRand)
    ->Apply(CustomArguments<uint64_t>);
BENCHMARK_TEMPLATE(BM_StdMidpoint, uint16_t, ZeroRand)
    ->Apply(CustomArguments<uint16_t>);
BENCHMARK_TEMPLATE(BM_StdMidpoint, uint8_t, ZeroRand)
    ->Apply(CustomArguments<uint8_t>);
```

```
$ ~/src/googlebenchmark/tools/compare.py --no-utest benchmarks ./llvm-cmov-bench-OLD ./llvm-cmov-bench-NEW
RUNNING: ./llvm-cmov-bench-OLD --benchmark_out=/tmp/tmp5a5qjm
2019-03-06 21:53:31
Running ./llvm-cmov-bench-OLD
Run on (8 X 4000 MHz CPU s)
CPU Caches:
  L1 Data 16K (x8)
  L1 Instruction 64K (x4)
  L2 Unified 2048K (x4)
  L3 Unified 8192K (x1)
Load Average: 1.78, 1.81, 1.36
----------------------------------------------------------------------------------------------------
Benchmark                                          Time             CPU   Iterations UserCounters<...>
----------------------------------------------------------------------------------------------------
<...>
BM_StdMidpoint<int32_t, RandRand>/131072      300398 ns       300404 ns         2330 bytes_read/iteration=1024k bytes_read/sec=3.25083G/s midpoints=305.398M midpoints/sec=436.319M/s
BM_StdMidpoint<int32_t, RandRand>_BigO          2.29 N          2.29 N
BM_StdMidpoint<int32_t, RandRand>_RMS              2 %             2 %
<...>
BM_StdMidpoint<uint32_t, RandRand>/131072     300433 ns       300433 ns         2330 bytes_read/iteration=1024k bytes_read/sec=3.25052G/s midpoints=305.398M midpoints/sec=436.278M/s
BM_StdMidpoint<uint32_t, RandRand>_BigO         2.29 N          2.29 N
BM_StdMidpoint<uint32_t, RandRand>_RMS             2 %             2 %
<...>
BM_StdMidpoint<int64_t, RandRand>/65536       169857 ns       169858 ns         4121 bytes_read/iteration=1024k bytes_read/sec=5.74929G/s midpoints=270.074M midpoints/sec=385.828M/s
BM_StdMidpoint<int64_t, RandRand>_BigO          2.59 N          2.59 N
BM_StdMidpoint<int64_t, RandRand>_RMS              3 %             3 %
<...>
BM_StdMidpoint<uint64_t, RandRand>/65536      169770 ns       169771 ns         4125 bytes_read/iteration=1024k bytes_read/sec=5.75223G/s midpoints=270.336M midpoints/sec=386.026M/s
BM_StdMidpoint<uint64_t, RandRand>_BigO         2.59 N          2.59 N
BM_StdMidpoint<uint64_t, RandRand>_RMS             3 %             3 %
<...>
BM_StdMidpoint<int16_t, RandRand>/262144      591169 ns       591179 ns         1182 bytes_read/iteration=1024k bytes_read/sec=1.65189G/s midpoints=309.854M midpoints/sec=443.426M/s
BM_StdMidpoint<int16_t, RandRand>_BigO          2.25 N          2.25 N
BM_StdMidpoint<int16_t, RandRand>_RMS              1 %             1 %
<...>
BM_StdMidpoint<uint16_t, RandRand>/262144     591264 ns       591274 ns         1184 bytes_read/iteration=1024k bytes_read/sec=1.65162G/s midpoints=310.378M midpoints/sec=443.354M/s
BM_StdMidpoint<uint16_t, RandRand>_BigO         2.25 N          2.25 N
BM_StdMidpoint<uint16_t, RandRand>_RMS             1 %             1 %
<...>
BM_StdMidpoint<int8_t, RandRand>/524288      2983669 ns      2983689 ns          235 bytes_read/iteration=1024k bytes_read/sec=335.156M/s midpoints=123.208M midpoints/sec=175.718M/s
BM_StdMidpoint<int8_t, RandRand>_BigO           5.69 N          5.69 N
BM_StdMidpoint<int8_t, RandRand>_RMS               0 %             0 %
<...>
BM_StdMidpoint<uint8_t, RandRand>/524288     2668398 ns      2668419 ns          262 bytes_read/iteration=1024k bytes_read/sec=374.754M/s midpoints=137.363M midpoints/sec=196.479M/s
BM_StdMidpoint<uint8_t, RandRand>_BigO          5.09 N          5.09 N
BM_StdMidpoint<uint8_t, RandRand>_RMS              0 %             0 %
<...>
BM_StdMidpoint<uint32_t, ZeroRand>/131072     300887 ns       300887 ns         2331 bytes_read/iteration=1024k bytes_read/sec=3.24561G/s midpoints=305.529M midpoints/sec=435.619M/s
BM_StdMidpoint<uint32_t, ZeroRand>_BigO         2.29 N          2.29 N
BM_StdMidpoint<uint32_t, ZeroRand>_RMS             2 %             2 %
<...>
BM_StdMidpoint<uint64_t, ZeroRand>/65536      169634 ns       169634 ns         4102 bytes_read/iteration=1024k bytes_read/sec=5.75688G/s midpoints=268.829M midpoints/sec=386.338M/s
BM_StdMidpoint<uint64_t, ZeroRand>_BigO         2.59 N          2.59 N
BM_StdMidpoint<uint64_t, ZeroRand>_RMS             3 %             3 %
<...>
BM_StdMidpoint<uint16_t, ZeroRand>/262144     592252 ns       592255 ns         1182 bytes_read/iteration=1024k bytes_read/sec=1.64889G/s midpoints=309.854M midpoints/sec=442.62M/s
BM_StdMidpoint<uint16_t, ZeroRand>_BigO         2.26 N          2.26 N
BM_StdMidpoint<uint16_t, ZeroRand>_RMS             1 %             1 %
<...>
BM_StdMidpoint<uint8_t, ZeroRand>/524288      987295 ns       987309 ns          711 bytes_read/iteration=1024k bytes_read/sec=1012.85M/s midpoints=372.769M midpoints/sec=531.028M/s
BM_StdMidpoint<uint8_t, ZeroRand>_BigO          1.88 N          1.88 N
BM_StdMidpoint<uint8_t, ZeroRand>_RMS              1 %             1 %
RUNNING: ./llvm-cmov-bench-NEW --benchmark_out=/tmp/tmpPvwpfW
2019-03-06 21:56:58
Running ./llvm-cmov-bench-NEW
Run on (8 X 4000 MHz CPU s)
CPU Caches:
  L1 Data 16K (x8)
  L1 Instruction 64K (x4)
  L2 Unified 2048K (x4)
  L3 Unified 8192K (x1)
Load Average: 1.17, 1.46, 1.30
----------------------------------------------------------------------------------------------------
Benchmark                                          Time             CPU   Iterations UserCounters<...>
----------------------------------------------------------------------------------------------------
<...>
BM_StdMidpoint<int32_t, RandRand>/131072      300878 ns       300880 ns         2324 bytes_read/iteration=1024k bytes_read/sec=3.24569G/s midpoints=304.611M midpoints/sec=435.629M/s
BM_StdMidpoint<int32_t, RandRand>_BigO          2.29 N          2.29 N
BM_StdMidpoint<int32_t, RandRand>_RMS              2 %             2 %
<...>
BM_StdMidpoint<uint32_t, RandRand>/131072     300231 ns       300226 ns         2330 bytes_read/iteration=1024k bytes_read/sec=3.25276G/s midpoints=305.398M midpoints/sec=436.578M/s
BM_StdMidpoint<uint32_t, RandRand>_BigO         2.29 N          2.29 N
BM_StdMidpoint<uint32_t, RandRand>_RMS             2 %             2 %
<...>
BM_StdMidpoint<int64_t, RandRand>/65536       170819 ns       170777 ns         4115 bytes_read/iteration=1024k bytes_read/sec=5.71835G/s midpoints=269.681M midpoints/sec=383.752M/s
BM_StdMidpoint<int64_t, RandRand>_BigO          2.60 N          2.60 N
BM_StdMidpoint<int64_t, RandRand>_RMS              3 %             3 %
<...>
BM_StdMidpoint<uint64_t, RandRand>/65536      171705 ns       171708 ns         4106 bytes_read/iteration=1024k bytes_read/sec=5.68733G/s midpoints=269.091M midpoints/sec=381.671M/s
BM_StdMidpoint<uint64_t, RandRand>_BigO         2.62 N          2.62 N
BM_StdMidpoint<uint64_t, RandRand>_RMS             3 %             3 %
<...>
BM_StdMidpoint<int16_t, RandRand>/262144      592510 ns       592516 ns         1182 bytes_read/iteration=1024k bytes_read/sec=1.64816G/s midpoints=309.854M midpoints/sec=442.425M/s
BM_StdMidpoint<int16_t, RandRand>_BigO          2.26 N          2.26 N
BM_StdMidpoint<int16_t, RandRand>_RMS              1 %             1 %
<...>
BM_StdMidpoint<uint16_t, RandRand>/262144     614823 ns       614823 ns         1180 bytes_read/iteration=1024k bytes_read/sec=1.58836G/s midpoints=309.33M midpoints/sec=426.373M/s
BM_StdMidpoint<uint16_t, RandRand>_BigO         2.33 N          2.33 N
BM_StdMidpoint<uint16_t, RandRand>_RMS             4 %             4 %
<...>
BM_StdMidpoint<int8_t, RandRand>/524288      1073181 ns      1073201 ns          650 bytes_read/iteration=1024k bytes_read/sec=931.791M/s midpoints=340.787M midpoints/sec=488.527M/s
BM_StdMidpoint<int8_t, RandRand>_BigO           2.05 N          2.05 N
BM_StdMidpoint<int8_t, RandRand>_RMS               1 %             1 %
BM_StdMidpoint<uint8_t, RandRand>/524288     1071010 ns      1071020 ns          653 bytes_read/iteration=1024k bytes_read/sec=933.689M/s midpoints=342.36M midpoints/sec=489.522M/s
BM_StdMidpoint<uint8_t, RandRand>_BigO          2.05 N          2.05 N
BM_StdMidpoint<uint8_t, RandRand>_RMS              1 %             1 %
<...>
BM_StdMidpoint<uint32_t, ZeroRand>/131072     300413 ns       300416 ns         2330 bytes_read/iteration=1024k bytes_read/sec=3.2507G/s midpoints=305.398M midpoints/sec=436.302M/s
BM_StdMidpoint<uint32_t, ZeroRand>_BigO         2.29 N          2.29 N
BM_StdMidpoint<uint32_t, ZeroRand>_RMS             2 %             2 %
<...>
BM_StdMidpoint<uint64_t, ZeroRand>/65536      169667 ns       169669 ns         4123 bytes_read/iteration=1024k bytes_read/sec=5.75568G/s midpoints=270.205M midpoints/sec=386.257M/s
BM_StdMidpoint<uint64_t, ZeroRand>_BigO         2.59 N          2.59 N
BM_StdMidpoint<uint64_t, ZeroRand>_RMS             3 %             3 %
<...>
BM_StdMidpoint<uint16_t, ZeroRand>/262144     591396 ns       591404 ns         1184 bytes_read/iteration=1024k bytes_read/sec=1.65126G/s midpoints=310.378M midpoints/sec=443.257M/s
BM_StdMidpoint<uint16_t, ZeroRand>_BigO         2.26 N          2.26 N
BM_StdMidpoint<uint16_t, ZeroRand>_RMS             1 %             1 %
<...>
BM_StdMidpoint<uint8_t, ZeroRand>/524288     1069421 ns      1069413 ns          655 bytes_read/iteration=1024k bytes_read/sec=935.092M/s midpoints=343.409M midpoints/sec=490.258M/s
BM_StdMidpoint<uint8_t, ZeroRand>_BigO          2.04 N          2.04 N
BM_StdMidpoint<uint8_t, ZeroRand>_RMS              0 %             0 %
Comparing ./llvm-cmov-bench-OLD to ./llvm-cmov-bench-NEW
Benchmark                                                   Time             CPU      Time Old      Time New       CPU Old       CPU New
----------------------------------------------------------------------------------------------------------------------------------------
<...>
BM_StdMidpoint<int32_t, RandRand>/131072                 +0.0016         +0.0016        300398        300878        300404        300880
<...>
BM_StdMidpoint<uint32_t, RandRand>/131072                -0.0007         -0.0007        300433        300231        300433        300226
<...>
BM_StdMidpoint<int64_t, RandRand>/65536                  +0.0057         +0.0054        169857        170819        169858        170777
<...>
BM_StdMidpoint<uint64_t, RandRand>/65536                 +0.0114         +0.0114        169770        171705        169771        171708
<...>
BM_StdMidpoint<int16_t, RandRand>/262144                 +0.0023         +0.0023        591169        592510        591179        592516
<...>
BM_StdMidpoint<uint16_t, RandRand>/262144                +0.0398         +0.0398        591264        614823        591274        614823
<...>
BM_StdMidpoint<int8_t, RandRand>/524288                  -0.6403         -0.6403       2983669       1073181       2983689       1073201
<...>
BM_StdMidpoint<uint8_t, RandRand>/524288                 -0.5986         -0.5986       2668398       1071010       2668419       1071020
<...>
BM_StdMidpoint<uint32_t, ZeroRand>/131072                -0.0016         -0.0016        300887        300413        300887        300416
<...>
BM_StdMidpoint<uint64_t, ZeroRand>/65536                 +0.0002         +0.0002        169634        169667        169634        169669
<...>
BM_StdMidpoint<uint16_t, ZeroRand>/262144                -0.0014         -0.0014        592252        591396        592255        591404
<...>
BM_StdMidpoint<uint8_t, ZeroRand>/524288                 +0.0832         +0.0832        987295       1069421        987309       1069413
```

What can we tell from the benchmark?
* `BM_StdMidpoint<[u]int8_t, RandRand>` indeed has the worst performance.
* All `BM_StdMidpoint<uint{8,16,32}_t, ZeroRand>` are performant, even the 8-bit case.
  That is because there we are computing the midpoint between zero and some random number,
  so if the branch predictor is in use, it is in an optimal situation.
* Promoting 8-bit CMOV did improve performance of `BM_StdMidpoint<[u]int8_t, RandRand>`, by -59%..-64%.

# What about branch predictor?
* `BM_StdMidpoint<uint8_t, ZeroRand>` was faster than `BM_StdMidpoint<uint{16,32,64}_t, ZeroRand>`,
  which may mean that a well-predicted branch is better than `cmov`.
* Promoting 8-bit CMOV degraded performance of `BM_StdMidpoint<uint8_t, ZeroRand>`;
  `cmov` is up to +10% worse than a well-predicted branch.
* However, I do not believe this is a concern. If the branch is well predicted, then PGO
  will also say that it is well predicted, and LLVM will happily expand the cmov back into a branch:
  https://godbolt.org/z/P5ufig

# What about partial register stalls?
I'm not really able to answer that.
What I can say is that if the branch is unpredictable (if it is predictable, then use PGO and you'll have a branch),
then in ~50% of cases you will have to pay the branch misprediction penalty.
```
$ grep -i MispredictPenalty X86Sched*.td
X86SchedBroadwell.td:  let MispredictPenalty = 16;
X86SchedHaswell.td:  let MispredictPenalty = 16;
X86SchedSandyBridge.td:  let MispredictPenalty = 16;
X86SchedSkylakeClient.td:  let MispredictPenalty = 14;
X86SchedSkylakeServer.td:  let MispredictPenalty = 14;
X86ScheduleBdVer2.td:  let MispredictPenalty = 20; // Minimum branch misdirection penalty.
X86ScheduleBtVer2.td:  let MispredictPenalty = 14; // Minimum branch misdirection penalty
X86ScheduleSLM.td:  let MispredictPenalty = 10;
X86ScheduleZnver1.td:  let MispredictPenalty = 17;
```
... which can be as small as 10 cycles and as large as 20 cycles.
Partial register stalls do not seem to be an issue for AMD CPUs.
For Intel CPUs, they should be around ~5 cycles?
Is that actually an issue here? I'm not sure.

In short, I'd say this is an improvement, at least on this microbenchmark.

Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=40965 | PR40965 ]].

Reviewers: craig.topper, RKSimon, spatel, andreadb, nikic

Reviewed By: craig.topper, andreadb

Subscribers: jfb, jdoerfert, llvm-commits, mclow.lists

Tags: #llvm, #libc

Differential Revision: https://reviews.llvm.org/D59035

llvm-svn: 356300
2019-03-15 21:17:53 +00:00
Nikita Popov 1a26144ff5 [AArch64] Turn BIC immediate creation into a DAG combine
Switch BIC immediate creation for vector ANDs from custom lowering
to a DAG combine, which gives generic DAG combines a change to
apply first. In particular this avoids (and x, -1) being turned into
a (bic x, 0) instead of being eliminated entirely.

Differential Revision: https://reviews.llvm.org/D59187

llvm-svn: 356299
2019-03-15 21:04:34 +00:00
Changpeng Fang 989ec59c9f AMDGPU: Fix a SIAnnotateControlFlow issue when there are multiple backedges.
Summary:
At the exit of the loop, the compiler uses a register to remember and accumulate
the number of threads that have already exited. When all active threads exit the
loop, this register is used to restore the exec mask, and the execution continues
for the post loop code.

When there is a "continue" in the loop, the compiler made a mistake to reset the
register to 0 when the "continue" backedge is taken. This will result in some
threads not executing the post loop code as they are supposed to.

This patch fixes the issue.

Reviewers:
  nhaehnle, arsenm

Differential Revision:
  https://reviews.llvm.org/D59312

llvm-svn: 356298
2019-03-15 21:02:48 +00:00
Philip Reames 68a2e4d48b [SimplifyDemandedVec] Strengthen handling all undef lanes (particularly GEPs)
A change of two parts:
1) A generic enhancement for all callers of SDVE to exploit the fact that if all lanes are undef, the result is undef.
2) A GEP specific piece to strengthen/fix the vector index undef element handling, and call into the generic infrastructure when visiting the GEP.

The result is that we replace a vector gep with at least one undef in each lane with an undef. We can also do the same for vector intrinsics. Once the masked.load patch (D57372) has landed, I'll update to include call tests as well.

Differential Revision: https://reviews.llvm.org/D57468

llvm-svn: 356293
2019-03-15 19:54:06 +00:00
Simon Pilgrim d33e62c826 [X86][SSE] Fold scalar_to_vector(i64 anyext(x)) -> bitcast(scalar_to_vector(i32 anyext(x)))
Reduce the size of an any-extended i64 scalar_to_vector source to i32 - the any_extend nodes are often introduced by SimplifyDemandedBits.

llvm-svn: 356292
2019-03-15 19:14:28 +00:00
Sanjay Patel 052d1b7b66 [InstCombine] add tests for logic of NaN fcmps; NFC
llvm-svn: 356287
2019-03-15 18:14:25 +00:00
Philip Reames 6867b1f7de [tests] Add a test for constexpr mask as requested in D57372
llvm-svn: 356285
2019-03-15 18:06:32 +00:00
Sanjay Patel a70c9d49af [InstCombine] add tests for masked store/scatter; NFC
Baseline tests for D57247

llvm-svn: 356283
2019-03-15 18:00:28 +00:00
Amara Emerson d55016b276 [AArch64][GlobalISel] Regbankselect: Fix G_BUILD_VECTOR trying to use s16 gpr sources.
Since we can't insert s16 gprs as we don't have 16 bit GPR registers, we need to
teach RBS to assign them to the FPR bank so our selector works.

llvm-svn: 356282
2019-03-15 18:00:01 +00:00
Philip Reames d238bf7855 [X86][GlobalISEL] Support lowering aligned unordered atomics
The existing lowering code is accidentally correct for unordered atomics as far as I can tell. An unordered atomic has no memory ordering, and simply requires the actual load or store to be done as a single well aligned instruction. As such, relax the restriction while adding tests to ensure the lowering remains correct in the future.

Differential Revision: https://reviews.llvm.org/D57803

llvm-svn: 356280
2019-03-15 17:50:30 +00:00
Yonghong Song 44ed286a2f [BPF] handle external global properly
Previous commit 6bc58e6d3dbd ("[BPF] do not generate unused local/global types")
tried to exclude global variables from type generation. The condition is:
     if (Global.hasExternalLinkage())
       continue;
This is not right. It also excluded initialized globals.

The correct condition (from AssemblyWriter::printGlobal()) is:
  if (!GV->hasInitializer() && GV->hasExternalLinkage())
    Out << "external ";

Let us do the same in BTF type generation. Also added a test for it.

Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356279
2019-03-15 17:39:10 +00:00
Simon Pilgrim 90b700daf1 [AArch64] Regenerate build vector tests
llvm-svn: 356274
2019-03-15 17:17:37 +00:00
Simon Pilgrim 8fbe439345 [SelectionDAG] Add SimplifyDemandedBits handling for ISD::SCALAR_TO_VECTOR
Fixes a lot of constant folding mismatches between i686 and x86_64

llvm-svn: 356273
2019-03-15 17:00:55 +00:00
Simon Pilgrim 65165d54bb [X86] Add SimplifyDemandedBitsForTargetNode support for PINSRB/PINSRW
llvm-svn: 356270
2019-03-15 16:16:49 +00:00
Teresa Johnson 70ec64cb72 [ThinLTO] Restructure AliasSummary to contain ValueInfo of Aliasee
Summary:
The AliasSummary previously contained the AliaseeGUID, which was only
populated when reading the summary from bitcode. This patch changes it
to instead hold the ValueInfo of the aliasee, and always populates it.
This enables more efficient access to the ValueInfo (specifically in the
recent patch r352438 which needed to perform an index hash table lookup
using the aliasee GUID).

As noted in the comments in AliasSummary, we no longer technically need
to keep a pointer to the corresponding aliasee summary, since it could
be obtained by walking the list of summaries on the ValueInfo looking
for the summary in the same module. However, I am concerned that this
would be inefficient when walking through the index during the thin
link for various analyses. That can be reevaluated in the future.

By always populating this new field, we can remove the guard and special
handling for a 0 aliasee GUID when dumping the dot graph of the summary.

An additional improvement in this patch is when reading the summaries
from LLVM assembly we now set the AliaseeSummary field to the aliasee
summary in that same module, which makes it consistent with the behavior
when reading the summary from bitcode.

Reviewers: pcc, mehdi_amini

Subscribers: inglorion, eraman, steven_wu, dexonsmith, arphaman, llvm-commits

Differential Revision: https://reviews.llvm.org/D57470

llvm-svn: 356268
2019-03-15 15:11:38 +00:00
Simon Pilgrim 55e1330eda [Hexagon] Remove icmp undef from reduced tests
Pre-commit for D59363 (Add icmp UNDEF handling to SelectionDAG::FoldSetCC)

Approved by @kparzysz (Krzysztof Parzyszek)

llvm-svn: 356267
2019-03-15 15:07:44 +00:00
Mircea Trofin 2c3ab66539 [llvm] Skip over empty line table entries.
Summary:
This is similar to how addr2line handles consecutive entries with the
same address - pick the last one.
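
A hedged sketch of the "pick the last one" rule described above (plain C++ with hypothetical names, not the actual DWARF line-table code):

```cpp
#include <cstdint>
#include <vector>

struct Row {
  uint64_t Addr;
  unsigned Line;
};

// Rows appear in line-table order; when consecutive rows share the same
// address, the later row wins, mirroring addr2line's behaviour.
static unsigned lineForAddress(const std::vector<Row> &Rows, uint64_t Addr) {
  unsigned Line = 0; // 0 = not found
  for (const Row &R : Rows)
    if (R.Addr == Addr)
      Line = R.Line; // keep overwriting so the last matching entry wins
  return Line;
}

int main() {
  std::vector<Row> Rows = {{0x1000, 10}, {0x1000, 12}, {0x1004, 13}};
  return lineForAddress(Rows, 0x1000) == 12 ? 0 : 1;
}
```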

Reviewers: dblaikie, friss, JDevlieghere

Reviewed By: dblaikie

Subscribers: eugenis, vitalybuka, echristo, JDevlieghere, probinson, aprantl, hiraditya, rupprecht, jdoerfert, llvm-commits

Tags: #llvm, #debug-info

Differential Revision: https://reviews.llvm.org/D58952

llvm-svn: 356265
2019-03-15 15:00:12 +00:00
Mikael Holmen 339daae806 [CodeGenPrepare] avoid crashing from replacing a phi twice
Summary:
This is a fix to bug 41052:
https://bugs.llvm.org/show_bug.cgi?id=41052

While trying to optimize a memory instruction in a dead basic block, we end up registering the same phi for replacement twice. This patch avoids registering more than the first replacement candidate for a phi.

Patch by: JesperAntonsson

Reviewers: skatkov, aprantl

Reviewed By: aprantl

Subscribers: jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59358

llvm-svn: 356260
2019-03-15 13:51:05 +00:00
Sam Parker f82d4ed771 [ARM] Remove EarlyCSE from backend
There is an issue with early CSE hitting an assert, so temporarily
remove the pass from the Arm backend.
    
Bug: https://bugs.llvm.org/show_bug.cgi?id=41081

Differential Revision: https://reviews.llvm.org/D59410

llvm-svn: 356259
2019-03-15 13:36:37 +00:00
Michael Liao 6883d7e192 [AMDGPU] Fix SGPR fixing through SCC chaining
Summary:
- During the fixing of SGPR copying from VGPR, ensure users of SCC are
  properly propagated, i.e.
  * only propagate through live def of SCC,
  * skip the SCC-def inst itself, and
  * stop the propagation on the other SCC-def inst after checking its
    SCC-use first.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59362

llvm-svn: 356258
2019-03-15 12:42:21 +00:00
Florian Hahn 728293ac87 [LSR] Update test from rL356256 after rebase.
llvm-svn: 356257
2019-03-15 12:37:50 +00:00
Florian Hahn d9e88f7b7f [LSR] Check for signed overflow in NarrowSearchSpaceByDetectingSupersets.
We are adding a sign extended IR value to an int64_t, which can cause
signed overflows, as in the attached test case, where we have a formula
with BaseOffset = -1 and a constant with numeric_limits<int64_t>::min().

If the addition would overflow, skip the simplification for this
formula. Note that the target triple is required to trigger the failure.
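
A hedged sketch of the kind of pre-addition signed-overflow guard described above (the actual patch uses LLVM's own helpers; this is only to illustrate the check):

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Returns true if a + b would overflow int64_t, so the caller can skip the
// simplification for that formula instead of computing a wrapped sum.
static bool addWouldOverflow(int64_t a, int64_t b) {
  if (b > 0 && a > std::numeric_limits<int64_t>::max() - b)
    return true;
  if (b < 0 && a < std::numeric_limits<int64_t>::min() - b)
    return true;
  return false;
}

int main() {
  // BaseOffset = -1 plus INT64_MIN, as in the description, would overflow.
  assert(addWouldOverflow(std::numeric_limits<int64_t>::min(), -1));
  assert(!addWouldOverflow(-1, std::numeric_limits<int64_t>::min() + 1));
}
```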

Reviewers: qcolombet, gilr, kparzysz, efriedma

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D59211

llvm-svn: 356256
2019-03-15 12:17:36 +00:00
Simon Pilgrim 398f9bb434 [SPARC] Regenerate label test for D59363
llvm-svn: 356253
2019-03-15 11:24:17 +00:00
Simon Pilgrim 22bebcbbbf [ARM] Remove icmp undef from reduced tests
Pre-commit for D59363 (Add icmp UNDEF handling to SelectionDAG::FoldSetCC)

Approved by @efriedma (Eli Friedman)

llvm-svn: 356252
2019-03-15 11:14:59 +00:00
Simon Pilgrim 918d0c2ba6 [WebAssembly] Remove icmp undef in stackify test
Pre-commit for D59363 (Add icmp UNDEF handling to SelectionDAG::FoldSetCC)

Approved by @tlively (Thomas Lively)

llvm-svn: 356251
2019-03-15 11:13:26 +00:00
Simon Pilgrim 0ad17402a9 [X86][SSE] Attempt to convert SSE shift-by-var to shift-by-imm.
Prep work for PR40203

llvm-svn: 356249
2019-03-15 11:05:42 +00:00
James Henderson b10f48bbb4 [yaml2obj]Allow explicit setting of p_filesz, p_memsz, and p_offset
yaml2obj currently derives the p_filesz, p_memsz, and p_offset values of
program headers from their sections. This makes writing tests for
certain formats more complex, and sometimes impossible. This patch
allows setting these fields explicitly, overriding the default value,
when relevant.

Reviewed by: jakehehrlich, Higuoxing

Differential Revision: https://reviews.llvm.org/D59372

llvm-svn: 356247
2019-03-15 10:35:27 +00:00
Sam Parker 9e73020bfa [ARM][ParallelDSP] Disable for big-endian
Bail early when we don't have a preheader and also if the target is
big endian, because the pass is written with only little endian in mind!

Differential Revision: https://reviews.llvm.org/D59368

llvm-svn: 356243
2019-03-15 10:19:32 +00:00
Petar Avramovic 3e0da146ac [MIPS GlobalISel] Improve selection of constants
Certain 32 bit constants can be generated with a single instruction
instead of two. Implement materialize32BitImm function for MIPS32.

Differential Revision: https://reviews.llvm.org/D59369

llvm-svn: 356238
2019-03-15 07:07:50 +00:00
Yonghong Song cacac05aca [BPF] do not generate unused local/global types
The kernel currently has a limit of 64KB for the number of types and
64KB for the size of the string subsection. A simple bcc tool
runqlat.py generates:
  . the size of ~33KB type section, roughly ~10K types
  . the size of ~17KB string section

The majority of types come from the types referenced by local
variables in the bpf program. For example, the kernel "task_struct"
itself recursively brings in ~900 other types.
This patch did the following optimization to avoid generating
unused types:
  . do not generate types for local variables unless they are
    function arguments.
  . do not generate types for external globals.

If an external global is not used in the program, llvm
already removes it from the IR, so the savings from global variables are
typically small. For runqlat.py, only one variable "llvm.used"
is the external global.

The types for locals and external globals can be added back
once there is a usage for them.

After the above optimization, the runqlat.py generates:
  . the size of ~1.5KB type section, roughly 500 types
  . the size of ~0.7KB string section

UPDATE:
  resubmitted the patch after previous revert with
  the following fix:
  use Global.hasExternalLinkage() to test "external"
  linkage instead of using Global.getInitializer(),
  which will assert on external variables.

Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356234
2019-03-15 05:51:25 +00:00
Yonghong Song bf3a279bce Revert "[BPF] do not generate unused local/global types"
This reverts commit r356232.

Reason: test failure with ASSERT on enabled build.
llvm-svn: 356233
2019-03-15 05:02:19 +00:00
Yonghong Song 5664d4c8ca [BPF] do not generate unused local/global types
The kernel currently has a limit of 64KB for the number of types and
64KB for the size of the string subsection. A simple bcc tool
runqlat.py generates:
  . the size of ~33KB type section, roughly ~10K types
  . the size of ~17KB string section

The majority of types come from the types referenced by local
variables in the bpf program. For example, the kernel "task_struct"
itself recursively brings in ~900 other types.
This patch did the following optimization to avoid generating
unused types:
  . do not generate types for local variables unless they are
    function arguments.
  . do not generate types for external globals.

If an external global is not used in the program, llvm
already removes it from the IR, so the savings from global variables are
typically small. For runqlat.py, only one variable "llvm.used"
is the external global.

The types for locals and external globals can be added back
once there is a usage for them.

After the above optimization, the runqlat.py generates:
  . the size of ~1.5KB type section, roughly 500 types
  . the size of ~0.7KB string section

Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356232
2019-03-15 04:42:01 +00:00
Matt Arsenault 1d83670dbd AMDGPU: Remove intrinsic operand assert
Before r355981, this was under LLVM_DEBUG. I don't think the assert is
quite right, but this really should be a verifier check. Instcombine
should not be asserting on this sort of thing.

llvm-svn: 356219
2019-03-14 23:45:09 +00:00
Sanjay Patel 2c9275a790 [CGP] add another bailout for degenerate code (PR41064)
This is almost the same as:
rL355345
...and should prevent any potential crashing from examples like:
https://bugs.llvm.org/show_bug.cgi?id=41064
...although the bug was masked by:
rL355823
...and I'm not sure how to repro the problem after that change.

llvm-svn: 356218
2019-03-14 23:14:31 +00:00
Paul Robinson 96c1f2cd6c Tighten up tests that use -debugify as a shortcut. NFC
These now verify that a given instruction has a specific source
location, rather than any old location. We want to make sure we
propagate the correct locations from one instruction to another.

llvm-svn: 356217
2019-03-14 23:09:17 +00:00
Eli Friedman fb26c329af [MC] Sort FDEs by the associated CIE before emitting them.
This isn't necessary according to the DWARF standard, but it matches the
.eh_frame sections emitted by other tools in practice, and the Android
libunwindstack rejects .eh_frame sections where an FDE refers to a CIE
other than the closest previous CIE. So match the other tools and also
sort accordingly.

I consider this a bug in libunwindstack, but it's easy enough to emit
a compatible .eh_frame section for compatibility with installed
operating systems.

Differential Revision: https://reviews.llvm.org/D58266

llvm-svn: 356216
2019-03-14 23:08:19 +00:00
Matt Arsenault bc6d07ca46 MIR: Allow targets to serialize MachineFunctionInfo
This has been a very painful missing feature that has made producing
reduced testcases difficult. In particular the various registers
determined for stack access during function lowering were necessary to
avoid undefined register errors in a large percentage of
cases. Implement a subset of the important fields that need to be
preserved for AMDGPU.

Most of the changes are to support targets parsing register fields and
properly reporting errors. The biggest sort-of bug remaining is that
fields that can be initialized from the IR section will be overwritten
by a default-initialized machineFunctionInfo section. Another
remaining bug is the machineFunctionInfo section is still printed even
if empty.

llvm-svn: 356215
2019-03-14 22:54:43 +00:00
Jessica Paquette 7d6784f522 [AArch64][GlobalISel] Add isel support for G_UADDO on s32s and s64s
This adds instruction selection support for G_UADDO on s32s and s64s.

Also
- Add an instruction selection test
- Update the arm64-xaluo.ll test to show that we generate the correct assembly

Differential Revision: https://reviews.llvm.org/D58734

llvm-svn: 356214
2019-03-14 22:54:29 +00:00
Amara Emerson d61b89be8d [AArch64][GlobalISel] Implement selection for G_UNMERGE of vectors to vectors.
This re-uses the previous support for extract vector elt to extract the
subvectors.

Differential Revision: https://reviews.llvm.org/D59390

llvm-svn: 356213
2019-03-14 22:48:18 +00:00
Amara Emerson 2ff2298c3e [AArch64][GlobalISel] Add some support for G_CONCAT_VECTORS.
Handles concatenating 2 x v2s32 and 2 x v4s16

Differential Revision: https://reviews.llvm.org/D59390

llvm-svn: 356212
2019-03-14 22:48:15 +00:00
Jordan Rupprecht 12ed01dcf9 [llvm-strip] Hook up (unimplemented) --only-keep-debug
For ELF, we accept but ignore --only-keep-debug. Do the same for llvm-strip.

COFF does implement this, so update the test to reflect that it is supported.

llvm-svn: 356207
2019-03-14 21:51:42 +00:00
Adrian Prantl 07b97492d4 Add test I forgot to git-add in r356163.
llvm-svn: 356205
2019-03-14 21:23:52 +00:00
Nikita Popov 48eb21ee5f [InstCombine] Add tests for range-based saturing math overflow; NFC
Tests for cases where overflow can be determined, but not based on
known bits.

llvm-svn: 356203
2019-03-14 21:06:46 +00:00
Pete Couperus 9fd1848823 [ARC] Add more load/store variants.
On the ARC ISA, the general format of a load instruction is this:

    LD<zz><.x><.aa><.di> a, [b,c]
And the general format of a store is this:
    ST<zz><.aa><.di> c, [b,s9]
Where:

<zz> is data size field and can be one of
  <empty> (bits 00) - Word (32-bit), default behavior
  B             (bits 01) - Byte
  H             (bits 10) - Half-word (16-bit)

 <.x> is data extend mode:
  <empty> (bit 0) - If size is not Word(32-bit), then data is zero extended
  X       (bit 1) - If size is not Word(32-bit), then data is sign extended

 <.aa> is address write-back mode:
  <empty> (bits 00) - no write-back
  .AW  (bits 01) - Preincrement, base register updated pre memory transaction
  .AB  (bits 10) - Postincrement, base register updated post memory transaction

 <.di> is cache bypass mode:
  <empty> (bit 0) - Cached memory access, default mode
  .DI     (bit 1) - Non-cached data memory access

  This patch adds these load/store instruction variants to the ARC backend.

Patch By Denis Antrushin! <denis@synopsys.com>

Differential Revision: https://reviews.llvm.org/D58980

llvm-svn: 356200
2019-03-14 20:50:54 +00:00
Sanjay Patel 38f07b1966 [InstCombine] remove duplicate tests
These got accidentally doubled with rL356191.

llvm-svn: 356195
2019-03-14 19:41:21 +00:00
Sanjay Patel de1d5d3675 [InstCombine] canonicalize funnel shift constant shift amount to be modulo bitwidth
The shift argument is defined to be modulo the bitwidth, so if that argument
is a constant, we can always reduce the constant to its minimal form to allow
better CSE and other follow-on transforms.
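
As a plain C++ sketch of the semantics being relied on here (an
illustrative reference model, not code from this patch), an i8 funnel
shift by 11 behaves exactly like a funnel shift by 3:

```
#include <cstdint>

// Hypothetical reference model of an 8-bit fshl: the shift amount is taken
// modulo the bit width, so fshl8(x, y, 11) == fshl8(x, y, 3).
uint8_t fshl8(uint8_t x, uint8_t y, unsigned amt) {
  amt %= 8;                        // the modulo this canonicalization applies
  if (amt == 0)
    return x;                      // avoid shifting by the full bit width
  return (uint8_t)((x << amt) | (y >> (8 - amt)));
}
```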

We need to be careful to ignore constant expressions here, or we will likely
infinite loop. I'm adding a general vector constant query for that case.

Differential Revision: https://reviews.llvm.org/D59374

llvm-svn: 356192
2019-03-14 19:22:08 +00:00
Sanjay Patel 6e86216531 [InstCombine] add tests for funnel shift constant shift amount mod bitwidth; NFC
llvm-svn: 356191
2019-03-14 19:22:00 +00:00
Philip Reames 81abc7fb0c [Tests] Add tests to demonstrate hoisting of unordered invariant loads
llvm-svn: 356184
2019-03-14 18:06:15 +00:00
Philip Reames 9616cf0510 [Tests] Revert an accident change to a test
llvm-svn: 356183
2019-03-14 18:02:19 +00:00
Jessica Paquette 5aff1f475c [GlobalISel][AArch64] Add partial selection support for G_INSERT_VECTOR_ELT
This adds support for inserting elements into packed vectors. It also adds
two tests: one for selection, and one for regbank select.

Unpacked vectors will come in a follow-up.

Differential Revision: https://reviews.llvm.org/D59325

llvm-svn: 356182
2019-03-14 18:01:30 +00:00
Philip Reames c53f02a32a Auto-generate an existing test to make it easier to update
llvm-svn: 356181
2019-03-14 17:59:59 +00:00
Max Moroz a80d9ce5cf Speeding up llvm-cov export with multithreaded renderFiles implementation.
Summary:
CoverageExporterJson::renderFiles accounts for most of the execution time given a large profdata file with multiple binaries.

The proposed solution is to generate JSON for each file in parallel and sort at the end to preserve deterministic output. Also added flags to skip generating parts of the output to trim the output size.
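
A minimal C++ sketch of that pattern (illustrative only, not the llvm-cov
code): render each file's fragment on its own task, then sort the finished
fragments by filename so the emitted JSON stays deterministic.

```
#include <algorithm>
#include <future>
#include <string>
#include <utility>
#include <vector>

// Illustrative only: per-file rendering runs in parallel; sorting afterwards
// keeps the output order independent of task completion order.
std::vector<std::string> renderAll(const std::vector<std::string> &Files) {
  std::vector<std::future<std::pair<std::string, std::string>>> Jobs;
  for (const auto &F : Files)
    Jobs.push_back(std::async(std::launch::async, [F] {
      return std::make_pair(F, "{\"filename\":\"" + F + "\"}");
    }));
  std::vector<std::pair<std::string, std::string>> Fragments;
  for (auto &J : Jobs)
    Fragments.push_back(J.get());
  std::sort(Fragments.begin(), Fragments.end());
  std::vector<std::string> Out;
  for (auto &P : Fragments)
    Out.push_back(P.second);
  return Out;
}
```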

Patch by Sajjad Mirza (@sajjadm).

Reviewers: Dor1s, vsk

Reviewed By: Dor1s, vsk

Subscribers: liaoyuke, mgrang, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59277

llvm-svn: 356178
2019-03-14 17:49:27 +00:00
Sanjay Patel 43570a0a62 [InstCombine] add tests for funnel shift constant shift amount mod bitwidth; NFC
llvm-svn: 356175
2019-03-14 17:39:40 +00:00
Philip Reames af41b282c5 [Tests] Add tests for reordering of unordered atomics on invariant locations
llvm-svn: 356172
2019-03-14 17:36:58 +00:00
Philip Reames 70d156991c Allow code motion (and thus folding) for atomic (but unordered) memory operands
Building on the work done in D57601, now that we can distinguish between atomic and volatile memory accesses, go ahead and allow code motion of unordered atomics. As seen in the diffs, this allows much better folding of memory operations into using instructions. (Mostly done by the PeepholeOpt pass.)

Note: I have not reviewed all callers of hasOrderedMemoryRef since one of them - isSafeToMove - is very widely used. I'm relying on the documented semantics of each method to judge correctness.

Differential Revision: https://reviews.llvm.org/D59345

llvm-svn: 356170
2019-03-14 17:20:59 +00:00
Philip Reames 8dd9b54d9b [Tests] Add negative folding tests w/fences as requested in D59345
llvm-svn: 356165
2019-03-14 17:05:18 +00:00
Craig Topper c747ac3f93 [X86] Fix the pattern changes from r356121 so that the ROR*r1/ROR*m1 pattern use the rotr opcode.
These instructions used to use rotl with a bitwidth-1 immediate. I changed the immediate to 1,
but failed to change the opcode.

Thankfully this seems to have not caused a functional issue because we now had two rotl by 1 patterns,
but the correct ones were earlier and took priority. So we just missed some optimization.

llvm-svn: 356164
2019-03-14 16:53:24 +00:00
Adrian Prantl e69917f166 Add IR debug info support for Elemental, Pure, and Recursive Procedures.
Patch by Eric Schweitz!

Differential Revision: https://reviews.llvm.org/D54043

llvm-svn: 356163
2019-03-14 16:29:54 +00:00
Sam Parker 0a833d0ad2 [NFC][ARM] Update test
Change some regex to handle commutable instructions. 

llvm-svn: 356159
2019-03-14 15:36:54 +00:00
Sanjay Patel 5d1df114e8 [x86] prevent infinite looping from vselect commutation (PR41066)
This is an immediate fix for:
https://bugs.llvm.org/show_bug.cgi?id=41066
...but as noted there and the code comments, we should do better
by stubbing this out sooner.

llvm-svn: 356158
2019-03-14 15:32:34 +00:00
Matt Arsenault 133716929c GlobalISel: Use multiple returns for intrinsic structs
This is consistent with what SelectionDAG does and is much easier to
work with than the extract sequence with an artificial wide register.

For the AMDGPU control flow intrinsics, this was producing an s128 for
the i64, i1 tuple return. Any legalization that should apply to a real
s128 value would badly obscure the direct values that need to be seen.

llvm-svn: 356147
2019-03-14 14:18:56 +00:00
Matt Arsenault 4e3e4016bf ARM: Add ImmArg to intrinsics
I found these by asserting in clang for any GCCBuiltin that doesn't
require mangling and requires a constant for the builtin. This means
that intrinsics are missing which don't use GCCBuiltin, don't have
builtins defined in clang, or were missing the constant annotation in
the builtin definition.

llvm-svn: 356144
2019-03-14 13:46:14 +00:00
Simon Pilgrim 238a94c4b6 [SystemZ] Remove icmp undef
Prep-work for PR40800 (Add UNDEF handling to SelectionDAG::FoldSetCC)

llvm-svn: 356138
2019-03-14 11:56:41 +00:00
Simon Pilgrim 5bcd59bc84 [SystemZ] Regenerate tests to make complete codegen more obvious
llvm-svn: 356137
2019-03-14 11:54:46 +00:00
James Henderson b5de5e25de [llvm-objcopy]Don't implicitly strip sections in segments
This patch changes llvm-objcopy's behaviour to not strip sections that
are in segments, if they otherwise would be due to a stripping operation
(--strip-all, --strip-sections, --strip-non-alloc). This preserves the
segment contents. It does not change the behaviour of --strip-all-gnu
(although we could choose to do so), because GNU objcopy's behaviour in
this case seems to be to strip the section, nor does it prevent removal
of sections in segments with --remove-section (if a user REALLY wants to
remove a section, we should probably let them, although I could be
persuaded that a warning might be appropriate). Tests have been added to
show this latter behaviour.

This fixes https://bugs.llvm.org/show_bug.cgi?id=41006.

Reviewed by: grimar, rupprecht, jakehehrlich

Differential Revision: https://reviews.llvm.org/D59293

This is a reland of r356129, attempting to fix greendragon failures
due to a suspected compatibility issue with od on the greendragon bots
versus other versions.

llvm-svn: 356136
2019-03-14 11:47:41 +00:00
James Henderson e81f5f91b4 Revert r356129 due to greendragon bot failures
llvm-svn: 356133
2019-03-14 11:23:04 +00:00
Sam Parker 4c4ff13d3c [ARM][ParallelDSP] Enable multiple uses of loads
When choosing whether a pair of loads can be combined into a single
wide load, we check that the load only has a sext user and that sext
also only has one user. But this can prevent the transformation in
the cases when parallel macs use the same loaded data multiple times.
    
To enable this, we need to fix up any other uses after creating the
wide load: generating a trunc and a shift + trunc pair to recreate
the narrow values. We also need to keep a record of which loads have
already been widened.
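
A plain C++ illustration of that fix-up (not the pass itself): once the two
narrow loads are replaced by one wide load, the original narrow values are
recreated with a truncate for the low half and a shift plus truncate for the
high half.

```
#include <cstdint>

// Illustrative only: recreate the two i16 values from a single widened i32
// load (little-endian layout assumed).
void recreateNarrowValues(uint32_t Wide, int16_t &Lo, int16_t &Hi) {
  Lo = (int16_t)Wide;           // trunc to the low half
  Hi = (int16_t)(Wide >> 16);   // lshr 16 + trunc for the high half
}
```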

Differential Revision: https://reviews.llvm.org/D59215

llvm-svn: 356132
2019-03-14 11:14:13 +00:00
Sam Parker 3b2ba20afd [ARM] Run ARMParallelDSP in the IRPasses phase
Run EarlyCSE before ParallelDSP and do this in the backend IR opt
phase.

Differential Revision: https://reviews.llvm.org/D59257

llvm-svn: 356130
2019-03-14 10:57:40 +00:00
James Henderson c03a95d465 [llvm-objcopy]Don't implicitly strip sections in segments
This patch changes llvm-objcopy's behaviour to not strip sections that
are in segments, if they otherwise would be due to a stripping operation
(--strip-all, --strip-sections, --strip-non-alloc). This preserves the
segment contents. It does not change the behaviour of --strip-all-gnu
(although we could choose to do so), because GNU objcopy's behaviour in
this case seems to be to strip the section, nor does it prevent removal
of sections in segments with --remove-section (if a user REALLY wants to
remove a section, we should probably let them, although I could be
persuaded that a warning might be appropriate). Tests have been added to
show this latter behaviour.

This fixes https://bugs.llvm.org/show_bug.cgi?id=41006.

Reviewed by: grimar, rupprecht, jakehehrlich

Differential Revision: https://reviews.llvm.org/D59293

llvm-svn: 356129
2019-03-14 10:20:27 +00:00
Alex Bradbury d08ed38e08 [RISCV] Extend test/CodeGen/RISCV/callee-saved-* to test getCalleePreservedRegs
Add a caller which exhausts regs then calls another function. This allows
getCalleePreservedRegs to be tested.

llvm-svn: 356122
2019-03-14 08:17:44 +00:00
Craig Topper 54a0b53308 [X86] Add patterns for rotr by immediate to fix PR41057.
Prior to the introduction of funnel shift intrinsics we could count on rotate
by immediates preferring to use rotl since that's what MatchRotate would check
first. The or+shift pattern doesn't have a direction so one must be chosen
arbitrarily.

With funnel shift, there is a direction and fshr will try to use rotr first.
While fshl will try to use rotl first.
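
For reference, a plain C++ sketch of the two rotate directions (illustrative
helpers, not the isel patterns themselves); rotr32(x, k) produces the same
value as rotl32(x, 32 - k), and this patch adds patterns for the rotr form so
it is matched directly.

```
#include <cstdint>

// Illustrative rotate helpers for 32-bit values.
uint32_t rotl32(uint32_t x, unsigned k) {
  k &= 31;
  return (x << k) | (x >> ((32 - k) & 31));
}
uint32_t rotr32(uint32_t x, unsigned k) {
  k &= 31;
  return (x >> k) | (x << ((32 - k) & 31));
}
```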

This patch adds the isel patterns for rotr to complement the rotl patterns. I've
put the rotr by 1 patterns in the instruction patterns. And moved the rotl by
bitwidth-1 patterns to separate Pat patterns.

Fixes PR41057.

llvm-svn: 356121
2019-03-14 07:07:26 +00:00
Craig Topper c867847016 [X86] Add various test cases for PR41057. NFC
llvm-svn: 356120
2019-03-14 07:07:24 +00:00
Quentin Colombet e77e5f44b8 [GlobalISel][Utils] Add a getConstantVRegVal variant that looks through instrs
getConstantVRegVal used to only look for G_CONSTANT when looking at
unboxing the value of a vreg. However, constants are sometimes not
directly used and are hidden behind trunc, s|zext or copy chain of
computation.

In particular, this may be introduced by the legalization process, which
doesn't want to simplify these patterns because it can lead to an infinite
loop when legalizing a constant.

To circumvent that problem, add a new variant of getConstantVRegVal,
named getConstantVRegValWithLookThrough, that allows looking through
extensions.

Differential Revision: https://reviews.llvm.org/D59227

llvm-svn: 356116
2019-03-14 01:37:13 +00:00
Douglas Yung 591040adc2 Fixup tests to check for any MCInst number instead of a specific one.
llvm-svn: 356115
2019-03-14 01:24:35 +00:00
Craig Topper fad96a1588 [X86] Add 64-bit mode command lines to rot32.ll so that it will demonstrate PR41055 for 32 bit. NFC
llvm-svn: 356112
2019-03-14 00:23:31 +00:00
Jordan Rupprecht 42bc1e241c [llvm-objcopy] Cleanup errors from CopyConfig and remove llvm-objcopy.h dependency
error() was previously cleaned up from CopyConfig, but new uses were introduced.

This also tweaks the error message for --add-symbol to report all invalid flags.

llvm-svn: 356105
2019-03-13 22:26:01 +00:00
Matt Arsenault 0253620f89 Verifier: Make sure masked load/store alignment is a power of 2
The same should also be done for scatter/gather, but the verifier
doesn't check those at all now.

llvm-svn: 356094
2019-03-13 19:46:34 +00:00
Matt Arsenault 24e249ec01 SystemZ: Add ImmArg to intrinsics
I found these by asserting in clang for any GCCBuiltin that doesn't
require mangling and requires a constant for the builtin. This means
that intrinsics are missing which don't use GCCBuiltin, don't have
builtins defined in clang, or were missing the constant annotation in
the builtin definition.

llvm-svn: 356091
2019-03-13 19:46:32 +00:00
Matt Arsenault 88dc015a92 Mips: Add ImmArg to intrinsics
I found these by asserting in clang for any GCCBuiltin that doesn't
require mangling and requires a constant for the builtin. This means
that intrinsics are missing which don't use GCCBuiltin, don't have
builtins defined in clang, or were missing the constant annotation in
the builtin definition.

I'm not sure what's going on with the immediates.ll test. It seems to
be intended to test invalid cases like this, but then tries to handle
some of them anyway. I've moved the cases that were inconsistent with
the GCCBuiltin definition so they don't test the codegen anymore.

llvm-svn: 356085
2019-03-13 19:07:59 +00:00
Simon Pilgrim e15cd7909b [X86] Remove icmp undef in more reduced tests
llvm-svn: 356084
2019-03-13 19:07:54 +00:00
Simon Pilgrim 8f1b825068 [X86] Regenerate tail call tests
llvm-svn: 356083
2019-03-13 19:04:45 +00:00
Tim Renouf ed0b9af997 [AMDGPU] Switched HSA metadata to use MsgPackDocument
Summary:
MsgPackDocument is the lighter-weight replacement for MsgPackTypes. This
commit switches AMDGPU HSA metadata processing to use MsgPackDocument
instead of MsgPackTypes.

Differential Revision: https://reviews.llvm.org/D57024

Change-Id: I0751668013abe8c87db01db1170831a76079b3a6
llvm-svn: 356081
2019-03-13 18:55:50 +00:00
Craig Topper 84abec2855 [X86] Check for 64-bit mode in X86Subtarget::hasCmpxchg16b()
The feature flag alone can't be trusted since it can be passed via -mattr. Need to ensure 64-bit mode as well.

We had a 64 bit mode check on the instruction to make the assembler work correctly. But we weren't guarding any of our lowering code or the hooks for the AtomicExpandPass.
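
The resulting guard presumably looks roughly like the sketch below
(illustrative struct and member names, not copied from X86Subtarget.h):

```
// Illustrative only: the feature bit is trusted only together with 64-bit mode.
struct SubtargetSketch {
  bool HasCmpxchg16b = false;   // may be set via -mattr even on a 32-bit target
  bool Is64Bit = false;
  bool hasCmpxchg16b() const { return HasCmpxchg16b && Is64Bit; }
};
```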

I've added 32-bit command lines to atomic128.ll with and without cx16. The tests there would all previously fail if -mattr=cx16 was passed to them. I had to move one test case for f128 to a new file as it seems to have a different 32-bit mode or possibly sse issue.

Differential Revision: https://reviews.llvm.org/D59308

llvm-svn: 356078
2019-03-13 18:48:50 +00:00
Simon Pilgrim e1be3403ff [X86] Avoid icmp undef in reduced tests
Because we don't currently simplify icmp with undef in DAG, bugpoint loves to introduce them during reduction.

This is a small step towards re-adding non-undef values into some of the simpler tests so that they should still test correctly and emit similar/same codegen.

Prep work for PR40800 ([SelectionDAG] Add UNDEF handling to SelectionDAG::FoldSetCC).

llvm-svn: 356076
2019-03-13 18:36:59 +00:00
Alex Bradbury bd1c56648f [RISCV] Regenerate test/CodeGen/RISCV/legalize-fneg.ll after rL356068
rL356068 caused some minor re-orderings. Regenerate legalize-fneg.ll to
reflect this, and remove the NOLIB check lines (they're redundant given that
the RV32I and RV64I check lines generated by update_llc_test_checks.py already
demonstrate there is no libcall).

llvm-svn: 356074
2019-03-13 18:25:23 +00:00
Simon Pilgrim 510f26dca8 Regenerate test
llvm-svn: 356071
2019-03-13 18:18:24 +00:00
Nirav Dave d6351340bb [DAGCombiner] If a TokenFactor would be merged into its user, consider the user later.
Summary:
A number of optimizations are inhibited by single-use TokenFactors not
being merged into the TokenFactor using it. This makes us consider if
we can do the merge immediately.

Most tests changes here are due to the change in visitation causing
minor reorderings and associated reassociation of paired memory
operations.

CodeGen tests with non-reordering changes:

  X86/aligned-variadic.ll -- memory-based add folded into stored leaq
  value.

  X86/constant-combiners.ll -- Optimizes out overlap between stores.

  X86/pr40631_deadstore_elision -- folds constant byte store into
  preceding quad word constant store.

Reviewers: RKSimon, craig.topper, spatel, efriedma, courbet

Reviewed By: courbet

Subscribers: dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, javed.absar, eraman, hiraditya, kbarton, jrtc27, atanasyan, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59260

llvm-svn: 356068
2019-03-13 17:07:09 +00:00
Simon Pilgrim bef4fe056d [X86][AVX] Add X86ISD::VTRUNC handling to SimplifyDemandedVectorEltsForTargetNode
llvm-svn: 356067
2019-03-13 17:00:18 +00:00
Simon Pilgrim d9aa879b67 [X86][AVX] Add combineConcatVectors support to improve subvector handling
Attempt to combine CONCAT_VECTORS nodes, which we only really have pre-legalization.

This encourages a lot of X86ISD::SUBV_BROADCAST generation, so I've added SimplifyDemandedVectorEltsForTargetNode handling for this at the same time.

The X86ISD::VTRUNC regression in shuffle-vs-trunc-256-widen.ll will be handled in a future commit.

llvm-svn: 356064
2019-03-13 16:37:30 +00:00
Alex Bradbury 8a70468a27 [RISCV] Only mark fp as reserved if the function has a dedicated frame pointer
This follows similar logic in the ARM and Mips backends, and allows the free
use of s0 in functions without a dedicated frame pointer. The changes in
callee-saved-gprs.ll most clearly show the effect of this patch.

llvm-svn: 356063
2019-03-13 16:33:45 +00:00
Alex Bradbury 7d546aba6c [RISCV] Add tests for callee-saved GPRs, FPR32s, and FPR64s
Note that s0 need not be marked reserved if the frame pointer isn't used. For
the ILP32 and LP64 soft float ABIs that are currently supported, all FPRs are
always considered temporaries.

llvm-svn: 356061
2019-03-13 16:14:16 +00:00
Sander de Smalen 72fc7b842c [AArch64] Add test/CodeGen/AArch64/vecreduce-fadd.ll
This test is added to see the difference created by:

  https://reviews.llvm.org/D59259

llvm-svn: 356054
2019-03-13 15:18:27 +00:00
Sanjay Patel 0a251e4076 [x86] limit extractelement of setcc to pre-legalization
A fuzzer found the crasher:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=13700

The bug was introduced recently here:
rL355741

This is the quick fix. If we need to do this transform
later, then we'd have to extend/truncate the vector setcc
element type to the scalar setcc type (i8). 

llvm-svn: 356053
2019-03-13 14:49:52 +00:00
Simon Atanasyan 9bfd140ddb [mips] Fix encoding of the `mov.d` command for microMIPS R6
Before this change LLVM emits the non-microMIPS variant of the `mov.d`
command for microMIPS code.

Differential Revision: http://reviews.llvm.org/D59045

llvm-svn: 356052
2019-03-13 14:23:12 +00:00
Clement Courbet 3bb5d0bb9b Re-land r354244 "[DAGCombiner] Eliminate dead stores to stack."
Always check candidates for hasOtherUses(), not only stores.

llvm-svn: 356050
2019-03-13 13:56:23 +00:00
Simon Atanasyan b9d9e0be3c [mips] Map SW instruction to its microMIPS R6 variant
To provide a mapping between the standard and microMIPS R6 variants of the
`sw` command we have to rename SWSP_xxx commands from "sw" to "swsp".
Otherwise `tablegen` starts to show the error `Multiple matches found
for `SW'`. After that, to restore printing the SWSP command as `sw`, I add
an appropriate `MipsInstAlias` instance.

We also need to implement "size reduction" for microMIPS R6. But this
task is for a separate patch. After that the `micromips-lwsp-swsp.ll` test
case will be extended.

Differential Revision: http://reviews.llvm.org/D59046

llvm-svn: 356045
2019-03-13 13:09:30 +00:00
Alex Bradbury 192df587d1 [RISCV] Regenerate umulo-128-legalisation-lowering.ll
Upstream changes have improved codegen, reducing stack usage. Regenerate the test.

llvm-svn: 356044
2019-03-13 12:33:44 +00:00
Simon Pilgrim 7abbd70300 [X86][AVX] lowerShuffleAsBroadcast - improve load folding by avoiding bitcasts
AVX1 broadcasts were failing as we were adding bitcasts that caused MayFoldLoad's hasOneUse to return false.

This patch stops introducing bitcasts so early and also replaces the broadcast index scaling through bitcasts (which can't succeed in some cases) to instead just keep track of the bitoffset which can be converted back to the broadcast index later on.

Differential Revision: https://reviews.llvm.org/D58888

llvm-svn: 356043
2019-03-13 12:20:39 +00:00
Simon Pilgrim 360ce82db2 [DAG] Move integer setcc %x, %x folding into FoldSetCC
First step towards PR40800 - I intend to move the float case in a separate future patch.

I had to tweak the (overly reduced) thumb2 test and the x86 widening test change is annoying (no longer rematerializable) but we should address this separately.

Differential Revision: https://reviews.llvm.org/D59244

llvm-svn: 356040
2019-03-13 11:08:57 +00:00
Simon Atanasyan c2b975a75c [MIPS][microMIPS] Fix PseudoMTLOHI_MM matching and expansion
On micromips MipsMTLOHI is always matched to PseudoMTLOHI_DSP regardless
of the +dsp argument. This patch checks if the HasDSP predicate is present for
PseudoMTLOHI_DSP so that PseudoMTLOHI_MM can be matched when appropriate.

Add expansion of PseudoMTLOHI_MM instruction into a mtlo/mthi pair.

Patch by Mirko Brkusanin.

Differential Revision: http://reviews.llvm.org/D59203

llvm-svn: 356039
2019-03-13 11:04:38 +00:00
Simon Atanasyan c711002041 [mips] Fix CPU used in the test case to suppress warning. NFC
The MSA ASE used in the test case requires MIPS32 revision 5 or
greater while the test uses MIPS32 revision 1.

llvm-svn: 356038
2019-03-13 11:04:28 +00:00
Jonas Hahnfeld c64d73cce2 [ELF] Fix GCC8 warnings about "fall through", NFCI
Add break statements in Object/ELF.cpp since the code should consider the
generic tags for Hexagon, MIPS, and PPC. Add a test (copied from llvm-readobj)
to show that this works correctly (earlier versions of this patch would have
asserted).

The warnings in X86ELFObjectWriter.cpp are actually false-positives since
the nested switch() handles all possible values and returns in all cases.
Make this explicit by adding llvm_unreachable's.

Differential Revision: https://reviews.llvm.org/D58837

llvm-svn: 356037
2019-03-13 10:38:17 +00:00
Philip Reames 21a50ccf9c [ImplicitNullChecks] Support unordered atomic accesses
Update the INC pass to allow folding unordered atomics.  This is the first optimization unblocked by the changes landed from D57601.

llvm-svn: 356006
2019-03-13 03:25:20 +00:00
Philip Reames 80ccc88869 [Tests] Expand implicit null check coverage
llvm-svn: 356004
2019-03-13 03:17:58 +00:00
Evgeniy Stepanov 6e64a14804 Revert "[llvm] Skip over empty line table entries."
This reverts commit r355972.
See the discussion at https://reviews.llvm.org/D58952.

llvm-svn: 356001
2019-03-13 01:37:58 +00:00
Craig Topper 750efba67c [X86] Enable printAliasInstr for the Intel assembly printer so that AAM and AAD will print without an immediate when the immediate is 10.
llvm-svn: 355997
2019-03-13 00:43:03 +00:00
Heejin Ahn 8b49b6bed6 [WebAssembly] Place 'try' and 'catch' correctly wrt EH_LABELs
Summary:
After the instruction selection phase, possibly-throwing calls, which were
previously invokes, are wrapped in `EH_LABEL` instructions. For example:
```
  EH_LABEL <mcsymbol .Ltmp0>
  CALL_VOID @foo ...
  EH_LABEL <mcsymbol .Ltmp1>
```

`EH_LABEL` is placed also in the beginning of EH pads:
```
bb.1 (landing-pad):
  EH_LABEL <mcsymbol .Ltmp2>
  ...
```

And we'd like to maintain this relationship, so when we place a `try`,
```
  TRY ...
  EH_LABEL <mcsymbol .Ltmp0>
  CALL_VOID @foo ...
  EH_LABEL <mcsymbol .Ltmp1>
```

When we place a `catch`,
```
bb.1 (landing-pad):
  EH_LABEL <mcsymbol .Ltmp2>
  %0:except_ref = CATCH ...
  ...
```

Previously we didn't treat EH_LABELs specially, so `try` was placed
right before a call, and `catch` was placed in the beginning of an EH
pad.

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58914

llvm-svn: 355996
2019-03-13 00:37:31 +00:00
Craig Topper 9bae5ba076 [X86] Add ImmArg markings to intrinsics.
Remove test cases that checked for not crashing when immediate operands were passed a non-immediate value. These are now considered ill-formed in IR.

This was done by manually scanning the intrinsic file for llvm_i32_ty and llvm_i8_ty which are the predominant types we use for immediates. Most of them are on vector intrinsics. I might have missed some other intrinsics.

Differential Revision: https://reviews.llvm.org/D58302

llvm-svn: 355993
2019-03-12 23:48:07 +00:00
Francis Visoiu Mistrih dd42236c6c Reland "[Remarks] Add -foptimization-record-passes to filter remark emission"
Currently we have -Rpass for filtering the remarks that are displayed as
diagnostics, but when using -fsave-optimization-record, there is no way
to filter the remarks while generating them.

This adds support for filtering remarks by passes using a regex.
Ex: `clang -fsave-optimization-record -foptimization-record-passes=inline`

will only emit the remarks coming from the pass `inline`.

This adds:

* `-fsave-optimization-record` to the driver
* `-opt-record-passes` to cc1
* `-lto-pass-remarks-filter` to the LTOCodeGenerator
* `--opt-remarks-passes` to lld
* `-pass-remarks-filter` to llc, opt, llvm-lto, llvm-lto2
* `-opt-remarks-passes` to gold-plugin

Differential Revision: https://reviews.llvm.org/D59268

Original llvm-svn: 355964

llvm-svn: 355984
2019-03-12 21:22:27 +00:00
Philip Reames b760558517 [Test] Add tests for implicit null checks on atomic/volatile instructions
llvm-svn: 355983
2019-03-12 21:09:58 +00:00
Philip Reames 9134f84ba4 For faulting ops, include a comment w/the fault destination
A faulting_op is one that has specified behavior when a fault occurs, generally redirecting control flow to another location.  This change just adds a comment to the assembly output which makes it both human readable and machine checkable without having to parse the FaultMap section.  This is used to split a test file into two parts, so that I can (in a near future commit) easily extend the test file to demonstrate another case.

llvm-svn: 355982
2019-03-12 21:05:31 +00:00
Matt Arsenault caf1316f71 IR: Add immarg attribute
This indicates an intrinsic parameter is required to be a constant,
and should not be replaced with a non-constant value.

Add the attribute to all AMDGPU and generic intrinsics that comments
indicate it should apply to. I scanned other target intrinsics, but I
don't see any obvious comments indicating which arguments are intended
to be only immediates.

This breaks one questionable testcase for the autoupgrade. I'm unclear
on whether the autoupgrade is supposed to really handle declarations
which were never valid. The verifier fails because the attributes now
refer to a parameter past the end of the argument list.

llvm-svn: 355981
2019-03-12 21:02:54 +00:00
Francis Visoiu Mistrih 1d6c47ad2b Revert "[Remarks] Add -foptimization-record-passes to filter remark emission"
This reverts commit 20fff32b7d.

llvm-svn: 355976
2019-03-12 20:54:18 +00:00
Mircea Trofin 0c29402eb4 [llvm] Skip over empty line table entries.
Summary:
This is similar to how addr2line handles consecutive entries with the
same address - pick the last one.

Reviewers: dblaikie, friss, JDevlieghere

Reviewed By: dblaikie

Subscribers: ormris, echristo, JDevlieghere, probinson, aprantl, hiraditya, rupprecht, jdoerfert, llvm-commits

Tags: #llvm, #debug-info

Differential Revision: https://reviews.llvm.org/D58952

llvm-svn: 355972
2019-03-12 20:48:45 +00:00
Francis Visoiu Mistrih 20fff32b7d [Remarks] Add -foptimization-record-passes to filter remark emission
Currently we have -Rpass for filtering the remarks that are displayed as
diagnostics, but when using -fsave-optimization-record, there is no way
to filter the remarks while generating them.

This adds support for filtering remarks by passes using a regex.
Ex: `clang -fsave-optimization-record -foptimization-record-passes=inline`

will only emit the remarks coming from the pass `inline`.

This adds:

* `-fsave-optimization-record` to the driver
* `-opt-record-passes` to cc1
* `-lto-pass-remarks-filter` to the LTOCodeGenerator
* `--opt-remarks-passes` to lld
* `-pass-remarks-filter` to llc, opt, llvm-lto, llvm-lto2
* `-opt-remarks-passes` to gold-plugin

Differential Revision: https://reviews.llvm.org/D59268

llvm-svn: 355964
2019-03-12 20:28:50 +00:00
Philip Reames 9b6b4fac83 [SROA] Fix a crash when trying to convert a memset to an non-integral pointer type
The included test case currently crashes on tip of tree. Rather than adding a bailout, I chose to restructure the code so that the existing helper function could be used. Given that, the majority of the diff is NFC-ish, but the key difference is that canConvertValue returns false when only one side is a non-integral pointer.

Thanks to Cherry Zhang for the test case.

Differential Revision: https://reviews.llvm.org/D59000

llvm-svn: 355962
2019-03-12 20:15:05 +00:00
Sanjay Patel 737c27a9cd [x86] scalarize extractelement 0 of FP vselect
llvm-svn: 355955
2019-03-12 19:20:45 +00:00
Teresa Johnson 4ab0a9f0a4 [SCEV] Use depth limit for trunc analysis
Summary:
This fixes an extremely long compile time caused by recursive analysis
of truncs, which were not previously subject to any depth limits unlike
some of the other ops. I decided to use the same control used for
sext/zext, since the routines analyzing these are sometimes mutually
recursive with the trunc analysis.

Reviewers: mkazantsev, sanjoy

Subscribers: sanjoy, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58994

llvm-svn: 355949
2019-03-12 18:28:05 +00:00
Jinsong Ji 9dc2c1d564 Set useful flags for vector imm setting instructions
Vector imm setting instructions like XXLXORz/XXLXORspz/XXLXORdpz
should behave like LI8.

We should set corresponding flags to allow rematerialization and other
opts in LICM, RA, Scheduling etc.

Differential Revision: https://reviews.llvm.org/D58645

llvm-svn: 355948
2019-03-12 18:27:09 +00:00
Craig Topper 03e93f514a [SanitizerCoverage] Avoid splitting critical edges when destination is a basic block containing unreachable
This patch adds a new option to SplitAllCriticalEdges and uses it to avoid splitting critical edges when the destination basic block ends with unreachable. Otherwise if we split the critical edge, sanitizer coverage will instrument the new block that gets inserted for the split. But since this block itself shouldn't be reachable this is pointless. These basic blocks will stick around and generate assembly, but they don't end in sane control flow and might get placed at the end of the function. This makes it look like one function has code that flows into the next function.

This showed up while compiling the linux kernel with clang. The kernel has a tool called objtool that detected the code that appeared to flow from one function to the next. https://github.com/ClangBuiltLinux/linux/issues/351#issuecomment-461698884

Differential Revision: https://reviews.llvm.org/D57982

llvm-svn: 355947
2019-03-12 18:20:25 +00:00
Eli Friedman 74b6aae4e8 [RISCV][MC] Find matching pcrel_hi fixup in more cases.
If a symbol points to the end of a fragment, instead of searching for
fixups in that fragment, search in the next fragment.

Fixes a spurious assembler error with a subtarget change next to the "la"
pseudo-instruction, or its expanded equivalent.

Alternate proposal to fix the problem discussed in
https://reviews.llvm.org/D58759.

Testcase by Ana Pazos.

Differential Revision: https://reviews.llvm.org/D58943

llvm-svn: 355946
2019-03-12 18:14:16 +00:00
Jinsong Ji b6bfcfc847 [NFC][PowerPC] Update testcases using utils/update_llc_test_checks.py
llvm-svn: 355945
2019-03-12 17:55:32 +00:00
Jason Liu 8cf8bb1313 Test commit: add a blank line in test case ppc64-dq-expr.s
llvm-svn: 355942
2019-03-12 17:33:07 +00:00
James Henderson 9bc817a0ae [yaml2obj]Allow explicit symbol indexes in relocations and emit error for bad names
Prior to this change, the "Symbol" field of a relocation would always be
assumed to be a symbol name, and if no such symbol existed, the
relocation would reference index 0. This confused me when I tried to use
a literal symbol index in the field: since "0x1" was not a known symbol
name, the symbol index was set as 0. This change falls back to treating
unknown symbol names as integers, and emits an error if the name is not
found and the string is not an integer.

Note that the Symbol field is optional, so if a relocation doesn't
reference a symbol, it shouldn't be specified. The new error required a
number of test updates.

Reviewed by: grimar, ruiu
Differential Revision: https://reviews.llvm.org/D58510

llvm-svn: 355938
2019-03-12 17:00:25 +00:00
Nikita Popov 149bc099f6 [SDAG] Expand pow2 mulo using shifts
Expand MULO with constant power of two operand into a shift. The
overflow is checked with (x << shift) >> shift == x, where the right
shift will be logical for umulo and arithmetic for smulo (with
exception for multiplications by signed_min).
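
A worked C++ example of the unsigned case (illustrative only, not the SDAG
expansion itself): multiplying by 2^Shift becomes a left shift, and overflow
is detected by shifting back and comparing against the original operand.

```
#include <cstdint>

// Illustrative only: umulo by a power of two (2^Shift, assuming Shift < 64).
bool umulPow2Overflow(uint64_t X, unsigned Shift, uint64_t &Product) {
  Product = X << Shift;
  return (Product >> Shift) != X;   // logical right shift for the unsigned case
}
```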

Differential Revision: https://reviews.llvm.org/D59041

llvm-svn: 355937
2019-03-12 16:57:25 +00:00
Simon Pilgrim a6013c0286 Regenerate sign_extend.ll test.
This will change as part of the fix for the regressions in D58017.

llvm-svn: 355933
2019-03-12 16:00:59 +00:00
James Henderson b69a50115b [llvm-cxxfilt]Add test to show that empty lines can be handled
I recently discovered a bug in llvm-cxxfilt that was introduced in r353743 but
later fixed incidentally by r355031. Specifically, llvm-cxxfilt
was attempting to call .back() on an empty string any time there was a
new line in the input. This was causing a crash in my debug builds only.
This patch simply adds a test that explicitly tests that llvm-cxxfilt
handles empty lines correctly. It may pass under release builds under
the broken behaviour, but it fails at least in debug builds.
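
The bug class is easy to reproduce in isolation; a minimal C++ sketch
(hypothetical helper, not the llvm-cxxfilt source) of the kind of guard that
avoids it:

```
#include <string>

// Illustrative only: std::string::back() is undefined behaviour on an empty
// string, so per-line processing has to check for empty input lines first.
bool endsWithParen(const std::string &Line) {
  return !Line.empty() && Line.back() == ')';
}
```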

Reviewed by: mattd

Differential Revision: https://reviews.llvm.org/D58785

llvm-svn: 355929
2019-03-12 15:42:38 +00:00
James Henderson 662c043628 [FileCheck]Remove assertions that prevent matching an empty string at file start before CHECK-NEXT/SAME
This patch removes two assertions that were preventing writing of a test
that checked an empty line followed by some text. For example:

CHECK: {{^$}}
CHECK-NEXT: foo()

The assertion was because the current location the CHECK-NEXT was
scanning from was the start of the buffer. A similar issue occurred with
CHECK-SAME. These assertions don't protect against anything, as there is
already an error check that checks that CHECK-NEXT/EMPTY/SAME don't
appear first in the checks, and the following code works fine if the
pointer is at the start of the input.

Reviewed by: probinson, thopre, jdenny
Differential Revision: https://reviews.llvm.org/D58784

llvm-svn: 355928
2019-03-12 15:37:34 +00:00
Tim Northover 8935aca9c7 CodeGenPrep: preserve inbounds attribute when sinking GEPs.
Targets can potentially emit more efficient code if they know address
computations never overflow. For example ILP32 code on AArch64 (which only has
64-bit address computation) can ignore the possibility of overflow with this
extra information.

llvm-svn: 355926
2019-03-12 15:22:23 +00:00
Xing GUO eec3206a41 [llvm-readobj] Print symbol version when dumping relocations (PR31564)
Summary: This helps resolve https://bugs.llvm.org/show_bug.cgi?id=31564

Reviewers: jhenderson, grimar

Reviewed By: jhenderson

Subscribers: rupprecht, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59175

llvm-svn: 355922
2019-03-12 14:30:13 +00:00
Sam Parker a7ae60ac93 [ARM][NFC] Delete original smlad tests
Because I don't understand svn.

llvm-svn: 355908
2019-03-12 11:06:15 +00:00
Sam Parker 28e46e58db [ARM][NFC] Move smlad tests
Created a test/CodeGen/ARM/ParallelDSP folder.

llvm-svn: 355907
2019-03-12 11:01:11 +00:00
Fangrui Song f260967055 [SimplifyLibCalls] Fix comments about fputs, memchr, and s[n]printf. NFC
llvm-svn: 355905
2019-03-12 10:31:52 +00:00
Eugene Leviant 1e249caaec [CGP] Fix UB when GEP is bound to trivial PHINode
Differential revision: https://reviews.llvm.org/D59140

llvm-svn: 355904
2019-03-12 10:10:29 +00:00
David Stuttard 20ea21c6ed [AMDGPU] Add support for immediate operand for S_ENDPGM
Summary:
Add support for immediate operand in S_ENDPGM

Change-Id: I0c56a076a10980f719fb2a8f16407e9c301013f6

Reviewers: alexshap

Subscribers: qcolombet, arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, tpr, t-tye, eraman, arphaman, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59213

llvm-svn: 355902
2019-03-12 09:52:58 +00:00
Simon Tatham cdb7c31f0a [TableGen] Allow 2^63-1 and 2^63-2 as int literals.
These two values correspond to the 'Empty' and 'Tombstone' special
keys defined by DenseMapInfo<int64_t>, which means that neither one
can be used as a key in DenseMap<int64_t, anything>. Hence, if you try
to use either of those values as an int literal, IntInit::get() fails
an assertion when it tries to insert them into its static cache of
int-literal objects.

Fixed by replacing the DenseMap with a std::map, which doesn't intrude
on the space of legal values of the key type.
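
A minimal sketch of the failure mode (key values taken from the description
above; illustrative code, not the IntInit implementation): inserting either
reserved value into a DenseMap asserts in a debug build, while std::map
reserves no key values.

```
#include "llvm/ADT/DenseMap.h"
#include <cstdint>
#include <map>

void sketch() {
  llvm::DenseMap<int64_t, int> DM;
  // DM[0x7fffffffffffffffLL] = 1;  // 2^63-1: reserved Empty key, would assert
  // DM[0x7ffffffffffffffeLL] = 1;  // 2^63-2: reserved Tombstone key, would assert
  std::map<int64_t, int> M;
  M[0x7fffffffffffffffLL] = 1;      // fine: std::map has no reserved keys
}
```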

Reviewers: nhaehnle, hfinkel, javedabsar, efriedma

Reviewed By: efriedma

Subscribers: fhahn, efriedma, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59016

llvm-svn: 355900
2019-03-12 09:28:19 +00:00
Alex Bradbury c965d21f33 [RISCV] Add test cases for the lp64 ABI
These are closely modeled on similar tests for the ilp32 ABI. Like those
tests, we group together tests that should be common cross lp64, lp64+lp64f,
and lp64+lp64f+lp64d ABIs.

llvm-svn: 355899
2019-03-12 09:26:53 +00:00
Sanjoy Das 3f5ce18658 Reland "Relax constraints for reduction vectorization"
Change from original commit: move test (that uses an X86 triple) into the X86
subdirectory.

Original description:
Gating vectorizing reductions on *all* fastmath flags seems unnecessary;
`reassoc` should be sufficient.

Reviewers: tvvikram, mkuper, kristof.beyls, sdesmalen, Ayal

Reviewed By: sdesmalen

Subscribers: dcaballe, huntergr, jmolloy, mcrosier, jlebar, bixia, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D57728

llvm-svn: 355889
2019-03-12 01:31:44 +00:00
Nathan Lanza cc51dc649a Add Swift enumerator value for CodeView::SourceLanguage
Summary:
Swift now generates PDBs for debugging on Windows. llvm and lldb
need a language enumerator value to properly handle the output
emitted by swiftc.

Subscribers: jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59231

llvm-svn: 355882
2019-03-11 23:27:59 +00:00
Sanjoy Das 2136a5bc49 Revert "Relax constraints for reduction vectorization"
This reverts commit r355868.  Breaks hexagon.

llvm-svn: 355873
2019-03-11 22:37:31 +00:00
Jessica Paquette 607774c960 Recommit "[GlobalISel][AArch64] Add selection support for G_EXTRACT_VECTOR_ELT"
After r355865, we should be able to safely select G_EXTRACT_VECTOR_ELT without
running into any problematic intrinsics.

Also add a fix for lane copies, which don't support index 0.

llvm-svn: 355871
2019-03-11 22:18:01 +00:00
Evgeniy Stepanov aedec3f684 Remove ASan asm instrumentation.
Summary: It is incomplete and has no users AFAIK.

Reviewers: pcc, vitalybuka

Subscribers: srhines, kubamracek, mgorny, krytarowski, eraman, hiraditya, jdoerfert, #sanitizers, llvm-commits, thakis

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D59154

llvm-svn: 355870
2019-03-11 21:50:10 +00:00
Alex Bradbury 4d20cc21c7 [RISCV] Do a sign-extension in a compare-and-swap of 32 bit in RV64A
AtomicCmpSwapWithSuccess is legalised into an AtomicCmpSwap plus a comparison.
This requires an extension of the value which, by default, is a
zero-extension. When we later lower AtomicCmpSwap into a PseudoCmpXchg32, which is then expanded in
RISCVExpandPseudoInsts.cpp, the lr.w instruction does a sign-extension.

This mismatch of extensions causes the comparison to fail when the compared
value is negative. This change overrides TargetLowering::getExtendForAtomicOps
for RISC-V so it does a sign-extension instead.
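
A small C++ example of the mismatch (plain arithmetic, not backend code):
for a negative 32-bit expected value, the zero-extended operand and the
sign-extended result of lr.w differ in the upper 32 bits, so the 64-bit
comparison spuriously fails.

```
#include <cassert>
#include <cstdint>

int main() {
  int32_t Expected = -1;
  uint64_t ZExt = (uint32_t)Expected;          // 0x00000000FFFFFFFF
  uint64_t SExt = (uint64_t)(int64_t)Expected; // 0xFFFFFFFFFFFFFFFF, like lr.w
  assert(ZExt != SExt);                        // the compare sees different values
  return 0;
}
```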

Differential Revision: https://reviews.llvm.org/D58829
Patch by Ferran Pallarès Roca.

llvm-svn: 355869
2019-03-11 21:41:22 +00:00
Sanjoy Das 93f8cc186a Relax constraints for reduction vectorization
Summary:
Gating vectorizing reductions on *all* fastmath flags seems unnecessary;
`reassoc` should be sufficient.

Reviewers: tvvikram, mkuper, kristof.beyls, sdesmalen, Ayal

Reviewed By: sdesmalen

Subscribers: dcaballe, huntergr, jmolloy, mcrosier, jlebar, bixia, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D57728

llvm-svn: 355868
2019-03-11 21:36:41 +00:00
Alex Bradbury b6d322bdc2 [RISCV] Allow fp as an alias of s0
The RISC-V Assembly Programmer's Manual defines fp as another alias of x8.
However, our tablegen rules only recognise s0. This patch adds fp as another
alias of x8. GCC also accepts fp.

Differential Revision: https://reviews.llvm.org/D59209
Patch by Ferran Pallarès Roca.

llvm-svn: 355867
2019-03-11 21:35:26 +00:00
Jessica Paquette 42d16501e6 [GlobalISel][AArch64] Always fall back on aarch64.neon.addp.*
Overloaded intrinsics aren't necessarily safe for instruction selection. One
such intrinsic is aarch64.neon.addp.*.

This is a temporary workaround to ensure that we always fall back on that
intrinsic. Eventually this will be replaced with a proper solution.

https://bugs.llvm.org/show_bug.cgi?id=40968

Differential Revision: https://reviews.llvm.org/D59062

llvm-svn: 355865
2019-03-11 20:51:17 +00:00
Nico Weber 885b790f89 Remove esan.
It hasn't seen active development in years, and it hasn't reached a
state where it was useful.

Remove the code until someone is interested in working on it again.

Differential Revision: https://reviews.llvm.org/D59133

llvm-svn: 355862
2019-03-11 20:23:40 +00:00
Nikita Popov aa7cfa75f9 [SDAG][AArch64] Legalize VECREDUCE
Fixes https://bugs.llvm.org/show_bug.cgi?id=36796.

Implement basic legalizations (PromoteIntRes, PromoteIntOp,
ExpandIntRes, ScalarizeVecOp, WidenVecOp) for VECREDUCE opcodes.
There are more legalizations missing (esp float legalizations),
but there's no way to test them right now, so I'm not adding them.

This also includes a few more changes to make this work somewhat
reasonably:

 * Add support for expanding VECREDUCE in SDAG. Usually
   experimental.vector.reduce is expanded prior to codegen, but if the
   target does have native vector reduce, it may of course still be
   necessary to expand due to legalization issues. This uses a shuffle
   reduction if possible, followed by a naive scalar reduction.
 * Allow the result type of integer VECREDUCE to be larger than the
   vector element type. For example, we need to be able to reduce a v8i8
   into a (nominally) i32 result type on AArch64.
 * Use the vector operand type rather than the scalar result type to
   determine the action, so we can control exactly which vector types are
   supported. Also change the legalize vector op code to handle
   operations that only have vector operands, but no vector results, as
   is the case for VECREDUCE.
 * Default VECREDUCE to Expand. On AArch64 (only target using VECREDUCE),
   explicitly specify for which vector types the reductions are supported.

This does not handle anything related to VECREDUCE_STRICT_*.
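
As an illustration of the "shuffle reduction" strategy in plain C++ (a
sketch of the idea, not the SDAG code): fold the high half onto the low half
log2(N) times, then read element 0.

```
#include <cstdint>

// Illustrative only: an 8-element integer add reduction in three folding steps.
int32_t reduceAdd8(int32_t V[8]) {
  for (int Half = 4; Half >= 1; Half /= 2)   // 4, 2, 1
    for (int I = 0; I < Half; ++I)
      V[I] += V[I + Half];                   // "shuffle" the high half down, add
  return V[0];
}
```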

Differential Revision: https://reviews.llvm.org/D58015

llvm-svn: 355860
2019-03-11 20:22:13 +00:00
Jonas Paulsson 8b8dc50e79 [RegAlloc] Avoid compile time regression with multiple copy hints.
As a fix for https://bugs.llvm.org/show_bug.cgi?id=40986 ("excessive compile
time building opencollada"), this patch makes sure that no phys reg is hinted
more than once from getRegAllocationHints().

This handles the case where many virtual registers are assigned to the same
physreg. The previous compile time fix (r343686) in weightCalcHelper() only
made sure that physical/virtual registers are passed no more than once to
addRegAllocationHint().

Review: Dimitry Andric, Quentin Colombet
https://reviews.llvm.org/D59201

llvm-svn: 355854
2019-03-11 19:00:37 +00:00
Brian Gesiak d7b68132d8 [coroutines][PR40979] Ignore unreachable uses across suspend points
Summary:
Depends on https://reviews.llvm.org/D59069.

https://bugs.llvm.org/show_bug.cgi?id=40979 describes a bug in which the
-coro-split pass would assert that a use was across a suspend point from
a definition. Normally this would mean that a value would "spill" across
a suspend point and thus need to be stored in the coroutine frame. However,
in this case the use was unreachable, and so it would not be necessary
to store the definition on the frame.

To prevent the assert, simply remove unreachable basic blocks from a
coroutine function before computing spills. This avoids the assert
reported in PR40979.

Reviewers: GorNishanov, tks2103

Reviewed By: GorNishanov

Subscribers: EricWF, jdoerfert, llvm-commits, lewissbaker

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59068

llvm-svn: 355852
2019-03-11 18:31:28 +00:00
Michael Trent 76d66123b2 Detect malformed LC_LINKER_COMMANDs in Mach-O binaries
Summary:
llvm-objdump can be tricked into reading beyond valid memory and
segfaulting if LC_LINKER_COMMAND strings are not null terminated. libObject
does have code to validate the integrity of the LC_LINKER_COMMAND struct,
but this validator improperly assumes linker command strings are null
terminated.

The solution is to report an error if a string extends beyond the end of
the LC_LINKER_COMMAND struct.
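
A hedged C++ sketch of that check (illustrative parameters, not the
libObject implementation): the string must contain a NUL terminator before
the end of the load command.

```
#include <cstring>

// Illustrative only: reject a linker command string that is not terminated
// within the cmdsize bytes of its load command.
bool isLinkerStringTerminated(const char *Cmd, unsigned CmdSize,
                              unsigned StrOffset) {
  if (StrOffset >= CmdSize)
    return false;                                   // string starts out of bounds
  return std::memchr(Cmd + StrOffset, '\0', CmdSize - StrOffset) != nullptr;
}
```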

Reviewers: lhames, pete

Reviewed By: pete

Subscribers: rupprecht, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59179

llvm-svn: 355851
2019-03-11 18:29:25 +00:00
Simon Pilgrim 06ae025345 [X86] Extend widening comparison test.
Ensure we test both v2i16 unary and binary comparisons.

llvm-svn: 355849
2019-03-11 18:08:20 +00:00
Jeremy Morse 90ede5f4bf [SimplifyCFG] Retain debug info when threading jumps with critical edges
Fixes bug 38023: https://bugs.llvm.org/show_bug.cgi?id=38023

The SimplifyCFG pass will perform jump threading in some cases where
doing so is trivial and would simplify the CFG. When folding a series
of blocks with redundant conditional branches into an unconditional "critical
edge" block, it does not keep the debug location associated with the previous
conditional branch.

This patch fixes the bug described by copying the debug info from the
old conditional branch to the new unconditional branch instruction, and
adds a regression test for the SimplifyCFG pass that covers this case.

Patch by Stephen Tozer!

Differential Revision: https://reviews.llvm.org/D59206

llvm-svn: 355833
2019-03-11 16:23:59 +00:00
Petar Jovanovic 28e13eb098 [MIPS][microMIPS] Add a pattern to match TruncIntFP
A pattern needed to match TruncIntFP was missing. This was causing multiple
tests from the llvm test suite to fail during compilation for micromips.

Patch by Mirko Brkusanin.

Differential Revision: https://reviews.llvm.org/D58722

llvm-svn: 355825
2019-03-11 14:13:31 +00:00
Sam Parker 52760bf435 [CGP] Limit distance between overflow math and cmp
Inserting an overflowing arithmetic intrinsic can increase register
pressure by producing two values at a point where only one is needed,
while the second use may be several blocks away. This increase in
pressure is likely to be more detrimental on performance than
rematerialising one of the original instructions.
    
So, check that the arithmetic and compare instructions are no further
apart than their immediate successor/predecessor.

Differential Revision: https://reviews.llvm.org/D59024

llvm-svn: 355823
2019-03-11 13:19:46 +00:00
Jeremy Morse b60aea4131 [JumpThreading] Retain debug info when replacing branch instructions
Fixes bug 37966: https://bugs.llvm.org/show_bug.cgi?id=37966

The Jump Threading pass will replace certain conditional branch
instructions with unconditional branches when it can prove that only one
branch can occur. Prior to this patch, it would not carry the debug
info from the old instruction to the new one.

This patch fixes the bug described by copying the debug info from the
conditional branch instruction to the new unconditional branch
instruction, and adds a regression test for the Jump Threading pass that
covers this case.

Patch by Stephen Tozer!

Differential Revision: https://reviews.llvm.org/D58963

llvm-svn: 355822
2019-03-11 11:48:57 +00:00
George Rimar d8a5c6cf19 [llvm-objcopy] - Fix --compress-debug-sections when there are relocations.
When --compress-debug-sections is given,
llvm-objcopy removes the uncompressed sections and adds compressed ones to the section list.
This makes all the pointers to the old sections outdated.

Currently, the code already has logic for replacing the target sections of the relocation
sections. But we also have to update the relocations themselves.

This fixes https://bugs.llvm.org/show_bug.cgi?id=40885.

Differential revision: https://reviews.llvm.org/D58960

llvm-svn: 355821
2019-03-11 11:01:24 +00:00
Petar Avramovic 5229f47f9f [MIPS GlobalISel] NarrowScalar G_UMULH
NarrowScalar G_UMULH in LegalizerHelper 
using the multiplyRegisters helper function.
NarrowScalar G_UMULH for MIPS32.

Differential Revision: https://reviews.llvm.org/D58825

llvm-svn: 355815
2019-03-11 10:08:44 +00:00
Petar Avramovic 0b17e59b5c [MIPS GlobalISel] NarrowScalar G_MUL
Narrow Scalar G_MUL for MIPS32.
Revisit NarrowScalar implementation in LegalizerHelper.
Introduce new helper function multiplyRegisters.
It performs generic multiplication of values held in multiple registers.
Generated instructions use only types NarrowTy and i1.
The destination can be the same size as or twice the size of the source.
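
A plain C++ analogue of the problem multiplyRegisters solves (illustrative
only, not the GlobalISel code): multiplying two 64-bit values held as pairs
of 32-bit registers, where the cross terms only contribute to the high half.

```
#include <cstdint>

// Illustrative only: 64 x 64 -> 64 multiply built from 32-bit pieces.
void mul64From32(uint32_t ALo, uint32_t AHi, uint32_t BLo, uint32_t BHi,
                 uint32_t &RLo, uint32_t &RHi) {
  uint64_t LowProd = (uint64_t)ALo * BLo;   // full 32x32 -> 64 partial product
  RLo = (uint32_t)LowProd;
  RHi = (uint32_t)(LowProd >> 32) + ALo * BHi + AHi * BLo; // carry + cross terms
}
```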

Differential Revision: https://reviews.llvm.org/D58824

llvm-svn: 355814
2019-03-11 10:00:17 +00:00
Craig Topper 00afa193f1 [X86] Enable sse2_cvtsd2ss intrinsic to use an EVEX encoded instruction.
llvm-svn: 355810
2019-03-11 06:01:04 +00:00
Amaury Sechet a5820cbd20 Add test case for add to sub post legalization. NFC
llvm-svn: 355797
2019-03-11 01:25:48 +00:00
Sanjay Patel 26e06e859e [x86] add x86-specific opcodes to extractelement scalarization list
llvm-svn: 355792
2019-03-10 18:56:21 +00:00
Craig Topper 93e15dfacc [X86] Make lowering of intrinsics with rounding mode stricter so that only valid rounding modes are lowered. Update tests accordingly
Many of our tests were not using valid rounding mode immediates. Clang verifies this in the frontend when it creates the intrinsics from builtins, but the backend would still lower invalid immediates.

With this change we will now leave them as intrinsics if the immediate is invalid. This will cause an isel selection failure.

llvm-svn: 355789
2019-03-10 17:20:45 +00:00
Nikita Popov bfec0d610c [AArch64] Add tests for saddsat/ssubsat; NFC
Signed versions of the existing unsigned tests.

llvm-svn: 355787
2019-03-10 12:21:36 +00:00
Nikita Popov 506c1aba4d [ARM] Use non-constant operand in umulo-32.ll; NFC
Currently the store+load is folded and both operands of the umulo
end up being constants. To avoid this getting folded away entirely,
make sure at least one operand is non-constant.

Also remove some allocas which don't seem relevant to the test.

llvm-svn: 355776
2019-03-09 13:43:21 +00:00
Nikita Popov 74dde7e5a1 [ARM] Generate test checks for umulo-32.ll; NFC
The second test case is going to be changed by D59041, so generate
full baseline checks.

llvm-svn: 355775
2019-03-09 13:21:15 +00:00
Alex Bradbury fea4957177 [RISCV] Support -target-abi at the MC layer and for codegen
This patch adds proper handling of -target-abi, as accepted by llvm-mc and
llc. Lowering (codegen) for the hard-float ABIs will follow in a subsequent
patch. However, this patch does add MC layer support for the hard float and
RVE ABIs (emission of the appropriate ELF flags
https://github.com/riscv/riscv-elf-psabi-doc/blob/master/riscv-elf.md#-file-header).

ABI parsing must be shared between codegen and the MC layer, so we add
computeTargetABI to RISCVUtils. A warning will be printed if an invalid or
unrecognized ABI is given.

Differential Revision: https://reviews.llvm.org/D59023

llvm-svn: 355771
2019-03-09 09:28:06 +00:00
Thomas Lively 972d7d514b [WebAssembly] Use named operands to identify loads and stores
Summary:
Uses the named operands tablegen feature to look up the indices of
offset, address, and p2align operands for all load and store
instructions. This replaces brittle, incorrect logic for identifying
loads and store when eliminating frame indices, which previously
crashed on bulk-memory ops. It also cleans up the SetP2Alignment pass.

Reviewers: aheejin, dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, jfb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59007

llvm-svn: 355770
2019-03-09 04:31:37 +00:00
Sanjay Patel 40bcc3de7d [x86] add tests for extract of FP select; NFC
llvm-svn: 355768
2019-03-09 02:11:05 +00:00
Craig Topper 69f8c1653d [ScalarizeMaskedMemIntrin] Use IRBuilder functions that take uint32_t/uint64_t for getelementptr, extractelement, and insertelement.
This saves needing to call getInt32 ourselves, making the code a little shorter.

The test changes are because insert/extract use getInt64 internally. Shouldn't be a functional issue.

This is cleanup because I plan to write similar code for expandload/compressstore.

llvm-svn: 355767
2019-03-09 02:08:41 +00:00
Ana Pazos 5254d1baae [RISCV] Allow access to FP CSRs without F extension
Summary:
Floating-point CSRs should be accessible even when the F extension is not enabled.
But pseudo instructions that access floating point CSRs still require the F extension.
GNU tools already implement this behavior. The RISC-V spec is pending an update to reflect
this behavior and to extend it to pseudo instructions that access floating point CSRs.

Reviewers: asb

Reviewed By: asb

Subscribers: asb, rbar, johnrusso, simoncook, sabuasal, niosHD, kito-cheng, shiva0217, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, llvm-commits

Differential Revision: https://reviews.llvm.org/D58932

llvm-svn: 355753
2019-03-08 23:01:08 +00:00
Rong Xu ce3be45cac [CodeGenPrepare] Fix ModifiedDT flag in optimizeSelectInst
r44412 fixed a huge compile time regression but it needed the ModifiedDT flag to be
maintained correctly in optimizations in optimizeBlock() and optimizeInst().
Function optimizeSelectInst() does not update the flag.
This patch propagates the flag in optimizeSelectInst() back to
optimizeBlock().

This patch also removes ModifiedDT in the CodeGenPrepare class (which is not used).
The property of ModifiedDT is now recorded in a ref parameter.

Differential Revision: https://reviews.llvm.org/D59139

llvm-svn: 355751
2019-03-08 22:46:18 +00:00
Mitch Phillips 53d3994719 [Go / ASAN] Disable Go bindings for ASAN tests.
Go binding tests fail under ASAN with the error at the bottom of this
commit message. The reason the buildbots are not currently always
failing on this test is that they selectively disable the bindings due
to a Go binary not being present on their system.

This change should allow users to build an asan-bootstrapped compiler
and run asan-ified unit tests locally, similar to the way that
sanitizer-* buildbots do.

The error is:
```
FAIL: LLVM :: Bindings/Go/go.test (7050 of 30112)
******************** TEST 'LLVM :: Bindings/Go/go.test' FAILED ********************
Script:
--
: 'RUN: at line 1';   /usr/local/google/home/mitchp/llvm-build/asan/sanitized-clang/bin/llvm-go go=/usr/lib/google-golang/bin/go test llvm.org/llvm/bindings/go/llvm
--
Exit Code: 1

Command Output (stdout):
--
FAIL	llvm.org/llvm/bindings/go/llvm [build failed]

--
Command Output (stderr):
--
ld.lld: error: undefined symbol: std::allocator<char>::allocator()
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(LLVMAddDataFlowSanitizerPass)

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, std::allocator<char> const&)
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(LLVMAddDataFlowSanitizerPass)

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(LLVMAddDataFlowSanitizerPass)

ld.lld: error: undefined symbol: std::allocator<char>::~allocator()
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(LLVMAddDataFlowSanitizerPass)

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(LLVMAddDataFlowSanitizerPass)

ld.lld: error: undefined symbol: std::allocator<char>::~allocator()
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(LLVMAddDataFlowSanitizerPass)

ld.lld: error: undefined symbol: llvm::createDataFlowSanitizerPass(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, void* (*)(), void* (*)())
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(LLVMAddDataFlowSanitizerPass)

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(void std::_Destroy<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*))

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&)
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(void __gnu_cxx::new_allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::construct<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&))

ld.lld: error: undefined symbol: std::__throw_length_error(char const*)
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >::_M_check_len(unsigned long, char const*) const)

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&)
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(void std::_Construct<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&))

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()
>>> referenced by InstrumentationBindings.cpp
>>>               $WORK/b048/_x018.o:(void __gnu_cxx::new_allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::destroy<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*))

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string()
>>> referenced by SupportBindings.cpp
>>>               $WORK/b048/_x019.o:(LLVMLoadLibraryPermanently2)

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::size() const
>>> referenced by SupportBindings.cpp
>>>               $WORK/b048/_x019.o:(LLVMLoadLibraryPermanently2)

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::c_str() const
>>> referenced by SupportBindings.cpp
>>>               $WORK/b048/_x019.o:(LLVMLoadLibraryPermanently2)

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::size() const
>>> referenced by SupportBindings.cpp
>>>               $WORK/b048/_x019.o:(LLVMLoadLibraryPermanently2)

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()
>>> referenced by SupportBindings.cpp
>>>               $WORK/b048/_x019.o:(LLVMLoadLibraryPermanently2)

ld.lld: error: undefined symbol: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()
>>> referenced by SupportBindings.cpp
>>>               $WORK/b048/_x019.o:(LLVMLoadLibraryPermanently2)

ld.lld: error: undefined symbol: llvm::sys::DynamicLibrary::getPermanentLibrary(char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*)
>>> referenced by SupportBindings.cpp
>>>               $WORK/b048/_x019.o:(llvm::sys::DynamicLibrary::LoadLibraryPermanently(char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*))

ld.lld: error: undefined symbol: __asan_option_detect_stack_use_after_return
>>> referenced by MCJIT.cpp:45 (/usr/local/google/home/mitchp/llvm/llvm/lib/ExecutionEngine/MCJIT/MCJIT.cpp:45)
>>>               MCJIT.cpp.o:(llvm::MCJIT::createJIT(std::__1::unique_ptr<llvm::Module, std::__1::default_delete<llvm::Module> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*, std::__1::shared_ptr<llvm::MCJITMemoryManager>, std::__1::shared_ptr<llvm::LegacyJITSymbolResolver>, std::__1::unique_ptr<llvm::TargetMachine, std::__1::default_delete<llvm::TargetMachine> >)) in archive /usr/local/google/home/mitchp/llvm-build/asan/sanitized-clang/lib/libLLVMMCJIT.a

ld.lld: error: too many errors emitted, stopping now (use -error-limit=0 to see all errors)
clang-9: error: linker command failed with exit code 1 (use -v to see invocation)

--
```

llvm-svn: 355749
2019-03-08 22:34:33 +00:00
Amara Emerson 7a05d1c1f1 [AArch64][GlobalISel] Fix i1 arguments not being zero-extended as required by ABI.
Fixes PR41001.

llvm-svn: 355745
2019-03-08 22:17:00 +00:00
Sunil Srivastava ae8fe4e093 Improve "llvm-nm -f sysv" output for ELF files
Specifically, compute and print the Type and Section columns.

This is a re-commit of rL354833, after fixing the ASan problem found on a buildbot.

Differential Revision: https://reviews.llvm.org/D59060

llvm-svn: 355742
2019-03-08 22:00:50 +00:00
Sanjay Patel f84083b4db [x86] scalarize extract element 0 of FP cmp
An extension of D58282 noted in PR39665:
https://bugs.llvm.org/show_bug.cgi?id=39665

This doesn't answer the request to use movmsk, but that's an
independent problem. We need this and probably still need
scalarization of FP selects because we can't do that as a
target-independent transform (although it seems likely that
targets besides x86 should have this transform).

llvm-svn: 355741
2019-03-08 21:54:41 +00:00
Alexey Bataev a8b3eb46b5 [NVPTX][DEBUGINFO]Temp workaround for crash of ptxas: disable packed bytes in debug sections.
Summary:
This patch works around a bug in the ptxas tool in its processing of bytes
separated by the comma symbol. The emission of the packed string is
temporarily disabled.

Reviewers: tra

Subscribers: jholewinski, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59148

llvm-svn: 355740
2019-03-08 21:29:17 +00:00
Mitch Phillips 790edbc16e [HWASan] Save + print registers when tag mismatch occurs in AArch64.
Summary:
This change changes the instrumentation to allow users to view the registers at the point at which the tag mismatch occurred. Most of the heavy lifting is done in the runtime library, where we save the registers to the stack and emit unwind information. This allows us to reduce the overhead, as very little additional work needs to be done in each __hwasan_check instance.

In this implementation, the fast path of __hwasan_check is unmodified. There are an additional 4 instructions (16B) emitted in the slow path in every __hwasan_check instance. This may increase binary size somewhat, but as most of the work is done in the runtime library, it's manageable.

The failure trace now contains a list of registers at the point at which the failure occurred, in a format similar to that of Android's tombstones. It currently has the following format:

Registers where the failure occurred (pc 0x0055555561b4):
    x0  0000000000000014  x1  0000007ffffff6c0  x2  1100007ffffff6d0  x3  12000056ffffe025
    x4  0000007fff800000  x5  0000000000000014  x6  0000007fff800000  x7  0000000000000001
    x8  12000056ffffe020  x9  0200007700000000  x10 0200007700000000  x11 0000000000000000
    x12 0000007fffffdde0  x13 0000000000000000  x14 02b65b01f7a97490  x15 0000000000000000
    x16 0000007fb77376b8  x17 0000000000000012  x18 0000007fb7ed6000  x19 0000005555556078
    x20 0000007ffffff768  x21 0000007ffffff778  x22 0000000000000001  x23 0000000000000000
    x24 0000000000000000  x25 0000000000000000  x26 0000000000000000  x27 0000000000000000
    x28 0000000000000000  x29 0000007ffffff6f0  x30 00000055555561b4

... and prints after the dump of memory tags around the buggy address.

Every register is saved exactly as it was at the point where the tag mismatch occurs, with the exception of x16/x17. These registers are used in the tag mismatch calculation as scratch registers during __hwasan_check, and cannot be saved without affecting the fast path. As these registers are designated as scratch registers for linking, there should be no important information in them that could aid in debugging.

Reviewers: pcc, eugenis

Reviewed By: pcc, eugenis

Subscribers: srhines, kubamracek, mgorny, javed.absar, krytarowski, kristof.beyls, hiraditya, jdoerfert, llvm-commits, #sanitizers

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D58857

llvm-svn: 355738
2019-03-08 21:22:35 +00:00
Matt Arsenault e8c03a2511 AMDGPU: Move d16 load matching to preprocess step
When matching half of the build_vector to a load, there could still be
a hidden dependency on the other half of the build_vector the pattern
wouldn't detect. If there was an additional chain dependency on the
other value, a cycle could be introduced.

I don't think a tablegen pattern is capable of matching the necessary
conditions, so move this into PreprocessISelDAG. Check isPredecessorOf
for the other value to avoid a cycle. This has a warning that it's
expensive, so this should probably be moved into an MI pass eventually
that will have more freedom to reorder instructions to help match
this. That is currently complicated by the lack of a computeKnownBits
type mechanism for the selected function.

llvm-svn: 355731
2019-03-08 20:58:11 +00:00
Matt Arsenault 26e76ef0e2 DAG: Don't try to cluster loads with tied inputs
This avoids breaking possible value dependencies when sorting loads by
offset.

AMDGPU has some load instructions that write into the high or low bits
of the destination register, and have a tied input for the other input
bits. These can easily have the same base pointer, but be a swizzle so
the high address load needs to come first. This was inserting glue
forcing the opposite ordering, producing a cycle the InstrEmitter
would assert on. It may be potentially expensive to look for the
dependency between the other loads, so just skip any where this could
happen.

Fixes bug 40936 by reverting r351379, which added a hacky attempt to
fix this by adding chains in this case, which I think was just working
around broken glue before the InstrEmitter. The core of the patch is
re-implementing the fix for that problem.

llvm-svn: 355728
2019-03-08 20:46:15 +00:00
Sanjay Patel 43f098e719 [x86] add tests for extracted vector FP cmp; NFC
llvm-svn: 355727
2019-03-08 20:45:27 +00:00
Matt Arsenault 74c9c305e0 AMDGPU: Add more tests for d16 loads
Also fix a few cases that weren't testing what they were supposed to.

llvm-svn: 355724
2019-03-08 20:30:51 +00:00
Matt Arsenault 07f904befb AMDGPU: Correct DS implementation of areLoadsFromSameBasePtr
This was checking the wrong operands for the base register and the
offsets. The indexes are shifted by the number of output registers
from the machine instruction definition, and the chain is moved to the
end.

llvm-svn: 355722
2019-03-08 20:30:50 +00:00
Alexey Bataev 78fcb8381f [DEBUG_INFO][NVPTX]Emit empty .debug_loc section in presence of the debug option.
Summary:
If the LLVM module shows that it has debug info, but the file is
actually empty and the real debug info is not emitted, the ptxas tool
emits error 'Debug information not found in presence of .target debug'.
We need at least one empty debug section to silence this message. Section
`.debug_loc` is not emitted for PTX, so we can emit an empty `.debug_loc`
section if the `debug` option is used.

Reviewers: tra

Subscribers: jholewinski, aprantl, llvm-commits

Differential Revision: https://reviews.llvm.org/D57250

llvm-svn: 355719
2019-03-08 20:08:04 +00:00
Amaury Sechet 782ac933b5 [DAGCombiner] fold (add (add (xor a, -1), b), 1) -> (sub b, a)
Summary: This pattern is sometimes created after legalization.
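
A quick sanity check of the underlying identity (in two's complement,
~a == -a - 1, so ((~a) + b) + 1 == b - a):

```
#include <cassert>
#include <cstdint>

int main() {
  for (uint32_t a : {0u, 1u, 7u, 0x80000000u, 0xffffffffu})
    for (uint32_t b : {0u, 3u, 0xdeadbeefu})
      assert(((~a + b) + 1) == b - a); // (add (add (xor a, -1), b), 1) == (sub b, a)
}
```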

Reviewers: efriedma, spatel, RKSimon, zvi, bkramer

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58874

llvm-svn: 355716
2019-03-08 19:39:32 +00:00
Sanjay Patel b22f438df3 [x86] prevent infinite looping from inverse shuffle transforms
llvm-svn: 355713
2019-03-08 19:20:28 +00:00
Simon Pilgrim 53652feab7 [X86] Add test case for PR22473
llvm-svn: 355712
2019-03-08 19:16:26 +00:00
Diogo N. Sampaio c20c37ba7f [ARM][FIX] Fix vfmal.f16 and vfmsl.f16 operand
The indexed variant of vfmal.f16 and vfmsl.f16
instructions use the uppser bits of the indexed
operand to store the index (1 bit for the double
variant, 2 bits for the quad).

This limits the usable registers to d0 - d7 or
s0 - s15. This patch enforces this limitation.

Differential Revision: https://reviews.llvm.org/D59021

llvm-svn: 355707
2019-03-08 17:11:20 +00:00
Simon Pilgrim 00ab0339ed Fix typo in constant vector
llvm-svn: 355699
2019-03-08 15:17:26 +00:00
James Henderson b41130bedc [llvm-readelf]Don't lose negative-ness of negative addends for no symbol relocations
llvm-readelf prints relocation addends as:

  <symbol value>[+-]<absolute addend>

where [+-] is determined from whether addend is less than zero or not.
However, it does not print the +/- if there is no symbol, which meant
that negative addends became their positive value with no indication
that this had happened. This patch stops the absolute conversion when
addends are negative and there is no associated symbol.

Reviewed by: Higuoxing, mattd, MaskRay

Differential Revision: https://reviews.llvm.org/D59095

llvm-svn: 355696
2019-03-08 13:22:05 +00:00
Clement Courbet 8e16d73346 [SelectionDAG] Allow the user to specify a memeq function.
Summary:
Right now, when we encounter a string equality check,
e.g. `if (memcmp(a, b, s) == 0)`, we try to expand to a comparison if `s` is a
small compile-time constant, and fall back on calling `memcmp()` otherwise.

This is sub-optimal because memcmp has to compute much more than
equality.

This patch replaces `memcmp(a, b, s) == 0` by `bcmp(a, b, s) == 0` on platforms
that support `bcmp`.

`bcmp` can be made much more efficient than `memcmp` because equality
compare is trivially parallel while lexicographic ordering has a chain
dependency.
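
As a rough illustration of the source-level effect (the rewrite itself happens
during SelectionDAG lowering, and bcmp availability depends on the platform):

```
#include <cstring>   // memcmp
#include <strings.h> // bcmp (POSIX)

bool equalBefore(const void *a, const void *b, size_t n) {
  return std::memcmp(a, b, n) == 0; // computes an ordering, then discards it
}

bool equalAfter(const void *a, const void *b, size_t n) {
  return bcmp(a, b, n) == 0;        // only has to answer "equal or not"
}
```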

Subscribers: fedor.sergeev, jyknight, ckennelly, gchatelet, llvm-commits

Differential Revision: https://reviews.llvm.org/D56593

llvm-svn: 355672
2019-03-08 09:07:45 +00:00
Carl Ritson 1a98dc1840 [AMDGPU] V_CVT_F32_UBYTE{0,1,2,3} are full rate instructions
Summary: Fix a bug in the scheduling model where V_CVT_F32_UBYTE{0,1,2,3} are incorrectly marked as quarter rate instructions.

Reviewers: arsenm, rampitec

Reviewed By: rampitec

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59091

llvm-svn: 355671
2019-03-08 09:03:11 +00:00
Craig Topper 4505c99e72 [X86] Improve the type checking in isLegalMaskedLoad and isLegalMaskedGather.
We were just checking the pointer size and the type's primitive size. But this caused unintended things like vectors of half being accepted by masked load/store.

For FP we now explicitly check for only double and float.

For pointers we now let any pointer through. Trusting that only 32 and 64 would be used to generate assembly.

We only check bitwidth after checking that the type is an integer.
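
A hedged sketch of the per-element check described above (not the exact
in-tree code, which also consults subtarget features):

```
#include "llvm/IR/Type.h"
using namespace llvm;

static bool isSupportedMaskedElementType(Type *ScalarTy) {
  if (ScalarTy->isPointerTy())
    return true;                                // any pointer width is let through
  if (ScalarTy->isFloatTy() || ScalarTy->isDoubleTy())
    return true;                                // f32/f64 only, so half is rejected
  if (!ScalarTy->isIntegerTy())
    return false;
  unsigned BW = ScalarTy->getIntegerBitWidth(); // width checked only for integers
  return BW == 32 || BW == 64;
}
```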

llvm-svn: 355667
2019-03-08 07:33:43 +00:00
Steven Wu ed98229286 [Bitcode] Fix bitcode compatibility issue with clang.arc.use intrinsic
Summary:
In r349534, the objc arc implementation was switched to use intrinsics, and at
the same time clang.arc.use was renamed to llvm.objc.clang.arc.use to
make the naming more consistent. The side-effect of that is that llvm no
longer recognizes it as an intrinsic and instead codegens external references
to it.

Rather than upgrade the old intrinsics name to the new one and wait for
the arc-contract pass to remove it, simply remove it in the bitcode
upgrader.

rdar://problem/48607063

Reviewers: pete, ahatanak, erik.pilkington, dexonsmith

Reviewed By: pete, dexonsmith

Subscribers: jkorous, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59112

llvm-svn: 355663
2019-03-08 05:27:53 +00:00
Sanjay Patel 5ed14ef1e4 [x86] add extract FP tests for target-specific nodes; NFC
llvm-svn: 355655
2019-03-07 23:55:54 +00:00
Craig Topper d0c2dba644 [X86] Correct scheduler information for rotate by constant for Haswell, Broadwell, and Skylake.
Rotate with explicit immediate is a single uop from Haswell on. An immediate of 1 has a dependency on the previous writer of flags, but the other immediate values do not.

The implicit rotate by 1 instruction is 2 uops. But the flags are merged after the rotate uop so the data result does not see the flag dependency. But I don't think we have any way of modeling that.

RORX is 1 uop without the load. 2 uops with the load. We currently model these with WriteShift/WriteShiftLd.

Differential Revision: https://reviews.llvm.org/D59077

llvm-svn: 355636
2019-03-07 21:22:56 +00:00
Craig Topper b3af5d3e57 [X86] Model ADC/SBB with immediate 0 more accurately in the Haswell scheduler model
Haswell and possibly Sandybridge have an optimization for ADC/SBB with immediate 0 to use a single uop flow. This only applies GR16/GR32/GR64 with an 8-bit immediate. It does not apply to GR8. It also does not apply to the implicit AX/EAX/RAX forms.

Differential Revision: https://reviews.llvm.org/D59058

llvm-svn: 355635
2019-03-07 21:22:51 +00:00
Konstantin Zhuravlyov 47f0bf8f1f AMDHSA: Code object v3 updates
- Copy kernel symbol attributes into kernel descriptor attributes
  - Make sure kernel symbol's visibility is not "higher" than protected

Differential Revision: https://reviews.llvm.org/D59057

llvm-svn: 355630
2019-03-07 19:58:29 +00:00
Matt Davis 6c5a49ccb9 [llvm-mca] Emit a message when no bottlenecks are identified.
Summary:
Since bottleneck hints are enabled via user request, it can be
confusing if no bottleneck information is presented.  Such is the
case when no bottlenecks are identified.  This patch emits a message
in that case.

Reviewers: andreadb

Reviewed By: andreadb

Subscribers: tschuett, gbedwell, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59098

llvm-svn: 355628
2019-03-07 19:34:44 +00:00
Vlad Tsyrklevich 2e1479e2f2 Delete x86_64 ShadowCallStack support
Summary:
ShadowCallStack on x86_64 suffered from the same racy security issues as
Return Flow Guard and had performance overhead as high as 13% depending
on the benchmark. x86_64 ShadowCallStack was always an experimental
feature and never shipped a runtime required to support it, as such
there are no expected downstream users.

Reviewers: pcc

Reviewed By: pcc

Subscribers: mgorny, javed.absar, hiraditya, jdoerfert, cfe-commits, #sanitizers, llvm-commits

Tags: #clang, #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D59034

llvm-svn: 355624
2019-03-07 18:56:36 +00:00
Florian Hahn 6ca0985aa5 [InterleavedAccessAnalysis] Fix integer overflow in insertMember.
Without checking for integer overflow, invalid members can be added,
e.g. if the calculated key overflows, becomes positive, and ends up as the largest key.
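
A minimal sketch of the kind of guard involved (hypothetical names; the
in-tree fix operates on the interleave group's member keys):

```
#include <cstdint>
#include <optional>

// Refuse the insertion if computing the member key would wrap around.
std::optional<int32_t> computeMemberKey(int32_t BaseKey, int32_t Delta) {
  int32_t Key;
  if (__builtin_add_overflow(BaseKey, Delta, &Key))
    return std::nullopt; // overflow: do not add the member
  return Key;
}
```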

This fixes
      https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=7560
      https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=13128
      https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=13229

Reviewers: Ayal, anna, hsaito, efriedma

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D55538

llvm-svn: 355613
2019-03-07 17:50:16 +00:00
Petar Jovanovic 95817d3641 [DebugInfo] Fix the type of the formated variable
Change the format type of *Personality and *LSDAAddress to PRIx64 since
they are of type uint64_t.
The problem was detected on mips builds, where it was printing junk values
and causing test failure.
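
For illustration, the portable way to print a uint64_t (the value below is
arbitrary):

```
#include <cinttypes>
#include <cstdio>

int main() {
  uint64_t Personality = 0x120000a0d0ULL;
  // "%lx" only matches uint64_t where long is 64 bits; on 32-bit targets it
  // reads garbage. PRIx64 expands to the correct conversion everywhere.
  std::printf("Personality: 0x%016" PRIx64 "\n", Personality);
}
```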

Patch by Milos Stojanovic.

Differential Revision: https://reviews.llvm.org/D58451

llvm-svn: 355607
2019-03-07 16:31:08 +00:00
Xing GUO eee6226c21 [llvm-readobj] Dump DT_USED value as string like GNU readelf does
Reviewers: jhenderson

Reviewed By: jhenderson

Subscribers: rupprecht, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59089

llvm-svn: 355600
2019-03-07 14:53:10 +00:00
David Green ffc922ec35 [LSR] Attempt to increase the accuracy of LSR's setup cost
In some loops, we end up generating loop induction variables that look like:
  {(-1 * (zext i16 (%i0 * %i1) to i32))<nsw>,+,1}
As opposed to the simpler:
  {(zext i16 (%i0 * %i1) to i32),+,-1}
i.e. we count up from -limit to 0, not the simpler counting down from limit to
0. This is because the scores, as LSR calculates them, are the same and the
second is filtered in place of the first. We end up with a redundant SUB from 0
in the code.
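
As a hand-written illustration of the two induction-variable shapes (not
compiler output; `limit` plays the role of `zext i16 (%i0 * %i1) to i32`):

```
#include <cstdint>

int32_t sumCountUp(const int32_t *data, uint16_t limit) {
  int32_t s = 0;
  for (int32_t i = -(int32_t)limit; i != 0; ++i) // extra negation to set up
    s += data[i + (int32_t)limit];
  return s;
}

int32_t sumCountDown(const int32_t *data, uint16_t limit) {
  int32_t s = 0;
  for (int32_t i = (int32_t)limit; i != 0; --i)  // simpler setup, same result
    s += data[(int32_t)limit - i];
  return s;
}
```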

This patch tries to make the calculation of the setup cost a little more
thorough, recursing into the SCEV members to better approximate the setup
required. The cost function for comparing LSR costs is:

return std::tie(C1.NumRegs, C1.AddRecCost, C1.NumIVMuls, C1.NumBaseAdds,
                C1.ScaleCost, C1.ImmCost, C1.SetupCost) <
       std::tie(C2.NumRegs, C2.AddRecCost, C2.NumIVMuls, C2.NumBaseAdds,
                C2.ScaleCost, C2.ImmCost, C2.SetupCost);
So this will only alter results if none of the other variables turn out to be
different.

Differential Revision: https://reviews.llvm.org/D58770

llvm-svn: 355597
2019-03-07 13:44:40 +00:00
Petar Avramovic 3d3120dc9a [MIPS GlobalISel] Fix mul operands
Unsigned mul high for MIPS32 is selected into two PseudoInstructions:
PseudoMULTu and PseudoMFHI that use accumulator register class ACC64 for
some of its operands. Registers in this class have appropriate hi and lo
register as subregisters: $lo0 and $hi0 are subregisters of $ac0 etc.
The mul instruction implicit-defs $lo0 and $hi0 according to MipsInstrInfo.td.
In functions where both mul and PseudoMULTu are present, fastRegisterAllocator
will "run out of registers during register allocation" because
'calcSpillCost' for $ac0 returns spillImpossible, since the subregisters
$lo0 and $hi0 of $ac0 are reserved by the mul instruction above. The solution is
to mark the implicit-defs of $lo0 and $hi0 as dead in the mul instruction.

Differential Revision: https://reviews.llvm.org/D58715

llvm-svn: 355594
2019-03-07 13:28:29 +00:00
George Rimar a5a0a0f049 [yaml2obj] - Allow producing ELFDATANONE ELFs
I need this to remove a binary from LLD test suite.
The patch also simplifies the code a bit.

Differential revision: https://reviews.llvm.org/D59082

llvm-svn: 355591
2019-03-07 12:09:19 +00:00
Craig Topper 3acc4236b8 [X86] Enable combineFMinNumFMaxNum for 512 bit vectors when AVX512 is enabled.
Simplified by just checking if the vector type is legal rather than listing all combinations of types and features.

Fixes PR40984.

llvm-svn: 355582
2019-03-07 06:30:19 +00:00
Craig Topper a0dd6e9a08 [X86] Add 512-bit fminnum/maxnum test cases for PR40984. Also add v8f32 minnum/maxnum tests. NFC
llvm-svn: 355581
2019-03-07 05:56:52 +00:00
Aakanksha Patil c56d2afc63 AMDGPU: Handle "uniform-work-group-size" attribute (fix for RADV)
A previous patch for "uniform-work-group-size" attribute was found to break
some RADV and possibly radeon SI tests and had to be retracted.
This patch fixes that.

Differential Revision: http://reviews.llvm.org/D58993

llvm-svn: 355574
2019-03-07 00:54:04 +00:00
Nick Desaulniers 212c8ac23f [LoopRotate] fix crash encountered with callbr
Summary:
While implementing inlining support for callbr
(https://bugs.llvm.org/show_bug.cgi?id=40722), I hit a crash in Loop
Rotation when trying to build the entire x86 Linux kernel
(drivers/char/random.c). This is a small fix up to r353563.

Test case is drivers/char/random.c (with callbrs inlined), then ran
through creduce, then `opt -opt-bisect-limit=<limit>`, then bugpoint.

Thanks to Craig Topper for immediately spotting the fix, and teaching me
how to fish.

Reviewers: craig.topper, jyknight

Reviewed By: craig.topper

Subscribers: hiraditya, llvm-commits, srhines

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58929

llvm-svn: 355564
2019-03-06 23:04:40 +00:00
Simon Atanasyan 83b88441ad [mips] Replace assertion by error message while lowering `RETURNADDR` and `FRAMEADDR`
The MIPS target supports lowering `RETURNADDR` and `FRAMEADDR` for the current
frame only. It's better to show an error message than to crash on an assertion
if `__builtin_return_address` is invoked with a non-zero argument.
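
For example (standard Clang/GCC builtin; only level 0 is lowered on MIPS):

```
// Level 0 (the current frame) lowers as before; a non-zero level on MIPS now
// produces a proper error instead of tripping an assertion.
void *currentReturnAddress() { return __builtin_return_address(0); }
// void *callerReturnAddress() { return __builtin_return_address(1); } // rejected on MIPS
```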

llvm-svn: 355558
2019-03-06 22:40:28 +00:00
Rong Xu 3ee1524afc [PGO] Fix hexagon buildbot errors in r355541
Add "REQUIRES: x86-registered-target" to thinlto test cases.

llvm-svn: 355556
2019-03-06 22:16:47 +00:00
Abderrazek Zaafrani 5ced596198 [AArch64] Improve FP16 instruction selection for vector round and vector convert from half instructions
https://reviews.llvm.org/D58855

llvm-svn: 355545
2019-03-06 20:30:06 +00:00
Nikita Popov e1012e1efb [X86] Add vector mulo with power of two operand tests; NFC
llvm-svn: 355544
2019-03-06 20:25:49 +00:00
Paul Robinson 05efe0fdc4 [PS4] Emit a trap after a stack-protector fail call.
llvm-svn: 355542
2019-03-06 19:57:43 +00:00
Rong Xu 05c0afe842 [PGO] Context sensitive PGO (part 4)
Part 4 of CSPGO changes:
(1) add support in cmake for cspgo build.
(2) fix an issue in big endian.
(3) test cases.

Differential Revision: https://reviews.llvm.org/D54175

llvm-svn: 355541
2019-03-06 19:31:37 +00:00