Commit Graph

121107 Commits

Author SHA1 Message Date
Simon Pilgrim 37a63a748e Use SDValue::getConstantOperandAPInt helper where possible. NFCI.
llvm-svn: 355267
2019-03-02 11:11:22 +00:00
Xing GUO b285878907 [llvm-objdump] Should print unknown d_tag in hex format
Summary:
Currently, `llvm-objdump` prints "unknown" instead of the d_tag value in hex format, because getDynamicTagAsString returns "unknown" rather than an empty string.

Reviewers: grimar, jhenderson

Reviewed By: jhenderson

Subscribers: rupprecht, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58763

llvm-svn: 355262
2019-03-02 04:20:28 +00:00
Thomas Lively 43876ae7bc [WebAssembly] Expand operations not supported by SIMD
Summary:
This prevents crashes in instruction selection when these operations
are used. The tests check that the scalar version of the instruction
is used where applicable, although some expansions do not use the
scalar version.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58859

llvm-svn: 355261
2019-03-02 03:32:25 +00:00
Amaury Sechet f24abf6511 [X86] Improve use of SHLD/SHRD
Summary:
This extends the variety of patterns that can generate a SHLD instead of using two shifts.

This fixes a regression that would be introduced by D57367 or D33587.

Reviewers: RKSimon, craig.topper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D57389

llvm-svn: 355260
2019-03-02 02:44:16 +00:00
Florian Hahn 98f11a7d75 [SCEV] Handle case where MaxBECount is less precise than ExactBECount for OR.
In some cases, MaxBECount can be less precise than ExactBECount for AND
and OR (the AND case was PR26207). In the OR test case, both ExactBECounts are
undef, but the MaxBECounts are different, so we hit the assertion below. This
patch uses the same solution the AND case already uses.

Assertion failed:
   ((isa<SCEVCouldNotCompute>(ExactNotTaken) || !isa<SCEVCouldNotCompute>(MaxNotTaken))
     && "Exact is not allowed to be less precise than Max"), function ExitLimit

This patch also consolidates test cases for both AND and OR in a single
test case.

Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=13245

Reviewers: sanjoy, efriedma, mkazantsev

Reviewed By: sanjoy

Differential Revision: https://reviews.llvm.org/D58853

llvm-svn: 355259
2019-03-02 02:31:44 +00:00
Florian Hahn 3c7e92b5d6 [SCEV] Remove undef check for SCEVConstant (NFC)
The value stored in SCEVConstant is of type ConstantInt*, which can
never be UndefValue. So we should never hit that code.

Reviewers: mkazantsev, sanjoy

Reviewed By: sanjoy

Differential Revision: https://reviews.llvm.org/D58851

llvm-svn: 355257
2019-03-02 01:57:28 +00:00
Vlad Tsyrklevich 53a9f1d367 Revert "[DWARFFormValue] Cleanup DWARFFormValue interface. (2/2) (NFC)"
This reverts commit r355233, it was causing UBSan failures.

llvm-svn: 355255
2019-03-02 01:10:00 +00:00
Thomas Lively 7ec62fde78 Revert "[WebAssembly][WIP] Expand operations not supported by SIMD"
This was accidentally committed without tests or review.

llvm-svn: 355254
2019-03-02 00:55:16 +00:00
Mandeep Singh Grang a14f20c5b3 [ProfileData] Sort FuncData before iteration to remove non-determinism
Reviewers: rsmith, bogner, dblaikie

Reviewed By: dblaikie

Subscribers: Hahnfeld, jdoerfert, vsk, dblaikie, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D57986

llvm-svn: 355252
2019-03-02 00:47:43 +00:00
Thomas Lively f5a8c28e7e [WebAssembly][WIP] Expand operations not supported by SIMD
llvm-svn: 355247
2019-03-02 00:18:07 +00:00
Daniel Sanders c365cee658 [tblgen] Track CodeInit origins when possible
Summary:
Add an SMLoc to CodeInit that records the source line it originated from.
This allows tablegen to point precisely at portions of code when reporting
errors within the CodeInit. For example, in the upcoming GlobalISel
combiner, it can report undefined expansions and point at the instance of
the expansion. This is achieved using something like:
  SMLoc::getFromPointer(SMLoc::getPointer() +
                        (StringRef - CodeInit::getValue()))

The location is lost when producing a CodeInit by string concatenation so
a fallback SMLoc is required (e.g. the Record::getLoc()) but that's pretty
rare for CodeInits.
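
A hedged C++ sketch of that arithmetic (the CodeInit::getLoc() accessor name
and the Record::getLoc() fallback are assumptions for illustration):

  #include "llvm/Support/SMLoc.h"
  #include "llvm/TableGen/Record.h"
  using namespace llvm;

  SMLoc locForSubstring(const CodeInit *CI, const Record &R, StringRef Str) {
    SMLoc Base = CI->getLoc();           // origin recorded by this patch
    if (!Base.isValid())
      return R.getLoc().front();         // e.g. concatenated CodeInits
    ptrdiff_t Offset = Str.data() - CI->getValue().data();
    return SMLoc::getFromPointer(Base.getPointer() + Offset);
  }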

There's a reasonable case for extending tracking to a couple of other Init
objects; for example, StringInits are often parsed and it would be good to
point inside the string when reporting errors about that. However, location
tracking also harms de-duplication. This is fine for CodeInit, where there are
only a few hundred of them (~160 for X86), and it may be worth it for
StringInit (~86k up to ~1.9M for a roughly 15MB increase for X86).
However, the origin tracking would be a _terrible_ idea for IntInit, BitInit,
and UnsetInit. I haven't measured any of those three, but BitInit would
most likely be on the order of increasing the current 2 BitInit values up
to billions.

Reviewers: volkan, aditya_nandakumar, bogner, paquette, aemerson

Reviewed By: paquette

Subscribers: javed.absar, kristof.beyls, dexonsmith, llvm-commits, kristina

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58141

llvm-svn: 355245
2019-03-02 00:12:57 +00:00
Paul Robinson d8632c92a7 Try to fix Windows bots after r355226.
Windows has two path separator characters.

llvm-svn: 355235
2019-03-01 22:28:13 +00:00
Jonas Devlieghere 2dc2baa8cc [DWARFFormValue] Cleanup DWARFFormValue interface. (2/2) (NFC)
Continues the work started in r354941. Changes all but one use of
extractValue to the static createFromData.

llvm-svn: 355233
2019-03-01 22:14:24 +00:00
Paul Robinson 1ca25763f0 [DWARF] Make -g with empty assembler source work better.
This was sometimes causing clang or llvm-mc to crash, and in other
cases could emit a bogus DWARF line-table header. I did an interim
patch in r352541; this patch should be a cleaner and more complete
fix, and retains the test.

Addresses PR40538.

Differential Revision: https://reviews.llvm.org/D58750

llvm-svn: 355226
2019-03-01 20:58:04 +00:00
Craig Topper 4cfc39179e [TableGen][SelectionDAG][X86] Add specific isel matchers for immAllZerosV/immAllOnesV. Remove bitcasts from X86 patterns that are no longer necessary.
Previously we had build_vector PatFrags that called ISD::isBuildVectorAllZeros/Ones. Internally the ISD::isBuildVectorAllZeros/Ones look through bitcasts, but we aren't able to take advantage of that in isel. Instead we have to canonicalize the types of the all zeros/ones build_vectors and insert bitcasts. Then we have to pattern match those exact bitcasts.

By emitting specific matchers for these 2 nodes, we can make isel look through any bitcasts without needing to explicitly match them. We should also be able to remove the canonicalization to vXi32 from lowering, but I've left that for a follow up.

This removes something like 40,000 bytes from the X86 isel table.

Differential Revision: https://reviews.llvm.org/D58595

llvm-svn: 355224
2019-03-01 20:18:38 +00:00
Nikita Popov ed3ca9272f [ValueTracking] Known bits support for unsigned saturating add/sub
We have two sources of known bits:

1. For adds leading ones of either operand are preserved. For sub
leading zeros of LHS and leading ones of RHS become leading zeros in
the result.

2. The saturating math is a select between add/sub and an all-ones/
zero value. As such we can carry out the add/sub known bits
calculation, and only preserve the known one/zero bits respectively.
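
As a toy illustration (plain C++, not the ValueTracking code), the two
observations combine like this for an 8-bit uadd.sat; this sketch only
propagates the leading ones, while the real code also reuses the full add/sub
known-bits computation:

  #include <algorithm>
  #include <cstdint>

  struct Known { uint8_t Zero = 0, One = 0; };   // known-0 / known-1 masks

  static unsigned leadingOnes(uint8_t One) {
    unsigned N = 0;
    for (int Bit = 7; Bit >= 0 && ((One >> Bit) & 1); --Bit)
      ++N;
    return N;
  }

  Known knownForUAddSat(Known L, Known R) {
    // 1. Leading ones of either operand survive an unsigned (saturating) add.
    unsigned Ones = std::max(leadingOnes(L.One), leadingOnes(R.One));
    // 2. uadd.sat is select(overflow, 0xFF, add), so only known-one bits may
    //    be kept; nothing becomes known zero (the 0xFF arm sets every bit).
    Known Result;                                // Zero stays 0: nothing known-0
    if (Ones)
      Result.One = uint8_t(0xFFu << (8 - Ones)); // the preserved leading ones
    return Result;
  }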

Differential Revision: https://reviews.llvm.org/D58329

llvm-svn: 355223
2019-03-01 20:07:04 +00:00
Philip Reames cf0a978e1f [InstCombine] Extend saturating idempotent atomicrmw transform to FP
I'm assuming that the NaN propagation logic for InstructionSimplify's handling of fadd and fsub is correct, and applying the same to atomicrmw.

Differential Revision: https://reviews.llvm.org/D58836

llvm-svn: 355222
2019-03-01 19:50:36 +00:00
Sanjay Patel 6e1e7e1c3e [InstCombine] move add after umin/umax
In the motivating cases from PR14613:
https://bugs.llvm.org/show_bug.cgi?id=14613
...moving the add enables us to narrow the
min/max which eliminates zext/trunc which
enables significantly better vectorization.
But that bug is still not completely fixed.

https://rise4fun.com/Alive/5KQ

  Name: umax
  Pre: C1 u>= C0
  %a = add nuw i8 %x, C0
  %cond = icmp ugt i8 %a, C1
  %r = select i1 %cond, i8 %a, i8 C1
  =>
  %c2 = icmp ugt i8 %x, C1-C0
  %u2 = select i1 %c2, i8 %x, i8 C1-C0
  %r = add nuw i8 %u2, C0

  Name: umin
  Pre: C1 u>= C0
  %a = add nuw i32 %x, C0
  %cond = icmp ult i32 %a, C1
  %r = select i1 %cond, i32 %a, i32 C1
  =>
  %c2 = icmp ult i32 %x, C1-C0
  %u2 = select i1 %c2, i32 %x, i32 C1-C0
  %r = add nuw i32 %u2, C0

llvm-svn: 355221
2019-03-01 19:42:40 +00:00
Vlad Tsyrklevich 8925138007 Revert "[MIPS GlobalISel] Fix mul operands"
This reverts commit r355178, it is causing ASan failures on the
sanitizer bots.

llvm-svn: 355219
2019-03-01 18:58:22 +00:00
Philip Reames 2226e9a745 [LICM] Infer proper alignment from loads during scalar promotion
This patch fixes an issue where we would compute an unnecessarily small alignment during scalar promotion when no store is guaranteed to execute, but we've proven load speculation safety. Since speculating a load requires proving the existing alignment is valid at the new location (see Loads.cpp), we can use the alignment fact from the load.

For non-atomics, this is a performance problem. For atomics, this is a correctness issue, though an *incredibly* rare one to see in practice. For atomics, we might not be able to lower an improperly aligned load or store (i.e. i32 align 1). If such an instruction makes it all the way to codegen, we *may* fail to codegen the operation, or we may simply generate a slow call to a library function. The part that makes this super hard to see in practice is that the memory location actually *is* well aligned, and instcombine knows that. So, to see a failure, you have to have a) hit the bug in LICM, b) somehow hit a depth limit in InstCombine/ValueTracking to avoid fixing the alignment, and c) then have generated an instruction which fails codegen rather than simply emitting a slow libcall. All around, pretty hard to hit.

Differential Revision: https://reviews.llvm.org/D58809

llvm-svn: 355217
2019-03-01 18:45:05 +00:00
Philip Reames 77982868c5 [InstCombine] Extend "idempotent" atomicrmw optimizations to floating point
An idempotent atomicrmw is one that does not change memory in the process of execution.  We have already added handling for the various integer operations; this patch extends the same handling to floating point operations which were recently added to IR.

Note: At the moment, we canonicalize idempotent fsub to fadd when ordering requirements prevent us from using a load.  As discussed in the review, I will be replacing this with canonicalizing both floating point ops to integer ops in the near future.

Differential Revision: https://reviews.llvm.org/D58251

llvm-svn: 355210
2019-03-01 18:00:07 +00:00
Thomas Lively d295f51469 Revert "[WebAssembly] Lower SIMD shifts since they are fixed in V8"
They weren't fixed in V8. Oops.

llvm-svn: 355208
2019-03-01 17:43:55 +00:00
Jonas Hahnfeld e071cd86df Hide two unused debugging methods, NFCI.
GCC correctly moans that PlainCFGBuilder::isExternalDef(llvm::Value*) and
StackSafetyDataFlowAnalysis::verifyFixedPoint() are defined but not used
in Release builds. Hide them behind 'ifndef NDEBUG'.
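
For illustration, the pattern looks like this (the function body here is a
placeholder, not the real helper):

  #include "llvm/IR/Value.h"

  #ifndef NDEBUG
  // Only referenced from assert()/LLVM_DEBUG() code, which is compiled out of
  // Release builds, so the definition must be hidden as well.
  static bool isExternalDef(const llvm::Value *V) {
    return V != nullptr; // placeholder body
  }
  #endif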

llvm-svn: 355205
2019-03-01 17:15:21 +00:00
Manman Ren 576124a319 Try to fix NetBSD buildbot breakage introduced in D57463.
By including the header file in the source.

llvm-svn: 355202
2019-03-01 15:25:24 +00:00
Oliver Stannard 82fbbc21fd [ARM] Fix FP16 stack loads/stores for Thumb2 with frame pointer
The new addressing mode added for the v8.2A FP16 instructions uses bit 8 of the
immediate to encode the sign of the offset, like the other FP loads/stores, so
need to be treated the same way.

Differential revision: https://reviews.llvm.org/D58816

llvm-svn: 355201
2019-03-01 14:20:28 +00:00
Oliver Stannard e019e6223b [ARM] Consider undefined-on-NaN conditions in checkVSELConstraints
This function was not checking for the condition code variants which are
undefined if either input is NaN, so we were missing selection of the VSEL
instruction in some cases when using -fno-honor-nans or -ffast-math.

Differential revision: https://reviews.llvm.org/D58812

llvm-svn: 355199
2019-03-01 13:58:25 +00:00
George Rimar a7ba1a0f81 [yaml2obj] - Allow setting custom sh_info for RawContentSection sections.
This is for tweaking SHT_SYMTAB sections.
Their sh_info usually contains the number of symbols + 1.
But for creating invalid inputs for test cases it would be convenient
to allow explicitly overriding this field from YAML.

Differential revision: https://reviews.llvm.org/D58779

llvm-svn: 355193
2019-03-01 10:18:16 +00:00
Diana Picus 54829ec5d0 [ARM GlobalISel] Support G_CTLZ for Thumb2
Same as ARM mode but with different opcode.

llvm-svn: 355191
2019-03-01 10:12:28 +00:00
Nicola Zaghen a896756955 [Tablegen] Add support for the !mul operator.
This is a small addition to arithmetic operations that improves
expressiveness of the language.

Differential Revision: https://reviews.llvm.org/D58775

llvm-svn: 355187
2019-03-01 09:46:29 +00:00
Igor Kudrin a38432cefb [CommandLine] Allow grouping options which can have values.
This patch allows all forms of values for options to be used at the end
of a group. With the fix, it is possible to follow the way GNU binutils
tools handle grouping options better. For example, the -j option can be
used with objdump in any of the following ways:

$ objdump -d -j .text a.o
$ objdump -d -j.text a.o
$ objdump -dj .text a.o
$ objdump -dj.text a.o
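
A hedged sketch of declarations that exercise this (option names mirror the
objdump example above; the exact set of modifiers used by the real tools may
differ):

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  // "-d" is a simple grouping flag; "-j" is a grouping option that takes a
  // value, which this patch now allows at the end of a group ("-dj .text",
  // "-dj.text") as well as stand-alone ("-j .text", "-j.text").
  static cl::opt<bool> Disassemble("d", cl::desc("Disassemble"), cl::Grouping);
  static cl::opt<std::string> Section("j", cl::desc("Dump only this section"),
                                      cl::value_desc("section"), cl::Grouping);

  int main(int argc, char **argv) {
    cl::ParseCommandLineOptions(argc, argv);
    return 0;
  }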

Differential Revision: https://reviews.llvm.org/D58711

llvm-svn: 355185
2019-03-01 09:22:42 +00:00
Igor Kudrin 875f05828d [CommandLine] Do not crash if an option has both ValueRequired and Grouping.
If an option, which requires a value, has a `cl::Grouping` formatting
modifier, it works well as long as it is used at the end of a group,
or as a separate argument. However, if the option appears accidentally
in the middle of a group, the program just crashes. This patch prints
an error message instead.

Differential Revision: https://reviews.llvm.org/D58499

llvm-svn: 355184
2019-03-01 09:20:56 +00:00
Stanislav Mekhanoshin bb98841399 [AMDGPU] Mark ds instructions as maybeAtomic
These were not recognized as potential atomics by the memory legalizer.
The test was passing not because the legalizer did the right thing, but
because it skipped all these instructions. When I fixed the
DS description, the test started to fail because the region address had
changed from 4 to 2 a while ago.

Differential Revision: https://reviews.llvm.org/D58802

llvm-svn: 355179
2019-03-01 07:59:17 +00:00
Petar Avramovic 9bf43b5c26 [MIPS GlobalISel] Fix mul operands
Unsigned mul high for MIPS32 is selected into two PseudoInstructions:
PseudoMULTu and PseudoMFHI, which use the accumulator register class ACC64 for
some of their operands. Registers in this class have the appropriate hi and lo
registers as subregisters: $lo0 and $hi0 are subregisters of $ac0, etc.
The mul instruction implicit-defs $lo0 and $hi0 according to MipsInstrInfo.td.
In functions where mul and PseudoMULTu are both present, fastRegisterAllocator
will "run out of registers during register allocation" because
'calcSpillCost' for $ac0 returns spillImpossible, since subregisters
$lo0 and $hi0 of $ac0 are reserved by the mul instruction above. A solution is
to mark the implicit-defs of $lo0 and $hi0 as dead in the mul instruction.

Differential Revision: https://reviews.llvm.org/D58715

llvm-svn: 355178
2019-03-01 07:35:57 +00:00
Petar Avramovic a48285a190 [MIPS GlobalISel] Select G_UMULH
Legalize G_UMULO and select G_UMULH for MIPS32.

Differential Revision: https://reviews.llvm.org/D58714

llvm-svn: 355177
2019-03-01 07:25:44 +00:00
Fangrui Song f4b25f700a [ConstantHoisting] Call cleanup() in ConstantHoistingPass::runImpl to avoid dangling elements in ConstIntInfoVec for new PM
Summary:
ConstIntInfoVec contains elements extracted from the previous function.
In the new PM, releaseMemory() is not called and the dangling elements can
cause a segfault in findConstantInsertionPoint.

Rename releaseMemory() to cleanup() to deliver the idea that it is
mandatory and call cleanup() in ConstantHoistingPass::runImpl to fix
this.

Reviewers: ormris, zzheng, dmgreen, wmi

Reviewed By: ormris, wmi

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58589

llvm-svn: 355174
2019-03-01 05:27:01 +00:00
Craig Topper 4f61308af2 [Subtarget] Remove static global constructor call from the tablegened subtarget feature tables
Subtarget features are stored in a std::bitset that has been subclassed. There is a special constructor to allow the tablegen files to provide a list of bits to initialize the std::bitset to. This constructor isn't constexpr and std::bitset doesn't support many constexpr operations either. This results in a static global constructor being used to initialize the feature bitsets in these files at startup.

To fix this I've introduced a new FeatureBitArray class that holds three 64-bit values representing the initial bit values and taught tablegen to emit hex constants for them based on the feature enum values. This makes the tablegen files less readable than they were before. I can add the list of features back as a comment if we think that's important.

I've added a method to convert from this class into the std::bitset subclass we had before. I considered making the new FeatureBitArray class just implement the std::bitset interface we need instead, but thought I'd see how others felt about that first.
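
A hedged sketch of the shape described above (member names and widths are
illustrative, not the generated code):

  #include <array>
  #include <bitset>
  #include <cstdint>

  class FeatureBitArray {
    std::array<uint64_t, 3> Bits; // room for up to 192 subtarget features
  public:
    constexpr FeatureBitArray(uint64_t W0, uint64_t W1, uint64_t W2)
        : Bits{{W0, W1, W2}} {}
    // Conversion back to the std::bitset-style container used elsewhere.
    std::bitset<192> getAsBitset() const {
      std::bitset<192> Result;
      for (unsigned I = 0; I < 3; ++I)
        for (unsigned B = 0; B < 64; ++B)
          if ((Bits[I] >> B) & 1)
            Result.set(I * 64 + B);
      return Result;
    }
  };

  // Tablegen can then emit plain hex initializers, e.g.
  //   FeatureBitArray(0x4ULL, 0x0ULL, 0x0ULL)
  // which needs no static constructor at program start-up.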

I've simplified the interfaces to SetImpliedBits and ClearImpliedBits a little to minimize the number of times we need to convert to the bitset.

This removes about 27K from my local release+asserts build of llc.

Differential Revision: https://reviews.llvm.org/D58520

llvm-svn: 355167
2019-03-01 02:19:26 +00:00
Thomas Lively c4b674955c [WebAssembly] Lower SIMD shifts since they are fixed in V8
Reviewers: sbc100

Subscribers: dschuff, jgravelle-google, hiraditya, aheejin, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58800

llvm-svn: 355163
2019-03-01 01:38:54 +00:00
Tom Stellard 33634d1b25 AMDGPU/GlobalISel: Implement select for G_INSERT
Re-commit r344310.

Reviewers: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D53116

llvm-svn: 355159
2019-03-01 00:50:26 +00:00
Thomas Lively ae79f42a2f [WebAssembly] Fix crash when @llvm.global_dtors is external
Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58799

llvm-svn: 355157
2019-03-01 00:12:13 +00:00
Tom Stellard 41f32196a0 AMDGPU/GlobalISel: Implement select for G_EXTRACT
Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D49714

llvm-svn: 355156
2019-02-28 23:37:48 +00:00
Joerg Sonnenberger 01530291ea [PPC] Secure PLT only has meaning for PIC
llvm-svn: 355154
2019-02-28 23:33:09 +00:00
Reid Kleckner 701593f1db [sancov] Instrument reachable blocks that end in unreachable
Summary:
These sorts of blocks often contain calls to noreturn functions, like
longjmp, throw, or trap. If they don't end the program, they are
"interesting" from the perspective of sanitizer coverage, so we should
instrument them. This was discussed in https://reviews.llvm.org/D57982.

Reviewers: kcc, vitalybuka

Subscribers: llvm-commits, craig.topper, efriedma, morehouse, hiraditya

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58740

llvm-svn: 355152
2019-02-28 22:54:30 +00:00
Adrian Prantl fa37a00044 dsymutil support for DW_OP_convert
Add support for cloning DWARF expressions that contain base type DIE
references in dsymutil.

<rdar://problem/48167812>

Differential Revision: https://reviews.llvm.org/D58534

llvm-svn: 355148
2019-02-28 22:12:32 +00:00
Eli Friedman d19a7060c6 [AArch64] [Windows] Don't skip constructing UnwindHelp.
In certain cases, the first non-frame-setup instruction in a function is
a branch.  For example, it could be a cbz on an argument.  Make sure we
correctly allocate the UnwindHelp, and find an appropriate register to
use to initialize it.

Fixes https://bugs.llvm.org/show_bug.cgi?id=40184

Differential Revision: https://reviews.llvm.org/D58752

llvm-svn: 355136
2019-02-28 20:38:45 +00:00
Abderrazek Zaafrani abfd10807c [AArch64] Improve FP16 vector convert from short instructions.
https://reviews.llvm.org/D58563

llvm-svn: 355134
2019-02-28 20:21:46 +00:00
Manman Ren 1829512dd3 Add a module pass for order file instrumentation
The basic idea of the pass is to use a circular buffer to log the execution ordering of the functions. We only log the function when it is first executed. We use an 8-byte hash to log the function symbol name.

In this pass, we add three global variables:
(1) an order file buffer: a circular buffer at its own llvm section.
(2) a bitmap for each module: one byte for each function to say if the function is already executed.
(3) a global index to the order file buffer.

At the function prologue, if the function has not been executed (by checking the bitmap), log the function hash, then atomically increase the index.
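
A toy C++ model of the logic inserted at each prologue (sizes and names are
made up for the sketch; the real pass emits this as IR):

  #include <atomic>
  #include <cstddef>
  #include <cstdint>

  constexpr size_t kBufferSize = 1 << 17;   // (1) circular order-file buffer
  constexpr size_t kNumFuncs = 1024;        //     assumed module size for the sketch
  uint64_t OrderFileBuffer[kBufferSize];    //     lives in its own section
  uint8_t Executed[kNumFuncs];              // (2) one byte per function
  std::atomic<uint32_t> BufferIndex{0};     // (3) global index into the buffer

  void orderFilePrologue(uint32_t FuncId, uint64_t FuncNameHash) {
    if (Executed[FuncId])                   // only log the first execution
      return;
    Executed[FuncId] = 1;
    uint32_t Slot = BufferIndex.fetch_add(1) % kBufferSize;
    OrderFileBuffer[Slot] = FuncNameHash;   // 8-byte hash of the symbol name
  }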

Differential Revision:  https://reviews.llvm.org/D57463

llvm-svn: 355133
2019-02-28 20:13:38 +00:00
Rong Xu a6ff69f6dd [PGO] Context sensitive PGO (part 2)
Part 2 of CSPGO changes (mostly related to ProfileSummary).
Note that I use a default parameter in setProfileSummary() and getSummary().
This is to break the dependency in clang. I will make the parameter explicit
after changing clang in a separate patch.

Differential Revision: https://reviews.llvm.org/D54175

llvm-svn: 355131
2019-02-28 19:55:07 +00:00
Sanjay Patel 7fc6ef7dd7 [x86] scalarize extract element 0 of FP math
This is another step towards ensuring that we produce the optimal code for reductions,
but there are other potential benefits as seen in the tests diffs:

  1. Memory loads may get scalarized resulting in more efficient code.
  2. Memory stores may get scalarized resulting in more efficient code.
  3. Complex ops like fdiv/sqrt get scalarized which may be faster instructions depending on uarch.
  4. Even simple ops like addss/subss/mulss/roundss may result in faster operation/less frequency throttling when scalarized depending on uarch.

The TODO comment suggests 1 or more follow-ups for opcodes that can currently result in regressions.

Differential Revision: https://reviews.llvm.org/D58282

llvm-svn: 355130
2019-02-28 19:47:04 +00:00
Jiong Wang 0a039660fa bpf: disassembler support for XADD under sub-register mode
Like the other load/store instructions, "w" register is preferred when
disassembling BPF_STX | BPF_W | BPF_XADD.

v1 -> v2:
 - Updated testcase insn-unit.s (Yonghong)

Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
llvm-svn: 355127
2019-02-28 19:22:34 +00:00
Jiong Wang 3da8bcd0a0 bpf: enable sub-register code-gen for XADD
Supporting sub-register code-gen for XADD is like supporting any other Load
and Store patterns.

No new instruction is introduced.

  lock *(u32 *)(r1 + 0) += w2

has exactly the same underlying insn as:

  lock *(u32 *)(r1 + 0) += r2

The BPF_W width modifier has guaranteed they behave the same at runtime. This
patch merely teaches the BPF back-end that the BPF_W width modifier could work
with the GPR32 register class, and that's all that is needed for sub-register
code-gen support for XADD.

test/CodeGen/BPF/xadd.ll updated to include sub-register code-gen tests.

A new testcase test/CodeGen/BPF/xadd_legal.ll is added to make sure the
legal case could pass on all code-gen modes. It could also test dead Def
check on GPR32. If there is no proper handling like what has been done
inside BPFMIChecking.cpp:hasLivingDefs, then this testcase will fail.

Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
llvm-svn: 355126
2019-02-28 19:21:28 +00:00
Jiong Wang 3d7c265e11 bpf: improve dead Defs check for XADD
BPF XADD semantics require all Defs of XADD are dead, meaning any result of
XADD insn is not used.

However, the BPF backend hasn't enabled sub-register liveness tracking, so when
the source and destination operands of XADD are GPR32, there is no
sub-register dead info. If we rely on the generic
MachineInstr::allDefsAreDead, then we will raise a false alarm on a GPR32 Def.
This was fine as there was no sub-register code-gen support for XADD, which
will be added by the next patch.

To support GPR32 Defs, ideally we could just enable sub-register liveness
tracking on the BPF backend; then allDefsAreDead could work on GPR32 Defs. This
requires implementing TargetSubtargetInfo::enableSubRegLiveness on BPF.

However, the sub-register liveness tracking module inside LLVM is actually
designed for the situation where one register could be split into more
than one sub-register, in which case each sub-register could have its
own liveness and killing one of them doesn't kill the others. So, tracking
liveness for each one makes sense.

For BPF, each 64-bit register could only have one 32-bit sub-register. This
is exactly the case which LLVM thinks brings no benefits for doing
sub-register tracking, because the live range of a sub-register must always
equal that of its parent register; therefore liveness tracking is disabled even
if the back-end has implemented enableSubRegLiveness. The detailed information
is at r232695:

  Author: Matthias Braun <matze@braunis.de>
  Date:   Thu Mar 19 00:21:58 2015 +0000
  Do not track subregister liveness when it brings no benefits

Hence, for BPF, we enhance MachineInstr::allDefsAreDead. Given that the solo
sub-register always has the same liveness as its parent register, LLVM is
already attaching an implicit 64-bit register Def whenever there is
a sub-register Def. The liveness of the implicit 64-bit Def is available.
For example, for "lock *(u32 *)(r0 + 4) += w9", the MachineOperand info
could be:

  $w9 = XADDW32 killed $r0, 4, $w9(tied-def 0),
                       implicit killed $r9, implicit-def dead $r9

Even though w9 is not marked as Dead, the parent register r9 is marked as
Dead correctly, and it is safe to use such information for our purpose.

v1 -> v2:
 - Simplified code logic inside hasLiveDefs. (Yonghong)

Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
llvm-svn: 355124
2019-02-28 19:20:26 +00:00
Sanjay Patel 4a47f5f550 [InstCombine] fold adds of constants separated by sext/zext
This is part of a transform that may be done in the backend:
D13757
...but it should always be beneficial to fold this sooner in IR
for all targets.

https://rise4fun.com/Alive/vaiW

  Name: sext add nsw
  %add = add nsw i8 %i, C0
  %ext = sext i8 %add to i32
  %r = add i32 %ext, C1
  =>
  %s = sext i8 %i to i32
  %r = add i32 %s, sext(C0)+C1

  Name: zext add nuw
  %add = add nuw i8 %i, C0
  %ext = zext i8 %add to i16
  %r = add i16 %ext, C1
  =>
  %s = zext i8 %i to i16
  %r = add i16 %s, zext(C0)+C1

llvm-svn: 355118
2019-02-28 19:05:26 +00:00
Craig Topper 38427c47b9 [X86] Don't peek through bitcasts before checking ISD::isBuildVectorOfConstantSDNodes in combineTruncatedArithmetic
We don't have any combines that can look through a bitcast to truncate a build vector of constants. So the truncate will stick around and give us something like this pattern (binop (trunc X), (trunc (bitcast (build_vector)))) which has two truncates in it. That pattern will be reversed by hoistLogicOpWithSameOpcodeHands in the generic DAG combiner, causing an infinite loop.

Even if we had a combine for (truncate (bitcast (build_vector))), I think it would need to be implemented in getNode otherwise DAG combiner visit ordering would probably still visit the binop first and reverse it. Or combineTruncatedArithmetic would need to do its own constant folding.

Differential Revision: https://reviews.llvm.org/D58705

llvm-svn: 355116
2019-02-28 18:49:29 +00:00
Amara Emerson 8d70e6425c Revert "[AArch64][GlobalISel] Add support for 64 bit vector shuffle using TBL1."
Seems to break some neon intrinsics tests.

llvm-svn: 355115
2019-02-28 18:47:29 +00:00
Thomas Lively f3b4f99007 [WebAssembly] Remove uses of ThreadModel
Summary:
In the clang UI, replaces -mthread-model posix with -matomics as the
source of truth on threading. In the backend, replaces
-thread-model=posix with the atomics target feature, which is now
collected on the WebAssemblyTargetMachine along with all other used
features. These collected features will also be used to emit the
target features section in the future.

The default configuration for the backend is thread-model=posix and no
atomics, which was previously an invalid configuration. This change
makes the default valid because the thread model is ignored.

A side effect of this change is that objects are never emitted with
passive segments. It will instead be up to the linker to decide
whether sections should be active or passive based on whether atomics
are used in the final link.

Reviewers: aheejin, sbc100, dschuff

Subscribers: mehdi_amini, jgravelle-google, hiraditya, sunfish, steven_wu, dexonsmith, rupprecht, jfb, jdoerfert, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D58742

llvm-svn: 355112
2019-02-28 18:39:08 +00:00
Nikita Popov af2b0bef43 [ValueTracking] More accurate unsigned sub overflow detection
Second part of D58593.

Compute precise overflow conditions based on all known bits, rather
than just the sign bits. Unsigned a - b overflows iff a < b, and we
can determine whether this always/never happens based on the minimal
and maximal values achievable for a and b subject to the known bits
constraint.
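
A toy 8-bit illustration of that test (plain C++, not the ValueTracking
implementation):

  #include <cstdint>

  enum class OverflowResult { AlwaysOverflows, NeverOverflows, MayOverflow };

  struct Known { uint8_t Zero, One; };                        // known-0 / known-1 masks
  static uint8_t minVal(Known K) { return K.One; }            // unknown bits -> 0
  static uint8_t maxVal(Known K) { return uint8_t(~K.Zero); } // unknown bits -> 1

  OverflowResult unsignedSubOverflow(Known A, Known B) {
    // a - b wraps exactly when a < b.
    if (maxVal(A) < minVal(B))
      return OverflowResult::AlwaysOverflows;
    if (minVal(A) >= maxVal(B))
      return OverflowResult::NeverOverflows;
    return OverflowResult::MayOverflow;
  }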

llvm-svn: 355109
2019-02-28 18:04:20 +00:00
Chijun Sima 586187639a Make MergeBlockIntoPredecessor conformant to the precondition of calling DTU.applyUpdates
Summary:
It is mentioned in the documentation of DTU that "It is illegal to submit any update that has already been submitted, i.e., you are supposed not to insert an existent edge or delete a nonexistent edge." It is dangerous to violate this rule because DomTree and PostDomTree occasionally crash on this scenario.

This patch fixes `MergeBlockIntoPredecessor`, making it conformant to this precondition.
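
A hedged sketch of code that respects the quoted precondition (the helper name
and surrounding context are illustrative, not the patch itself):

  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Analysis/DomTreeUpdater.h"
  #include "llvm/IR/CFG.h"
  using namespace llvm;

  // Record an edge from every predecessor of BB to Succ, taking care never to
  // submit the same edge twice even if BB has duplicate predecessors.
  static void insertPredEdges(DomTreeUpdater &DTU, BasicBlock *BB,
                              BasicBlock *Succ) {
    SmallVector<DominatorTree::UpdateType, 8> Updates;
    SmallPtrSet<BasicBlock *, 8> Seen;
    for (BasicBlock *Pred : predecessors(BB))
      if (Seen.insert(Pred).second)
        Updates.push_back({DominatorTree::Insert, Pred, Succ});
    DTU.applyUpdates(Updates);
  }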

Reviewers: kuhar, brzycki, chandlerc

Reviewed By: brzycki

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58444

llvm-svn: 355105
2019-02-28 16:47:18 +00:00
Amara Emerson 85c3afd7f6 [AArch64][GlobalISel] Add support for 64 bit vector shuffle using TBL1.
This extends the existing support for shufflevector to handle cases like
<2 x float>, which we can implement by concating the vectors and using a TBL1.

Differential Revision: https://reviews.llvm.org/D58684

llvm-svn: 355104
2019-02-28 16:43:11 +00:00
Kadir Cetinkaya 1b1b1a6135 [Target][ARM] Add a usage for SrcSz to unbreak build-bots without assertions
llvm-svn: 355101
2019-02-28 15:55:11 +00:00
Bjorn Pettersson d30f308a9f Add support for computing "zext of value" in KnownBits. NFCI
Summary:
The description of KnownBits::zext() and
KnownBits::zextOrTrunc() has confusingly been saying
that the operation is equivalent to zero extending the
value we're tracking. That has not been true; instead,
the user has been forced to explicitly set the extended
bits as known zero afterwards.

This patch adds a second argument to KnownBits::zext()
and KnownBits::zextOrTrunc() to control if the extended
bits should be considered as known zero or as unknown.
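
A hedged sketch of a call site after this change (the flag's exact parameter
name is an assumption):

  #include "llvm/Support/KnownBits.h"

  llvm::KnownBits widenUnsigned(const llvm::KnownBits &Narrow,
                                unsigned NewBitWidth) {
    // Previously: Narrow.zext(NewBitWidth) followed by manually marking the
    // new high bits as known zero. Now the intent is stated at the call site.
    return Narrow.zext(NewBitWidth, /*ExtendedBitsAreKnownZero=*/true);
  }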

Reviewers: craig.topper, RKSimon

Reviewed By: RKSimon

Subscribers: javed.absar, hiraditya, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58650

llvm-svn: 355099
2019-02-28 15:45:29 +00:00
Stefan Pintilie a073a18460 [PowerPC] Removed STATISTIC that was causing build errors.
llvm-svn: 355087
2019-02-28 12:40:28 +00:00
Stefan Pintilie bd5429ef38 [PowerPC] Move the stack pointer update instruction later in the prologue and earlier in the epilogue.
Move the stdu instruction in the prologue and epilogue.
This should provide a small performance boost in functions that are able to do
this. I've kept this change rather conservative at the moment and functions
with frame pointers or base pointers will not try to move the stack pointer
update.

Differential Revision: https://reviews.llvm.org/D42590

llvm-svn: 355085
2019-02-28 12:23:28 +00:00
Simon Pilgrim 134bc19079 [X86][AVX] Remove superfluous insert_subvector(zero, bitcast(x)) -> bitcast(insert_subvector(zero, x)) fold
This is caught by other existing bitcast folds.

llvm-svn: 355084
2019-02-28 11:39:52 +00:00
Diana Picus cf0ff638bc [ARM GlobalISel] Make arm_i32imm an IntImmLeaf
This gets rid of some duplication in the TableGen definition, but it
forces us to keep both a pointer and a reference to the subtarget in the
ARMInstructionSelector. That is pretty ugly but it might be a reasonable
trade-off, since the TableGen descriptions should outlive the code in
the selector (or in the worst case we can update to use just the
reference when we get rid of DAGISel).

Differential Revision: https://reviews.llvm.org/D58031

llvm-svn: 355083
2019-02-28 11:13:05 +00:00
Simon Pilgrim 87aeff8bbb [X86][AVX] Fold vf64 concat_vectors(movddup(x),movddup(x)) -> broadcast(x)
llvm-svn: 355078
2019-02-28 10:53:58 +00:00
Diana Picus 3b7beafc77 [ARM GlobalISel] Support global variables for Thumb2
Add the same level of support as for ARM mode (i.e. still no TLS
support).

In most cases, it is sufficient to replace the opcodes with the
t2-equivalent, but there are some idiosyncrasies that I decided to
preserve because I don't understand the full implications:
* For ARM we use LDRi12 to load from constant pools, but for Thumb we
  use t2LDRpci (I'm not sure if the ideal would be to use t2LDRi12 for
  Thumb as well, or to use LDRcp for ARM).
* For Thumb we don't have an equivalent for MOV|LDRLIT_ga_pcrel_ldr, so
  we have to generate MOV|LDRLIT_ga_pcrel plus a load from GOT.

The tests are in separate files because they're hard enough to read even
without doubling the number of checks.

llvm-svn: 355077
2019-02-28 10:42:47 +00:00
Nikita Popov 6c57395fb4 [ValueTracking] More accurate unsigned add overflow detection
Part of D58593.

Compute precise overflow conditions based on all known bits, rather
than just the sign bits. Unsigned a + b overflows iff a > ~b, and we
can determine whether this always/never happens based on the minimal
and maximal values achievable for a and ~b subject to the known bits
constraint.

llvm-svn: 355072
2019-02-28 08:11:20 +00:00
Craig Topper 6ca7398a1e [X86] Use PreprocessISelDAG to convert vector sra/srl/shl to the X86 specific variable shift ISD opcodes.
This allows us to use the same set of isel patterns for sra/srl/shl, which are undefined for out-of-range shifts, and for intrinsic shifts, which aren't undefined.

Doing this late allows DAG combine to have every opportunity to optimize the sra/srl/shl nodes.

This removes about 7000 bytes from the isel table and simplifies the td files.

llvm-svn: 355071
2019-02-28 07:21:26 +00:00
Richard Trieu b37a70f40e Fix IR/Analysis layering issue with OptBisect
OptBisect is in IR due to LLVMContext using it.  However, it uses IR units from
Analysis as well.  This change moves getDescription functions from OptBisect
to their respective IR units.  Generating names for IR units will now be up
to the callers, keeping the Analysis IR units in Analysis.  To prevent
unnecessary string generation, isEnabled function is added so that callers know
when the description needs to be generated.

Differential Revision: https://reviews.llvm.org/D58406

llvm-svn: 355068
2019-02-28 04:00:55 +00:00
Alexandre Ganea 14d58f5986 Fix SupportTests.exe/AllocationTests/MappedMemoryTest.AllocAndReleaseHuge when the machine doesn't have large pages enabled.
llvm-svn: 355067
2019-02-28 03:42:07 +00:00
Alexandre Ganea 68c4827660 Fix non-Windows platforms build break introduced by r355065. Fixes:
In file included from /home/buildbots/ppc64le-lld-multistage-test/ppc64le-lld-multistage-test/llvm/lib/Support/Memory.cpp:14:
/home/buildbots/ppc64le-lld-multistage-test/ppc64le-lld-multistage-test/llvm/include/llvm/Support/Memory.h:38:14: error: private field 'Flags' is not used [-Werror,-Wunused-private-field]
    unsigned Flags = 0;
             ^
1 error generated.

llvm-svn: 355066
2019-02-28 03:03:07 +00:00
Alexandre Ganea b05ba93578 [Memory] Add basic support for large/huge memory pages
This patch introduces Memory::MF_HUGE_HINT which indicates that allocateMappedMemory() shall return a pointer to a large memory page.
However the flag is a hint because we're not guaranteed in any way that we will get back a large memory page. There are several restrictions:

- Large/huge memory pages aren't enabled by default on modern OSes (Windows 10 and Linux at least), and should be manually enabled/reserved.
- Once enabled, it should be kept in mind that large pages are physical only, they can't be swapped.
- Memory fragmentation can affect the availability of large pages, especially after running the OS for a long time and/or running along many other applications.

Memory::allocateMappedMemory() will fall back to 4KB pages if it can't allocate 2MB large pages (if Memory::MF_HUGE_HINT is provided).

Currently, Memory::MF_HUGE_HINT only works on Windows. The hint will be ignored on Linux, 4KB pages will always be returned.
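
A hedged usage sketch (the wrapper name is illustrative):

  #include "llvm/Support/Memory.h"
  #include <system_error>
  using namespace llvm;

  sys::MemoryBlock allocateHotRegion(size_t NumBytes) {
    std::error_code EC;
    sys::MemoryBlock MB = sys::Memory::allocateMappedMemory(
        NumBytes, /*NearBlock=*/nullptr,
        sys::Memory::MF_READ | sys::Memory::MF_WRITE | sys::Memory::MF_HUGE_HINT,
        EC);
    // The hint itself never causes failure: if huge pages are unavailable the
    // call transparently returns ordinary 4KB pages, so only EC needs checking.
    return MB;
  }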

Differential Revision: https://reviews.llvm.org/D58718

llvm-svn: 355065
2019-02-28 02:47:34 +00:00
Eric Christopher 07944353fc Temporarily revert "ArgumentPromotion should copy all metadata to new Function" and the dependent patch "Refine ArgPromotion metadata handling" as they're causing segfaults in argument promotion.
This reverts commits r354032 and r353537.

llvm-svn: 355060
2019-02-28 01:11:12 +00:00
Craig Topper 240315aa64 [X86] Use X86::LAST_VALID_COND instead of assuming X86::COND_S is the last encoding. NFC
llvm-svn: 355059
2019-02-28 01:00:31 +00:00
Matt Arsenault 09a09ef8b7 AMDGPU: Fix typo
llvm-svn: 355056
2019-02-28 00:52:33 +00:00
Matt Arsenault 5d567dc137 AMDGPU: Enable function calls by default
Fixes some crashes on illegal call situations which are unfortunately
still valid IR.

llvm-svn: 355051
2019-02-28 00:40:32 +00:00
Abderrazek Zaafrani 2fc498a652 [AArch64] Generate FP16 vector compare instructions.
https://reviews.llvm.org/D58561

llvm-svn: 355050
2019-02-28 00:31:38 +00:00
Matt Arsenault aa03bcd23c AMDGPU: Fix crashes in invalid call cases
We have to at least tolerate calls to kernels, possibly with a
mismatched calling convention on the callsite.

llvm-svn: 355049
2019-02-28 00:28:44 +00:00
Matt Arsenault d3093c2f1f GlobalISel: Implement fewerElementsVector for phi
llvm-svn: 355048
2019-02-28 00:16:32 +00:00
Matt Arsenault 72bcf15dbf GlobalISel: Implement moreElementsVector for phi
llvm-svn: 355047
2019-02-28 00:01:05 +00:00
Reid Kleckner 4fb3502bc9 [InstrProf] Use separate comdat group for data and counters
Summary:
I hadn't realized that instrumentation runs before inlining, so we can't
use the function as the comdat group. Doing so can create relocations
against discarded sections when references to discarded __profc_
variables are inlined into functions outside the function's comdat
group.

In the future, perhaps we should consider standardizing the comdat group
names that ELF and COFF use. It will save object file size, since
__profv_$sym won't appear in the symbol table again.

Reviewers: xur, vsk

Subscribers: eraman, hiraditya, cfe-commits, #sanitizers, llvm-commits

Tags: #clang, #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D58737

llvm-svn: 355044
2019-02-27 23:38:44 +00:00
Alina Sbirlea fcfa7c5f92 [MemorySSA] Make insertDef insert corresponding phi nodes.
Summary:
The original assumption for the insertDef method was that it would not
materialize Defs out of nowhere, hence it will not insert phis needed
after inserting a Def.

However, when cloning an instruction (use case used in LICM), we do
materialize Defs "out of no-where". If the block receiving a Def has at
least one other Def, then no processing is needed. If the block just
received its first Def, we must check where Phi placement is needed.
The only new usage of insertDef is in LICM, hence the trigger for the bug.

But the original goal of the method also fails to apply for the move()
method. If we move a Def from the entry point of a diamond to either the
left or right blocks, then the merge block must add a phi.
While this usecase does not currently occur, or may be viewed as an
incorrect transformation, MSSA must behave correctly given the scenario.

Resolves PR40749 and PR40754.

Reviewers: george.burgess.iv

Subscribers: sanjoy, jlebar, Prazek, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58652

llvm-svn: 355040
2019-02-27 22:20:22 +00:00
Joerg Sonnenberger 6a198366a0 Default to Secure PLT on PPC for NetBSD and OpenBSD.
This matches the default settings of clang.

llvm-svn: 355038
2019-02-27 21:53:14 +00:00
Philip Reames 288a95fc8c Separate volatility and atomicity/ordering in SelectionDAG
At the moment, we mark every atomic memory access as being also volatile. This is unnecessarily conservative and prohibits many legal transforms (DCE, folding, etc..).

This patch removes MOVolatile from the MachineMemOperands of atomic, but not volatile, instructions. This should be strictly NFC after a series of previous patches which have gone in to ensure backend code is conservative about handling of isAtomic MMOs. Once it's in and baked for a bit, we'll start working through removing unnecessary bailouts one by one. We applied this same strategy to the middle end a few years ago, with good success.

To make sure this patch itself is NFC, it is built on top of a series of other patches which adjust code to (for the moment) be as conservative for an atomic access as for a volatile access and build up a test corpus (mostly in test/CodeGen/X86/atomics-unordered.ll).

Previously landed

    D57593 Fix a bug in the definition of isUnordered on MachineMemOperand
    D57596 [CodeGen] Be conservative about atomic accesses as for volatile
    D57802 Be conservative about unordered accesses for the moment
    rL353959: [Tests] First batch of cornercase tests for unordered atomics.
    rL353966: [Tests] RMW folding tests w/unordered atomic operations.
    rL353972: [Tests] More unordered atomic lowering tests.
    rL353989: [SelectionDAG] Inline a single use helper function, and remove last non-MMO interface
    rL354740: [Hexagon, SystemZ] Be super conservative about atomics
    rL354800: [Lanai] Be super conservative about atomics
    rL354845: [ARM] Be super conservative about atomics

Attention Out of Tree Backend Owners: This patch may break you. If it does, you can use the TLI getMMOFlags hook to restore the MOVolatile to any instruction you need to. (See llvm-dev thread titled "PSA: Changes to how atomics are handled in backends" started Feb 27, 2019.)

Differential Revision: https://reviews.llvm.org/D57601

llvm-svn: 355025
2019-02-27 20:20:08 +00:00
Simon Pilgrim 1001a6ab03 [X86][AVX] Pull out some INSERT_SUBVECTOR combines into a combineConcatVectorOps helper. NFCI
A lot of the INSERT_SUBVECTOR combines can be more generally handled as if they have come from a CONCAT_VECTORS node.

I've been investigating adding a CONCAT_VECTORS combine to X86, but this is a much easier first step that avoids the issue of handling a number of pre-legalization issues that I've encountered.

Differential Revision: https://reviews.llvm.org/D58583

llvm-svn: 355015
2019-02-27 18:46:32 +00:00
Alexey Lapshin d89d638055 Attempt to fix buildbot after r354972 [#1]. NFCI.
llvm-svn: 355013
2019-02-27 18:36:46 +00:00
Rong Xu 6cdf3d8086 Recommit r354930 "[PGO] Context sensitive PGO (part 1)"
Fixed UBSan failures.

llvm-svn: 355005
2019-02-27 17:24:33 +00:00
Eugene Leviant 7f78d4712f [DebugInfo] Apply subprogram attributes on behalf of owner CU
When using full LTO it is possible that a template function definition DIE
is bound to one compilation unit and its declaration to another. We should
add function declaration attributes on behalf of its owner CU, otherwise
we may end up with a malformed file identifier in the function declaration's
DW_AT_decl_file attribute.

Differential revision: https://reviews.llvm.org/D58538

llvm-svn: 354978
2019-02-27 14:46:59 +00:00
Dmitry Preobrazhensky 7904231edb [AMDGPU][MC] Added register size check for VOP3/SDWA/DPP operands
See bug 37943: https://bugs.llvm.org/show_bug.cgi?id=37943

Reviewers: artem.tamazov, arsenm, rampitec

Differential Revision: https://reviews.llvm.org/D58287

llvm-svn: 354974
2019-02-27 13:58:48 +00:00
Alexey Lapshin 77fc1f6049 [DebugInfo] add SectionedAddress to DebugInfo interfaces.
This patch is the fix for https://bugs.llvm.org/show_bug.cgi?id=40703
"wrong line number info for obj file compiled with -ffunction-sections".
The problem happened only with .o files. If an object file contains
several .text sections then line number information is shown incorrectly.
The reason for this is that DwarfLineTable could not detect the section which
corresponds to the specified address (because the address is local to the
section), and as a result it could not select the proper sequence in the
line table. The fix is to pass a SectionIndex with the address, so that it
is possible to differentiate addresses from various sections. With
this fix llvm-objdump shows correct line numbers for disassembled code.

   Differential review: https://reviews.llvm.org/D58194

llvm-svn: 354972
2019-02-27 13:17:36 +00:00
Dmitry Preobrazhensky ef92035827 [AMDGPU][MC][GFX8+] Added syntactic sugar for 'vgpr index' operand of instructions s_set_gpr_idx_on and s_set_gpr_idx_mode
See bug 39331: https://bugs.llvm.org/show_bug.cgi?id=39331

Reviewers: artem.tamazov, arsenm

Differential Revision: https://reviews.llvm.org/D58288

llvm-svn: 354969
2019-02-27 13:12:12 +00:00
Simon Pilgrim 71bb6850cf [X86][AVX] Only combine loads to broadcasts for legal types
Thanks to @echristo for spotting this.

llvm-svn: 354961
2019-02-27 11:17:25 +00:00
Yonghong Song cc290a9e91 [BPF] Don't fail for static variables
Currently, LLVM will print an error like
  Unsupported relocation: try to compile with -O2 or above,
  or check your static variable usage
if user defines more than one static variables in a single
ELF section (e.g., .bss or .data).

There is ongoing effort to support static and global
variables in libbpf and kernel. This patch removed the
assertion so user programs with static variables won't
fail compilation.

The static variable in-section offset is written to
the "imm" field of the corresponding to-be-relocated
bpf instruction. Below is an example to show how the
application (e.g., libbpf) can relate variables to relocations.

  -bash-4.4$ cat g1.c
  static volatile long a = 2;
  static volatile int b = 3;
  int test() { return a + b; }
  -bash-4.4$ clang -target bpf -O2 -c g1.c
  -bash-4.4$ llvm-readelf -r g1.o

  Relocation section '.rel.text' at offset 0x158 contains 2 entries:
      Offset             Info             Type               Symbol's Value  Symbol's Name
  0000000000000000  0000000400000001 R_BPF_64_64            0000000000000000 .data
  0000000000000018  0000000400000001 R_BPF_64_64            0000000000000000 .data
  -bash-4.4$ llvm-readelf -s g1.o

  Symbol table '.symtab' contains 6 entries:
     Num:    Value          Size Type    Bind   Vis      Ndx Name
       0: 0000000000000000     0 NOTYPE  LOCAL  DEFAULT  UND
       1: 0000000000000000     0 FILE    LOCAL  DEFAULT  ABS g1.c
       2: 0000000000000000     8 OBJECT  LOCAL  DEFAULT    4 a
       3: 0000000000000008     4 OBJECT  LOCAL  DEFAULT    4 b
       4: 0000000000000000     0 SECTION LOCAL  DEFAULT    4
       5: 0000000000000000    64 FUNC    GLOBAL DEFAULT    2 test
  -bash-4.4$ llvm-objdump -d g1.o

  g1.o:   file format ELF64-BPF

  Disassembly of section .text:
  0000000000000000 test:
       0:       18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00         r1 = 0 ll
       2:       79 11 00 00 00 00 00 00         r1 = *(u64 *)(r1 + 0)
       3:       18 02 00 00 08 00 00 00 00 00 00 00 00 00 00 00         r2 = 8 ll
       5:       61 20 00 00 00 00 00 00         r0 = *(u32 *)(r2 + 0)
       6:       0f 10 00 00 00 00 00 00         r0 += r1
       7:       95 00 00 00 00 00 00 00         exit
  -bash-4.4$

  . from symbol table, static variable "a" is in section #4, offset 0.
  . from symbol table, static variable "b" is in section #4, offset 8.
  . the first relocation is against symbol #4:
    4: 0000000000000000     0 SECTION LOCAL  DEFAULT    4
    and in-section offset 0 (see llvm-objdump result)
  . the second relocation is against symbol #4:
    4: 0000000000000000     0 SECTION LOCAL  DEFAULT    4
    and in-section offset 8 (see llvm-objdump result)
  . therefore, the first relocation is for variable "a", and
    the second relocation is for variable "b".

Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 354954
2019-02-27 05:36:15 +00:00
Vlad Tsyrklevich c01643087e Revert "[PGO] Context sensitive PGO (part 1)"
This reverts commit r354930, it was causing UBSan failures.

llvm-svn: 354953
2019-02-27 03:45:28 +00:00
Saleem Abdulrasool b67342e7cb Support: enable backtraces on Windows
Some platforms, e.g. Windows, support backtraces but don't have
BACKTRACE. Checking for BACKTRACE prevents Windows from having
backtraces.

Patch by Jason Mittertreiner!

llvm-svn: 354951
2019-02-27 03:21:50 +00:00
Heejin Ahn 82da1ffc16 [WebAssembly] Fix ScopeTops info in CFGStackify for EH pads
Summary:
When creating `ScopeTops` info for `try` ~ `catch` ~ `end_try`, we
should create not only `end_try` -> `try` mapping but also `catch` ->
`try` mapping as well. If this is not created, `block` and `end_block`
markers later added may span across an existing `catch`, resulting in
the incorrect code like:
```
try
  block     --|  (X)
catch         |
  end_block --|
end_try
```

Reviewers: dschuff

Subscribers: sunfish, sbc100, jgravelle-google, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58605

llvm-svn: 354945
2019-02-27 01:35:14 +00:00
Jonas Devlieghere bb111152b7 [DWARFFormValue] Cleanup DWARFFormValue interface. (NFC)
DWARFFormValues can be created from a data extractor or by passing its
value directly. Until now this was done by member functions that
modified an existing object's internal state. This patch replaces a
subset of these methods with static method that return a new
DWARFFormValue.

llvm-svn: 354941
2019-02-27 00:58:09 +00:00
Heejin Ahn cf699b4534 [WebAssembly] Remove unnecessary instructions after TRY marker placement
Summary:
This removes unnecessary instructions after TRY marker placement. There
are two cases:
- `end`/`end_block` can be removed if they overlap with `try`/`end_try`
  and they have the same return types.
- `br` right before `catch` that branches to after `end_try` can be
  deleted.

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58591

llvm-svn: 354939
2019-02-27 00:50:53 +00:00
Jonas Paulsson 129826cd9f [SystemZ] Pass regalloc hints to help Load-and-Test transformations.
Since there is no "Load-and-Test-High" instruction, the 32 bit load of a
register to be compared with 0 can only be implemented with LT if the virtual
GRX32 register ends up in a low part (GR32 register).

This patch detects these cases and passes the GR32 registers (low parts) as
(soft) hints in getRegAllocationHints().

Review: Ulrich Weigand.
llvm-svn: 354935
2019-02-27 00:18:28 +00:00
Vedant Kumar 73522d1678 [HotColdSplit] Disable splitting for sanitized functions
Splitting can make sanitizer errors harder to understand, as the
trapping instruction may not be in the function where the bug was
detected.

rdar://48142697

llvm-svn: 354931
2019-02-26 22:55:46 +00:00
Rong Xu 35d2d51369 [PGO] Context sensitive PGO (part 1)
Current PGO profile counts are not context sensitive. The branch probabilities
for the inlined functions are kept the same for all call-sites, and they might
be very different from the actual branch probabilities. These suboptimal
profiles can greatly affect some downstream optimizations, in particular for
the machine basic block placement optimization.

In this patch, we propose to have a post-inline PGO instrumentation/use pass,
which we called Context Sensitive PGO (CSPGO). For the users who want the best
possible performance, they can perform a second round of PGO instrument/use on
top of the regular PGO. They will have two sets of profile counts. The
first pass profile will be manly for inline, indirect-call promotion, and
CGSCC simplification pass optimizations. The second pass profile is for
post-inline optimizations and code-gen optimizations.

A typical usage:
// Regular PGO instrumentation and generate pass1 profile.
> clang -O2 -fprofile-generate source.c -o gen
> ./gen
> llvm-profdata merge default.*profraw -o pass1.profdata
// CSPGO instrumentation.
> clang -O2 -fprofile-use=pass1.profdata -fcs-profile-generate -o gen2
> ./gen2
// Merge two sets of profiles
> llvm-profdata merge default.*profraw pass1.profdata -o profile.profdata
// Use the combined profile. Pass manager will invoke two PGO use passes.
> clang -O2 -fprofile-use=profile.profdata -o use

This change touches many components in the compiler. The reviewed patch
(D54175) will be committed in phases.

Differential Revision: https://reviews.llvm.org/D54175

llvm-svn: 354930
2019-02-26 22:37:46 +00:00
Stanislav Mekhanoshin da1628eb67 [AMDGPU] Fixed hang during DAG combine
SITargetLowering::reassociateScalarOps() does not touch constants
so that DAGCombiner::ReassociateOps() does not revert the combine.
However a global address is not a ConstantSDNode.

Switched to the method used by DAGCombiner::ReassociateOps() itself
to detect constants.

Differential Revision: https://reviews.llvm.org/D58695

llvm-svn: 354926
2019-02-26 20:56:25 +00:00
Eric Christopher 721eaeff3a Fix a small comment typo.
llvm-svn: 354923
2019-02-26 20:33:22 +00:00
Reid Kleckner 8fda7e15e6 [X86] Fix bug in vectorcall calling convention
Original implementation can't correctly handle __m256 and __m512 types
passed by reference through stack. This patch fixes it.

Patch by Wei Xiao!

Differential Revision: https://reviews.llvm.org/D57643

llvm-svn: 354921
2019-02-26 19:48:16 +00:00
Alina Sbirlea 9026404125 [MemorySSA & SimpleLoopUnswitch] Update MemorySSA in ReplaceUsesOfWith.
SimpleLoopUnswitch must update MemorySSA when removing instructions.
Resolves PR39197.

llvm-svn: 354919
2019-02-26 19:44:52 +00:00
Sanjay Patel 9dada83d6c [InstSimplify] remove zero-shift-guard fold for general funnel shift
As discussed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2019-February/130491.html

We can't remove the compare+select in the general case because
we are treating funnel shift like a standard instruction (as
opposed to a special instruction like select/phi).

That means that if one of the operands of the funnel shift is
poison, the result is poison regardless of whether we know that
the operand is actually unused based on the instruction's
particular semantics.

The motivating case for this transform is the more specific
rotate op (rather than funnel shift), and we are preserving the
fold for that case because there is no chance of introducing
extra poison when there is no anonymous extra operand to the
funnel shift.

llvm-svn: 354905
2019-02-26 18:26:56 +00:00
Petar Avramovic bd39569913 [MIPS GlobalISel] Select G_UADDO
Lower G_UADDO.
Legalize G_UADDO for MIPS32

Differential Revision: https://reviews.llvm.org/D58671

llvm-svn: 354900
2019-02-26 17:22:42 +00:00
Ganesh Gopalasubramanian e172d7008d [X86] AMD znver2 enablement
This patch enables the following

1) AMD family 17h "znver2" tune flag (-march, -mcpu).
2) ISAs that are enabled for "znver2" architecture.
3) For the time being, it uses the znver1 scheduler model.
4) Tests are updated.
5) Scheduler descriptions are yet to be put in place.

Reviewers: craig.topper

Differential Revision: https://reviews.llvm.org/D58343

llvm-svn: 354897
2019-02-26 16:55:10 +00:00
Jonas Paulsson c110b5b69f [SystemZ] Wait with selection of legal vector/FP constants until Select().
This patch aims to make sure that any such constant that can be generated
with a vector instruction (for example VGBM) is recognized as such during
legalization and kept as a target independent node through post-legalize
DAGCombining.

Two new functions named isVectorConstantLegal() and loadVectorConstant()
replace old ways of handling vector/FP constants.

A new struct named SystemZVectorConstantInfo is used to cache the results of
isVectorConstantLegal() and pass them onto loadVectorConstant().

Support for fp128 constants in the presence of FeatureVectorEnhancements1
(z14) has been added.

Review: Ulrich Weigand
https://reviews.llvm.org/D58270

llvm-svn: 354896
2019-02-26 16:47:59 +00:00
Sanjay Patel e8bf0f79bd [InstCombine] canonicalize more unsigned saturated add with 'not'
Yet another pattern variation suggested by:
https://bugs.llvm.org/show_bug.cgi?id=14613

There are 8 more potential commuted patterns here on top of the
8 that were already handled (rL354221, rL354276, rL354393).
We have the obvious commute of the 'add' + commute of the cmp
predicate/operands (ugt/ult) + commute of the select operands:

Name: base
%notx = xor i32 %x, -1
%a = add i32 %notx, %y
%c = icmp ult i32 %x, %y
%r = select i1 %c, i32 -1, i32 %a
=>
%c2 = icmp ult i32 %a, %y
%r = select i1 %c2, i32 -1, i32 %a

Name: ugt
%notx = xor i32 %x, -1
%a = add i32 %notx, %y
%c = icmp ugt i32 %y, %x
%r = select i1 %c, i32 -1, i32 %a
=>
%c2 = icmp ult i32 %a, %y
%r = select i1 %c2, i32 -1, i32 %a

Name: commute select
%notx = xor i32 %x, -1
%a = add i32 %notx, %y
%c = icmp ult i32 %y, %x
%r = select i1 %c, i32 %a, i32 -1
=>
%c2 = icmp ult i32 %a, %y
%r = select i1 %c2, i32 -1, i32 %a

Name: ugt + commute select
%notx = xor i32 %x, -1
%a = add i32 %notx, %y
%c = icmp ugt i32 %x, %y
%r = select i1 %c, i32 %a, i32 -1
=>
%c2 = icmp ult i32 %a, %y
%r = select i1 %c2, i32 -1, i32 %a

https://rise4fun.com/Alive/den

llvm-svn: 354887
2019-02-26 15:18:49 +00:00
Nirav Dave 582d46328c [DAG] Fix constant store folding to handle non-byte sizes.
Avoid crashes from zero-byte values due to sub-byte store sizes.

Reviewers: uabelho, courbet, rnk

Reviewed By: courbet

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58626

llvm-svn: 354884
2019-02-26 15:02:32 +00:00
Simon Atanasyan 8cb497027d [mips] Emit `.module softfloat` directive
This change fixes a crash on an assertion when using the
`soft float` ABI for the mips32r6 target.

llvm-svn: 354882
2019-02-26 14:45:17 +00:00
Andrea Di Biagio c032e2ab7c [MCA] Always check if scheduler resources are unavailable when reporting dispatch stalls.
Dispatch stall cycles may be associated with multiple dispatch stall events.
Before this patch, each stall cycle was associated with a single stall event.
This patch also improves a couple of code comments, and adds a helper method to
query the Scheduler for dispatch stalls.

llvm-svn: 354877
2019-02-26 14:19:00 +00:00
George Rimar b75bf8784e [yaml2obj][obj2yaml] - Add support for the architecture specific dynamic tags.
This allows tools to parse/dump the architecture specific tags
like DT_MIPS_*, DT_PPC64_* and DT_HEXAGON_*

Also fixes a bug in DynamicTags.def which was revealed in this patch.

Differential revision: https://reviews.llvm.org/D58667

llvm-svn: 354876
2019-02-26 14:14:49 +00:00
Igor Kudrin 2d3faad706 [llvm-objdump] Implement -Mreg-names-raw/-std options.
The --disassembler-options, or -M, are used to customize
the disassembler and affect its output.

The two implemented options allow selecting register names on ARM:
* With -Mreg-names-raw, the disassembler uses rNN for all registers.
* With -Mreg-names-std it prints sp, lr and pc for r13, r14 and r15,
  which is the default behavior of llvm-objdump.

Differential Revision: https://reviews.llvm.org/D57680

llvm-svn: 354870
2019-02-26 12:15:14 +00:00
Luke Cheeseman 9e285bef2b [ARM] Add Cortex-M35P
- Add LLVM backend support for Cortex-M35P
- Documentation can be found at
  https://developer.arm.com/products/processors/cortex-m/cortex-m35p

Differential Revision: https://reviews.llvm.org/D57763

llvm-svn: 354868
2019-02-26 12:02:12 +00:00
Simon Pilgrim 566177c3d5 [LegalizeDAG] Use APInt::getSplat helper to create bitreverse masks. NFCI.
llvm-svn: 354867
2019-02-26 11:44:23 +00:00
Simon Pilgrim 810fa04ac7 [LegalizeDAG] Expand SADDO/SSUBO using SADDSAT/SSUBSAT (PR37763)
If SADDSAT/SSUBSAT are legal, then we can expand SADDO/SSUBO by performing an ADD/SUB and a SADDSAT/SSUBSAT and then comparing the results.
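
Roughly, in IR terms (the real expansion is done on SelectionDAG nodes, so this is only a sketch), the overflow bit falls out of comparing the wrapping and saturating results:

%sum = add i32 %x, %y
%sat = call i32 @llvm.sadd.sat.i32(i32 %x, i32 %y)
%ov  = icmp ne i32 %sum, %sat    ; overflow happened iff saturation changed the result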

I looked at doing this for UADDO/USUBO as well but as we don't have to do as many range comparisons I didn't see any/much benefit.

Differential Revision: https://reviews.llvm.org/D58637

llvm-svn: 354866
2019-02-26 11:27:53 +00:00
Eugene Leviant 24b3d258bb [ThinLTO] Use defined node and edge order when dumping DOT file
Differential revision: https://reviews.llvm.org/D58631

llvm-svn: 354850
2019-02-26 07:38:21 +00:00
Vlad Tsyrklevich c6d54ae9da Revert "Improve "llvm-nm -f sysv" output for Elf files"
This reverts commit r354833, it was causing ASan test failures on
sanitizer-x86_64-linux-fast.

llvm-svn: 354849
2019-02-26 07:04:56 +00:00
Dan Gohman c71132c0be [WebAssembly] Properly align fp128 arguments in outgoing varargs arguments
For outgoing varargs arguments, it's necessary to check the OrigAlign field
of the corresponding OutputArg entry to determine argument alignment, rather
than just computing an alignment from the argument value type. This is
because types like fp128 are split into multiple argument values, with
narrower types that don't reflect the ABI alignment of the full fp128.

This fixes the printf("printfL: %4.*Lf\n", 2, lval); testcase.

Differential Revision: https://reviews.llvm.org/D58656

llvm-svn: 354846
2019-02-26 05:20:19 +00:00
Philip Reames 38b14e33a8 [ARM] Be super conservative about atomics
As requested during review of D57601 <https://reviews.llvm.org/D57601>, be equally conservative for atomic MMOs as for volatile MMOs in all in tree backends. At the moment, all atomic MMOs are also volatile, but I'm about to change that.

Differential Revision: https://reviews.llvm.org/D58490

Note: D58498 landed in several pieces as individual backends were approved.  This is the last chunk.
llvm-svn: 354845
2019-02-26 04:30:33 +00:00
Heejin Ahn d2a56ac661 [WebAssembly] Fix a bug deleting instruction in a ranged for loop
Summary: We shouldn't delete elements while iterating a ranged for loop.

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58519

llvm-svn: 354844
2019-02-26 04:08:49 +00:00
Aaron Smith 1d5f8632d7 [CodeView] Emit HasConstructorOrDestructor class option for non-trivial constructors
Reviewers: zturner, rnk, llvm-commits, aleksandr.urakov

Reviewed By: zturner, rnk

Subscribers: jdoerfert, majnemer, asmith

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D44406

llvm-svn: 354841
2019-02-26 03:23:56 +00:00
Reid Kleckner 8b6af00173 [llvm-cov] Fix llvm-cov on Windows and un-XFAIL test
Summary:
The llvm-cov tool needs to be able to find coverage names in the
executable, so the .lprfn and .lcovmap sections cannot be merged into
.rdata.

Also, the linker merges .lprfn$M into .lprfn, so llvm-cov needs to
handle that when looking up sections. It has to support running on both
relocatable object files and linked PE files.

Lastly, when loading .lprfn from a PE file, llvm-cov needs to skip the
leading zero byte added by the profile runtime.

Reviewers: vsk

Subscribers: hiraditya, #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D58661

llvm-svn: 354840
2019-02-26 02:30:00 +00:00
Reid Kleckner 2f055f026a [X86] Fix bug in x86_intrcc with arg copy elision
Summary:
Use a custom calling convention handler for interrupts instead of fixing
up the locations in LowerMemArgument. This way, the offsets are correct
when constructed and we don't need to account for them in as many
places.

Depends on D56883

Replaces D56275

Reviewers: craig.topper, phil-opp

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D56944

llvm-svn: 354837
2019-02-26 02:11:25 +00:00
Sunil Srivastava d72d16f444 Improve "llvm-nm -f sysv" output for Elf files
Specifically, compute and print the Type and Section columns.

Differential Revision: https://reviews.llvm.org/D58263

llvm-svn: 354833
2019-02-26 00:19:39 +00:00
Matt Arsenault 752579736e RegBankSelect: Handle slightly more complex value mappings
Try to use concat_vectors. Also remove unnecessary assert on
pointers. Fixes asserting for <4 x s16> operations and 64-bit pointers
for AMDGPU.

llvm-svn: 354828
2019-02-25 22:24:13 +00:00
Matt Arsenault f4bfe4cd17 AMDGPU/GlobalISel: Fix bit ops for non-power-of-2 sizes
llvm-svn: 354825
2019-02-25 21:32:48 +00:00
Matt Arsenault 82b103998b AMDGPU/GlobalISel: Clamp max implicit_def elements
llvm-svn: 354818
2019-02-25 20:46:06 +00:00
Matt Arsenault 0b14857415 RegisterScavenger: Allow fail without spill
AMDGPU wants to use this in some contexts where
the spilling is either impossible, or a worse alternative
to doing something else.

llvm-svn: 354816
2019-02-25 20:29:04 +00:00
Matt Arsenault f97ace5639 AMDGPU: Remove IntrReadMem from memtime/memrealtime intrinsics
EarlyCSE with MemorySSA was able to use this to merge multiple calls
with no intervening store.

llvm-svn: 354814
2019-02-25 20:16:11 +00:00
Craig Topper 316c58e8f1 [X86] Improve detection of unneeded shift amount masking to also handle the case that the LHS has known zeroes in it
If the LHS has known zeros, the RHS immediate will have had bits removed. So call computeKnownBits to get the known zeroes so we can handle this case.
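
A hedged sketch of the kind of case this now catches (the combine itself works on DAG nodes, and the values here are invented):

%y2  = shl i32 %y, 1       ; bit 0 of %y2 is known zero
%amt = and i32 %y2, 30     ; the usual 31 mask was shrunk because of the known zero bit
%r   = shl i32 %x, %amt    ; the and is still unneeded: x86 masks 32-bit shift amounts to 5 bits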

Differential Revision: https://reviews.llvm.org/D58475

llvm-svn: 354811
2019-02-25 19:42:47 +00:00
Andrea Di Biagio 4a1e59a6e0 Fix a sign compare warning breaking the -Werror build.
The warning was introduced at r354793.

llvm-svn: 354810
2019-02-25 19:33:58 +00:00
Matt Arsenault fd6fd00773 AMDGPU: Correct definitions for bitset instructions
These really read and write the result register, so these need a tied
input.

llvm-svn: 354809
2019-02-25 19:24:46 +00:00
Nikita Popov fcbd7f6495 [Mips] Fix missing masking in fast-isel of br (PR40325)
Fixes https://bugs.llvm.org/show_bug.cgi?id=40325 by zero extending
(and x, 1) the condition before branching on it.

To avoid regressing trivial cases, I'm combining emission of cmp+br
sequences for the single-use + same block case (similar to what we
do in x86). icmpbr1.ll still regresses due to the cross-bb usage
of the condition.

Differential Revision: https://reviews.llvm.org/D58576

llvm-svn: 354808
2019-02-25 18:54:17 +00:00
Amara Emerson 6bcfa1c419 [AArch64][GlobalISel] Refactor selectBuildVector to use MachineIRBuilder. NFC.
This is a preparatory change as I want to use emitScalarToVector() elsewhere,
and in general we want to transition to MIRBuilder instead of using BuildMI
directly.

Differential Revision: https://reviews.llvm.org/D58528

llvm-svn: 354807
2019-02-25 18:52:54 +00:00
Philip Reames a64de6720b [Lanai] Be super conservative about atomics
As requested during review of D57601 <https://reviews.llvm.org/D57601>, be equally conservative for atomic MMOs as for volatile MMOs in all in tree backends. At the moment, all atomic MMOs are also volatile, but I'm about to change that.

Reviewed as part of https://reviews.llvm.org/D58490, with other backends still pending review.

llvm-svn: 354800
2019-02-25 17:36:10 +00:00
Simon Pilgrim 80d0e9c563 [SelectionDAG] Add demanded elts variants to isConstOrConstSplat helpers. NFCI.
These helpers extend the existing isConstOrConstSplat helper checks to support DemandedElts masks as well.

We already had a local version of this in SelectionDAG that computeKnownBits/ComputeNumSignBits made use of, but this adds the functionality directly to the BuildVectorSDNode node and extends isConstOrConstSplat etc. to use that.

This will allow us to reuse the functionality in SimplifyDemandedVectorElts/SimplifyDemandedBits.

Differential Revision: https://reviews.llvm.org/D58503

llvm-svn: 354797
2019-02-25 16:31:58 +00:00
Simon Pilgrim 28441ac75f [DAGCombine] Add undef shuffle elt support to partitionShuffleOfConcats
Support undef shuffle mask indices in the shuffle(concat_vectors, concat_vectors) -> concat_vectors fold

Differential Revision: https://reviews.llvm.org/D58585

llvm-svn: 354793
2019-02-25 16:02:01 +00:00
David Green b504f104b2 [ARM] Add some more missing T1 opcodes for the peephole optimiser
This adds a few extra Thumb1 opcodes to improve the peephole optimiser's
ability to remove redundant cmp instructions. tADC and tSBC require
a small fixup to prevent MOVS being moved past the instruction, giving
the wrong flags.

Differential Revision: https://reviews.llvm.org/D58281

llvm-svn: 354791
2019-02-25 15:50:54 +00:00
Simon Pilgrim a066f1f9e6 [Vectorizer] Add vectorization support for fixed smul/umul intrinsics
This requires a couple of tweaks to existing vectorization functions as they were assuming that only the second call argument (ctlz/cttz/powi) could ever be the 'always scalar' argument, but for smul.fix + umul.fix it's the third argument.

Differential Revision: https://reviews.llvm.org/D58616

llvm-svn: 354790
2019-02-25 15:42:02 +00:00
Luke Cheeseman 59f77e7891 [AArch64] Add support for Cortex-A76 and Cortex-A76AE
- Add LLVM backend support for Cortex-A76 and Cortex-A76AE
- Documentation can be found at
  https://developer.arm.com/products/processors/cortex-a/cortex-a76

llvm-svn: 354788
2019-02-25 15:08:27 +00:00
Simon Pilgrim c61f1e8e6c [X86] Merge ISD::ADD/SUB nodes into X86ISD::ADD/SUB equivalents (PR40483)
Avoid ADD/SUB instruction duplication by reusing the X86ISD::ADD/SUB results.

Includes ADD commutation - I tried to include NEG+SUB SUB commutation as well but this causes regressions as we don't have good combine coverage to simplify X86ISD::SUB.

Differential Revision: https://reviews.llvm.org/D58597

llvm-svn: 354771
2019-02-25 11:19:37 +00:00
James Henderson fd99780c09 [yaml2obj]Re-allow dynamic sections to have raw content
Recently, support was added to yaml2obj to allow dynamic sections to
have a list of entries, to make it easier to write tests with dynamic
sections. However, this change also removed the ability to provide
custom contents to the dynamic section, making it hard to test
malformed contents (e.g. because the section is not a valid size to
contain an array of entries). This change reinstates this. An error is
emitted if raw content and dynamic entries are both specified.

Reviewed by: grimar, ruiu

Differential Revision: https://reviews.llvm.org/D58543

llvm-svn: 354770
2019-02-25 11:02:24 +00:00
Simon Tatham b70fc0c5fd [ARM] Make fullfp16 instructions not conditionalisable.
More or less all the instructions defined in the v8.2a full-fp16
extension are defined as UNPREDICTABLE if you put them in an IT block
(Thumb) or use with any condition other than AL (ARM). LLVM didn't
know that, and was happy to conditionalise them.

In order to force these instructions to count as not predicable, I had
to make a small Tablegen change. The code generation back end mostly
decides if an instruction was predicable by looking for something it
can identify as a predicate operand; there's an isPredicable bit flag
that overrides that check in the positive direction, but nothing that
overrides it in the negative direction.

(I considered the alternative approach of actually removing the
predicate operand from those instructions, but thought that it would
be more painful overall for instructions differing only in data type
to have different shapes of operand list. This way, the only code that
has to notice the difference is the if-converter.)

So I've added an isUnpredicable bit alongside isPredicable, and set
that bit on the right subset of FP16 instructions, and also on the
VSEL, VMAXNM/VMINNM and VRINT[ANPM] families which should be
unpredicable for all data types.

I've included a couple of representative regression tests, both of
which previously caused an fp16 instruction to be conditionalised in
ARM state and (with -arm-no-restrict-it) to be put in an IT block in
Thumb.

Reviewers: SjoerdMeijer, t.p.northover, efriedma

Reviewed By: efriedma

Subscribers: jdoerfert, javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D57823

llvm-svn: 354768
2019-02-25 10:39:53 +00:00
Craig Topper 8c9724ea4f [SelectionDAG] Add a OPC_CheckChild2CondCode to SelectionDAGISel to remove a MoveChild and MoveParent pair.
OPC_CheckCondCode is always used as operand 2 of a setcc. And its always surrounded by a MoveChild2 and a MoveParent. By having a dedicated opcode for this case we can reduce the number of bytes needed for this pattern from 4 bytes to 2.

This saves ~3000 bytes in the X86 table.

llvm-svn: 354763
2019-02-25 03:11:44 +00:00
Kang Zhang 4faa4090c9 [PowerPC] Enhance the fast selection of fptoi & fptrunc instruction and clean up related asserts
Summary:
Fast selection of LLVM fptoi & fptrunc instructions does not handle
VSX instructions well.
We should use a VSX float convert integer instruction instead of a non-VSX float
convert integer instruction if the operand register class is VSSRC or VSFRC, because
i32 and i64 are mapped to VSSRC and VSFRC correspondingly when the VSX feature is
enabled.
For the float trunc instruction, we do similar work as for the float convert integer
instruction to try to use a VSX instruction.

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D58430

llvm-svn: 354762
2019-02-25 02:46:16 +00:00
Simon Pilgrim cfaf663a35 [X86] Combine zext(packus(x),packus(y)) -> concat(x,y) (PR39637)
It's proving tricky to combine shuffles across multiple vector sizes, so for now I'm adding this more specific combine - the pattern is common enough to be worth it as a first step.

llvm-svn: 354757
2019-02-24 19:57:52 +00:00
Craig Topper 3fe4bd464c [X86] Fix tls variable lowering issue with large code model
Summary:
The problem here is the lowering of a TLS variable. Below is the DAG for the code.
SelectionDAG has 11 nodes:

t0: ch = EntryToken
      t8: i64,ch = load<(load 8 from `i8 addrspace(257)* null`, addrspace 257)> t0, Constant:i64<0>, undef:i64
        t10: i64 = X86ISD::WrapperRIP TargetGlobalTLSAddress:i64<i32* @x> 0 [TF=10]
      t11: i64,ch = load<(load 8 from got)> t0, t10, undef:i64
    t12: i64 = add t8, t11
  t4: i32,ch = load<(dereferenceable load 4 from @x)> t0, t12, undef:i64
t6: ch = CopyToReg t0, Register:i32 %0, t4
And when mcmodel is large, the instruction below can NOT be folded.

  t10: i64 = X86ISD::WrapperRIP TargetGlobalTLSAddress:i64<i32* @x> 0 [TF=10]
t11: i64,ch = load<(load 8 from got)> t0, t10, undef:i64
So "t11: i64,ch = load<(load 8 from got)> t0, t10, undef:i64" is lowered to " Morphed node: t11: i64,ch = MOV64rm<Mem:(load 8 from got)> t10, TargetConstant:i8<1>, Register:i64 $noreg, TargetConstant:i32<0>, Register:i32 $noreg, t0"

When llvm starts to lower "t10: i64 = X86ISD::WrapperRIP TargetGlobalTLSAddress:i64<i32* @x> 0 [TF=10]", it fails.

The patch is to fold the load and X86ISD::WrapperRIP.

Fixes PR26906

Patch by LuoYuanke

Reviewers: craig.topper, rnk, annita.zhang, wxiao3

Reviewed By: rnk

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58336

llvm-svn: 354756
2019-02-24 19:33:37 +00:00
Craig Topper 5532a98737 [X86][SSE] Use pblendw for v4i32/v2i64 during isel.
Summary:

Previously we used BLENDPS/BLENDPD but that puts the blend in the FP domain. Under optsize, the two address instruction pass can cause blendps/blendpd to commute to movss/movsd. But we probably shouldn't do that if the original type was an integer. So use pblendw instead.

Reviewers: spatel, RKSimon

Reviewed By: RKSimon

Subscribers: jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58574

llvm-svn: 354755
2019-02-24 19:23:41 +00:00
Craig Topper ce2bd19c49 [X86] Correct some ADC/SBB with immediate scheduler data for Broadwell and Skylake.
Summary:
The AX/EAX/RAX with immediate forms are 2 uops just like the AL with immediate.

The modrm form with r8 and immediate is a single uop just like r16/r32/r64 with immediate.

Reviewers: RKSimon, andreadb

Reviewed By: RKSimon

Subscribers: gbedwell, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58581

llvm-svn: 354754
2019-02-24 19:23:39 +00:00
Craig Topper be3348573e [LegalizeTypes][AArch64][X86] Make type legalization of vector (S/U)ADD/SUB/MULO follow getSetCCResultType for the overflow bits. Make UnrollVectorOverflowOp properly convert from scalar boolean contents to vector boolean contents
Summary:
When promoting the overflow vector for these ops we should use the target's desired setcc result type. This way a v8i32 result type will use a v8i32 overflow vector instead of a v8i16 overflow vector. A v8i16 overflow vector will cause LegalizeDAG/LegalizeVectorOps to have to use v8i32 and truncate to v8i16 in its expansion. By doing this in type legalization instead, we get the truncate into the DAG earlier and give DAG combine more of a chance to optimize it.

We also have to fix unrolling to use the scalar setcc result type for the scalarized operation, and convert it to the required vector element type after the scalar operation. We have to observe the vector boolean contents when doing this conversion. The previous code was just taking the scalar result and putting it in the vector. But for X86 and AArch64 that would have only put the boolean value in bit 0 of the element and left all other bits in the element as 0. We need to ensure all bits in the element are the same. I'm using a select with constants here because that's what setcc unrolling in LegalizeVectorOps used.

Reviewers: spatel, RKSimon, nikic

Reviewed By: nikic

Subscribers: javed.absar, kristof.beyls, dmgreen, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58567

llvm-svn: 354753
2019-02-24 19:23:36 +00:00
Simon Pilgrim 4f4f9abdfa [X86][AVX] Rename lowerShuffleByMerging128BitLanes to lowerShuffleAsLanePermuteAndRepeatedMask. NFC.
Name better matches the other similar 'lane permute' and 'repeated mask' functions we have.

llvm-svn: 354749
2019-02-24 17:30:06 +00:00
Sanjay Patel 9907d3c8b4 [InstCombine] canonicalize add/sub with bool
add A, sext(B) --> sub A, zext(B)
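
For an i1 value of B, a small illustrative sketch of the two equivalent forms (names invented):

%s = sext i1 %b to i32
%r = add i32 %a, %s        ; adds -1 when %b is true
=>
%z = zext i1 %b to i32
%r = sub i32 %a, %z        ; subtracts 1 when %b is true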

We have to choose 1 of these forms, so I'm opting for the
zext because that's easier for value tracking.

The backend should be prepared for this change after:
D57401
rL353433

This is also a preliminary step towards reducing the amount
of bit hackery that we do in IR to optimize icmp/select.
That should be waiting to happen at a later optimization stage.

The seeming regression in the fuzzer test was discussed in:
D58359

We were only managing that fold in instcombine by luck, and
other passes should be able to deal with that better anyway.

llvm-svn: 354748
2019-02-24 16:57:45 +00:00
Sanjay Patel cb04ba032f [CGP] add special-cases to form unsigned add with overflow (PR40486)
There's likely a missed IR canonicalization for at least 1 of these
patterns. Otherwise, we wouldn't have needed the pattern-matching
enhancement in D57516.
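
For reference, a hedged sketch of the base idiom CGP recognizes (the PR-specific special cases are variations of it; names invented):

%a  = add i32 %x, %y
%ov = icmp ult i32 %a, %x        ; unsigned add overflowed iff the sum is below an operand
=>
%res = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %x, i32 %y)
%a   = extractvalue { i32, i1 } %res, 0
%ov  = extractvalue { i32, i1 } %res, 1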

Note that -- unlike usubo added with D57789 -- the TLI hook for
this transform defaults to 'on'. So if there's any perf fallout
from this, targets should look at how they're lowering the uaddo
node in SDAG and/or override that hook.

The x86 diffs suggest that there's some missing pattern-matching
for forming inc/dec.

This should fix the remaining known problems in:
https://bugs.llvm.org/show_bug.cgi?id=40486
https://bugs.llvm.org/show_bug.cgi?id=31754

llvm-svn: 354746
2019-02-24 15:31:27 +00:00
Simon Pilgrim 9b49f36a03 Fix "enumeral and non-enumeral type in conditional expression" gcc7 warning. NFCI.
llvm-svn: 354745
2019-02-24 13:31:52 +00:00
Heejin Ahn 20cf0749cb [WebAssembly] Rename a variable in CFGStackify (NFC)
llvm-svn: 354744
2019-02-24 08:30:06 +00:00
Heejin Ahn 25d924b41f [WebAssembly] Merge two identical switch case routines into one (NFC)
llvm-svn: 354743
2019-02-24 08:19:55 +00:00
Philip Reames 33d7e49bb7 [Hexagon, SystemZ] Be super conservative about atomics
As requested during review of D57601, be equally conservative for atomic MMOs as for volatile MMOs in all in tree backends. At the moment, all atomic MMOs are also volatile, but I'm about to change that.

Reviewed as part of https://reviews.llvm.org/D58490, with other backends still pending review.  

llvm-svn: 354740
2019-02-24 00:45:09 +00:00
Duncan P. N. Exon Smith e7b9464943 VFS: Avoid some unnecessary std::string copies
Thread Twine a little deeper through the VFS to avoid unnecessarily
constructing the same std::string twice in a parameter sequence:

    Twine -> std::string -> StringRef -> std::string

Changing a few parameters from StringRef to Twine avoids the early call
to `Twine::str()`.

llvm-svn: 354739
2019-02-23 23:48:47 +00:00
Craig Topper dc185522fb [TwoAddressInstructionPass] After commuting an instruction and before trying to look for more commutable operands, resample the number of operands.
The new instruction might have fewer operands than the original instruction. If we don't resample, the next loop iteration might read an operand that doesn't exist.

X86 can commute blends to movss/movsd which reduces from 4 operands to 3. This happened in the test case that caused r354363 & company to be reverted. A reduced version of that has been committed here.

Really this whole checking for more commutable operands is a little fragile. It assumes that the new instruction's operands are in the same order and positions as the original's, except for the pair that was swapped. I don't know of anything that breaks this assumption today, but I've left a fixme. Fixing this will likely require an interface change.

llvm-svn: 354738
2019-02-23 21:41:44 +00:00
Craig Topper be9eeb5526 Recommit r354363 "[X86][SSE] Generalize X86ISD::BLENDI support to more value types"
And its follow ups r354511, r354640.

A follow patch will fix the issue that caused it to be reverted.

llvm-svn: 354737
2019-02-23 21:41:42 +00:00
Craig Topper ccc860cb81 Recommit r354647 and r354648 "[LegalizeTypes] When promoting the result of EXTRACT_SUBVECTOR, also check if the input needs to be promoted. Use that to determine the element type to extract"
r354648 was a follow up to fix a regression "[X86] Add a DAG combine for (aext_vector_inreg (aext_vector_inreg X)) -> (aext_vector_inreg X) to fix a regression from my previous commit."

These were reverted in r354713 as their context depended on other patches that were reverted for a bug.

llvm-svn: 354734
2019-02-23 19:51:32 +00:00
Nikita Popov e661f946a7 [WebAssembly] Fix select of and (PR40805)
Fixes https://bugs.llvm.org/show_bug.cgi?id=40805 introduced by
patterns added in D53676.

I'm removing the patterns entirely here, as they are not correct
in the general case. If necessary something more specific can be
added in the future.

Differential Revision: https://reviews.llvm.org/D58575

llvm-svn: 354733
2019-02-23 18:59:01 +00:00
Simon Pilgrim f383a47b7d [X86][AVX] combineInsertSubvector - remove concat_vectors(load(x),load(x)) --> sub_vbroadcast(x)
D58053/rL354340 added this to EltsFromConsecutiveLoads directly

llvm-svn: 354732
2019-02-23 18:53:03 +00:00
Simon Pilgrim 398d0b9e96 Fix MSVC constant truncation warnings. NFCI.
llvm-svn: 354731
2019-02-23 18:49:02 +00:00
Simon Pilgrim e08f177ea2 [X86][AVX] concat_vectors(scalar_to_vector(x),scalar_to_vector(x)) --> broadcast(x)
For AVX1, limit this to i32/f32/i64/f64 loading cases only.

llvm-svn: 354730
2019-02-23 18:34:05 +00:00
Simon Pilgrim 31793733a0 [X86][AVX] Shuffle->Permute+Blend if we have one v4f64/v4i64 shuffle input in place
Even on AVX1 we can pretty cheaply (VPERM2F128+VSHUFPD) permute a single v4f64/v4i64 input (on AVX2 its just a single VPERMPD), followed by a BLENDPD.

llvm-svn: 354729
2019-02-23 17:10:47 +00:00
Craig Topper 75afc0105c [X86] Sign extend the 8-bit immediate when commuting blend instructions to match isel.
Conversion from ConstantSDNode to MachineInstr sign extends immediates from their APInt representation to int64_t.

This commit makes sure we do the same for commuting. The test changes show how this improves CSE. This issue was made worse by MachineCSE using commuteInstruction to undo a commute. So we virtually guarantee the sign extend from isel would be lost.

The improved CSE also occurred with r354363, but that was reverted. I'm working to undo the revert, but wanted to get this fix in while it was easy to see the results.

llvm-svn: 354724
2019-02-23 08:34:10 +00:00
Michael Trent 7dcfac6171 objdump fails to parse Mach-O binaries with n_desc bearing stabs
Summary:
The objdump Mach-O parser uses MachOObjectFile::checkSymbolTable() to
verify the symbol table is in a legal state before dereferencing the
offsets in the table. This routine missed a test for N_STAB symbols
when validating the two-level name space library ordinal for undefined
symbols. If the binary in question contained a value in the n_desc high
byte that is larger than the list of loaded dylibs, checkSymbolTable()
will flag the library ordinal as being out of range. Most of the time
the n_desc field is set to 0 or to small values, but old final linked
binaries exist with N_STAB symbols bearing non-trivial n_desc fields. 

The change here is simply to verify a symbol is not an N_STAB symbol
before consulting the values of n_other or n_desc.

rdar://44977336

Reviewers: lhames, pete, ab

Reviewed By: pete

Subscribers: llvm-commits, rupprecht

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58568

llvm-svn: 354722
2019-02-23 06:19:56 +00:00
Jordan Rupprecht 6387fa2715 [NFC] Fix typos: preceeding -> preceding
llvm-svn: 354715
2019-02-23 01:28:32 +00:00
Reid Kleckner e3876637cf Revert r354363 & co "[X86][SSE] Generalize X86ISD::BLENDI support to more value types"
r354363 caused https://crbug.com/934963#c1, which has a plain C reduced
test case.

I also had to revert some dependent changes:
- r354648
- r354647
- r354640
- r354511

llvm-svn: 354713
2019-02-23 01:19:42 +00:00
Craig Topper 62619d064d [LegalizeTypes] Use PromoteTargetBoolean in PromoteIntOp_ADDSUBCARRY instead of reimplementing it. NFCI
llvm-svn: 354710
2019-02-23 00:38:19 +00:00
Craig Topper a9697f24cf [X86] Enable custom splitting of v8i64/v16i32 sext/zext for avx/avx2 when input type will be promoted by the type legalize to 128-bits.
If the input type will be promoted to 128 bits, it's better to put a sign_extend_inreg/and in the 128 bit register before the split occurs. Otherwise we end up doing it on each half in the wider register.

Some of the overflow arithmetic tests are regressions, but I think we can make some improvement using getSetccResultType in DAG combine and/or type legalization.

llvm-svn: 354709
2019-02-23 00:35:02 +00:00
Konstantin Zhuravlyov 9a278bf6b5 Revert "AMDGPU/NFC: Cleanup subtarget predicates"
It breaks one of our downstream merges, so revert it
temporarily while investigating failures downstream

llvm-svn: 354700
2019-02-22 23:21:06 +00:00
Sam Clegg 8fffa1dfa3 [WebAssembly] Remove unneeded MCSymbolRefExpr variants
We record the type of the symbol (event/function/data/global) in the
MCWasmSymbol and so it should always be clear how to handle a relocation
based on the symbol itself.

The exception is a function which still needs the special @TYPEINDEX;
in that case the relocation contains the signature rather than the address
of the function.

Differential Revision: https://reviews.llvm.org/D58472

llvm-svn: 354697
2019-02-22 22:29:34 +00:00
Sam Clegg ffba00bd47 [WebAssembly] MC: Handle aliases of aliases
Differential Revision: https://reviews.llvm.org/D58417

llvm-svn: 354694
2019-02-22 21:41:42 +00:00
Daniel Sanders 07cda257f8 Restore ability for C++ API users to Enable IPRA.
Summary:
Prior to r310876 one of our out-of-tree targets was enabling IPRA by modifying
the TargetOptions::EnableIPRA. This no longer works on current trunk since the
useIPRA() hook overrides any values that are set in advance. This patch adjusts
the behaviour of the hook so that API users and useIPRA() can both enable it
but useIPRA() cannot disable it if the API user already enabled it.

Reviewers: arsenm

Reviewed By: arsenm

Subscribers: wdng, mgorny, llvm-commits

Differential Revision: https://reviews.llvm.org/D38043

llvm-svn: 354692
2019-02-22 20:59:07 +00:00
Sanjay Patel ffe1cf5e92 [CGP] move overflow intrinsic insertion to common location; NFCI
We need to enhance the uaddo matching to handle special-cases
as seen in PR40486 and PR31754. That means we won't necessarily
have a def-use pattern, so we'll need to check dominance to
determine where to place the intrinsic (as we already do for
usubo). This preliminary patch is just rearranging the code,
so the planned follow-up to improve uaddo will be more clear.

llvm-svn: 354689
2019-02-22 20:20:24 +00:00
Matt Arsenault 7b55066a34 MIR: Preserve incoming frame index numbers
Don't skip incrementing the frame index number
if the object is dead. Instructions can still be
referencing the old frame index number, and this
doesn't attempt to remap those. The resulting
MIR then fails to load because the use instructions
use a higher frame index number than recorded
list of stack objects.

I'm not sure it's possible to craft a testcase
with the existing set of passes. It requires
selectively marking some stack objects
dead in an essentially random order.
StackSlotColoring condenses towards
the low indexes. This avoids a regression in a
future AMDGPU commit when some frame indexes
are lowered separately from PEI.

llvm-svn: 354688
2019-02-22 19:30:38 +00:00
Matt Arsenault 6d05d6a7b6 CodeGen: Make RegAllocRegistry a template class
Will allow re-using the machinery for independent
sets of register allocators.

This will allow AMDGPU to use separate command line
options for the allocator to use for SGPRs separate
from VGPRs.

llvm-svn: 354687
2019-02-22 19:16:52 +00:00
Matt Arsenault 476e26b5d3 AMDGPU: Use removeAllRegUnitsForPhysReg
llvm-svn: 354686
2019-02-22 19:03:36 +00:00
Sam Clegg a5e68748bf [WebAssembly] Remove debug statement submitted in rL354657
Subscribers: dschuff, jgravelle-google, hiraditya, aheejin, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58549

llvm-svn: 354684
2019-02-22 19:00:03 +00:00
Guozhi Wei 4c8e480358 [MBP] Factor out function hasViableTopFallthrough and enhancement
This patch factors out the function hasViableTopFallthrough from rotateLoop and also enhances it. The original code only checks whether there is a block that can be placed before the current loop top. This patch also checks whether the loop top is the most likely successor of its predecessor. The attached test case shows its effect.

Differential Revision: https://reviews.llvm.org/D58393

llvm-svn: 354682
2019-02-22 18:04:37 +00:00
Nirav Dave 46f939c118 Disable big-endian constant store merges from rL354676.
llvm-svn: 354677
2019-02-22 16:20:34 +00:00
Nirav Dave 44037d7a63 [DAGCombine] Fold overlapping constant stores
Fold a smaller constant store into larger constant stores immediately
preceding it.
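
As a rough little-endian illustration in IR terms (the combine itself operates on SelectionDAG store nodes; names invented):

store i32 0, i32* %p         ; wide constant store
%p8 = bitcast i32* %p to i8*
store i8 1, i8* %p8          ; narrow constant store into the first byte
; the narrow constant can be folded into the wide store, leaving a single
; store i32 1, i32* %p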

Reviewers: rnk, courbet

Subscribers: javed.absar, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58468

llvm-svn: 354676
2019-02-22 16:00:19 +00:00
Sanjay Patel a9e289174a [x86] allow narrowing of vector UINT_TO_FP
As discussed in:
D56864
D58197

Always use the narrow (128-bit) instruction when possible.
We already had the signed int version of this transform.

llvm-svn: 354675
2019-02-22 15:47:45 +00:00
Sanjay Patel 1baf7896cc [x86] simplify code in combineExtractSubvector; NFC
Only the 1st fold is attempted pre-legalization, but it requires
legal (simple) types too, so we don't need an EVT in any of the code.

llvm-svn: 354674
2019-02-22 15:28:22 +00:00
Matt Arsenault 65b4ab9921 BreakCriticalEdges: Update PostDominatorTree
llvm-svn: 354673
2019-02-22 15:01:41 +00:00
Petar Jovanovic 6083106b12 [mips][micromips] fix filling delay slots for PseudoIndirectBranch_MM
Filling a delay slot in 32bit jump instructions with a 16bit instruction
can cause issues. According to the documentation such an operation is
unpredictable.
This patch adds opcode Mips::PseudoIndirectBranch_MM alongside
Mips::PseudoIndirectBranch and other instructions that are expanded to jr
instruction and do not allow a 16bit instruction in their delay slots.

Patch by Mirko Brkusanin.

Differential Revision: https://reviews.llvm.org/D58507

llvm-svn: 354672
2019-02-22 14:53:58 +00:00
Roman Tereshin 99a6672bba [LowerSwitch][AMDGPU] Do not handle impossible values
This patch adds LazyValueInfo to LowerSwitch to compute the range of the
value being switched over and reduce the size of the tree LowerSwitch
builds to lower a switch.
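
A hypothetical sketch of the idea: if the switched value has a known range, cases outside that range never need to be lowered.

%x = and i32 %in, 3                 ; LazyValueInfo knows %x is in [0, 3]
switch i32 %x, label %default [
  i32 0, label %bb0
  i32 1, label %bb1
  i32 7, label %bb7                 ; impossible given the range, so it can be dropped
]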

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D58096

llvm-svn: 354670
2019-02-22 14:33:46 +00:00
Chijun Sima 70e97163e0 [DTU] Refine the interface and logic of applyUpdates
Summary:
This patch separates two semantics of `applyUpdates`:
1. User provides an accurate CFG diff and the dominator tree is updated according to the difference of `the number of edge insertions` and `the number of edge deletions` to infer the status of an edge before and after the update.
2. User provides a sequence of hints. Updates mentioned in this sequence might never have happened and may even be duplicated.

Logic changes:

Previously, removing invalid updates is considered a side-effect of deduplication and is not guaranteed to be reliable. To handle the second semantic, `applyUpdates` does validity checking before deduplication, which can cause updates that have already been applied to be submitted again. Then, different calls to `applyUpdates` might cause unintended consequences, for example,
```
DTU(Lazy) and Edge A->B exists.
1. DTU.applyUpdates({{Delete, A, B}, {Insert, A, B}}) // User expects these 2 updates result in a no-op, but {Insert, A, B} is queued
2. Remove A->B
3. DTU.applyUpdates({{Delete, A, B}}) // DTU cancels this update with {Insert, A, B} mentioned above together (Unintended)
```
But by restricting the precondition that updates of an edge need to be strictly ordered as how CFG changes were made, we can infer the initial status of this edge to resolve this issue.

Interface changes:
The second semantic of `applyUpdates` is separated out into `applyUpdatesPermissive`.
These changes enable DTU(Lazy) to use the first semantic if needed, which is quite useful in `transforms/utils`.

Reviewers: kuhar, brzycki, dmgreen, grosser

Reviewed By: brzycki

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58170

llvm-svn: 354669
2019-02-22 13:48:38 +00:00
David Green acb628b2af [ARM] Add some missing thumb1 opcodes to enable peephole optimisation of CMPs
This adds a number of missing Thumb1 opcodes so that the peephole optimiser can
remove redundant CMP instructions.

Reapplying this after the first attempt broke non-thumb1 code as the t2ADDri
instruction can be used with frame indices. In thumb1 we use tADDframe.

Differential Revision: https://reviews.llvm.org/D57833

llvm-svn: 354667
2019-02-22 12:23:31 +00:00
Diana Picus 35e1c6663c [ARM GlobalISel] Support floating point for Thumb2
This is exactly the same as arm mode, so for the instruction selector
tests we just extract them to a new file and run with the same checks
for both arm and thumb mode.

For the legalizer we need to update the tests for soft float a bit, but
only because BL and tBL are slightly different. We could be pedantic and
check that we get a well-formed BL for arm mode and a tBL for thumb, but
for the purposes of the legalizer test it's sufficient to just skip over
the predicate operands in the checks. Also note that we have the
pedantic checks in the divmod test, so we're covered.

llvm-svn: 354665
2019-02-22 09:54:54 +00:00
Heejin Ahn 85631d8b50 [WebAssembly] Remove getBottom function from CFGStackify (NFC)
Summary:
This removes the `getBottom` function and the bookkeeping map of <begin
marker instruction, bottom BB>.

Reviewers: dschuff

Subscribers: sunfish, sbc100, jgravelle-google, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58319

llvm-svn: 354657
2019-02-22 07:19:30 +00:00
Alina Sbirlea 90d2e3a16d [MemorySSA & LoopPassManager] Resolve PR40038.
The correct edge being deleted is not to the unswitched exit block, but to the
original block before it was split. That's the key in the map, not the
value.
The insert is correct. The new edge is to the .split block.

The splitting turns OriginalBB into:
OriginalBB -> OriginalBB.split.
Assuming the original CFG edge: ParentBB->OriginalBB, we must now delete
ParentBB->OriginalBB, not ParentBB->OriginalBB.split.

llvm-svn: 354656
2019-02-22 07:18:37 +00:00
Craig Topper fa6187d230 [LegalizeVectorOps] Improve the placement of ANDs in the ExpandLoad path for non-byte-sized loads.
When we need to merge two adjacent loads the AND mask for the low piece was still sized for the full src element size. But we didn't have that many bits. The upper bits are already zero due to the SRL. So we can skip the AND if we're going to combine with the high bits.

We do need an AND to clear out any bits from the high part. We were anding the high part before combining with the low part, but it looks like ANDing after the OR gets better results.

So we can just emit the final AND after the optional concatenation is done. That will handle skipping the AND before the OR and get rid of extra high bits after the OR.

llvm-svn: 354655
2019-02-22 07:03:25 +00:00
Craig Topper 069cf05e87 [LegalizeVectorOps] Simplify the non-byte sized load handling VectorLegalizer::ExpandLoad. NFCI
Remove an if that should always be true. Merge the body of another into the only block that could make the if true.

llvm-svn: 354654
2019-02-22 06:18:33 +00:00
Chijun Sima f131d6110e [DTU] Deprecate insertEdge*/deleteEdge*
Summary: This patch converts all existing `insertEdge*/deleteEdge*` to `applyUpdates` and marks `insertEdge*/deleteEdge*` as deprecated.

Reviewers: kuhar, brzycki

Reviewed By: kuhar, brzycki

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58443

llvm-svn: 354652
2019-02-22 05:41:43 +00:00
Matt Arsenault 0280a5e143 DAG: Add helper for creating shifts with correct type
llvm-svn: 354649
2019-02-22 03:38:47 +00:00
Craig Topper 3a391fc0e8 [X86] Add a DAG combine for (aext_vector_inreg (aext_vector_inreg X)) -> (aext_vector_inreg X) to fix a regression from my previous commit.
Type legalization is causing two nodes to be created here, but we can use a single node to extend from v8i16 to v2i64.

llvm-svn: 354648
2019-02-22 01:49:53 +00:00
Craig Topper be22f329a9 [LegalizeTypes] When promoting the result of EXTRACT_SUBVECTOR, also check if the input needs to be promoted. Use that to determine the element type to extract.
Otherwise we end up creating extract_vector_elts that then each need to have their input promoted. This can lead to truncates needing to be emitted for each of those.

But we already emitted any_extends when we legalized the extract_subvector. So now we have pairs of any_extend+trunc that partially cancel. But depending on how DAGCombiner visits them we can get weird results.

By promoting the input at the same time we can create only a single any_extend or truncate.

There's one regression in the vector-narrow-binop.ll case, but that looks easy to fix with a follow up patch.

llvm-svn: 354647
2019-02-22 01:49:50 +00:00
Craig Topper 427404c769 [X86] Fix some copy/paste mistakes that caused a VR128 to be used as the address of a load in an isel pattern
This was introduced in r354511.

Fixes PR40811.

llvm-svn: 354640
2019-02-22 00:04:35 +00:00
Matt Arsenault aa6fb4c45e AMDGPU: Remove debugger related subtarget features
As far as I know these aren't needed anymore.

llvm-svn: 354634
2019-02-21 23:27:46 +00:00
Craig Topper 2b34fdc67f [X86] Remove hasSideEffects=1 from the X87 pseudos with folded load.
This was done in r321424 to prevent scheduling from reordering things. But now that we model FPCW as a dependency, I don't think the same scheduling we were trying to prevent can occur.

llvm-svn: 354628
2019-02-21 22:00:15 +00:00
Alina Sbirlea 97468e9282 [MemorySSA & LoopPassManager] Update MemorySSA in formDedicatedExitBlocks.
MemorySSA is now updated when forming dedicated exit blocks.
Resolves PR40037.

llvm-svn: 354623
2019-02-21 21:13:34 +00:00
Konstantin Zhuravlyov c2650178a1 AMDGPU/NFC: Cleanup subtarget predicates
Differential Revision: https://reviews.llvm.org/D58522

llvm-svn: 354620
2019-02-21 20:43:43 +00:00
Sanjay Patel 234a5e8ea4 [x86] vectorize more cast ops in lowering to avoid register file transfers
This is a follow-up to D56864.

If we're extracting from a non-zero index before casting to FP,
then shuffle the vector and optionally narrow the vector before doing the cast:

cast (extelt V, C) --> extelt (cast (extract_subv (shuffle V, [C...]))), 0

This might be enough to close PR39974:
https://bugs.llvm.org/show_bug.cgi?id=39974

Differential Revision: https://reviews.llvm.org/D58197

llvm-svn: 354619
2019-02-21 20:40:39 +00:00
Amara Emerson 1abe05c0dd Re-land "[AArch64][GlobalISel] Implement partial support for G_SHUFFLE_VECTOR""
Thanks to Richard Trieu for pointing out that the failures were due to a
use-after-free of an ArrayRef.

llvm-svn: 354616
2019-02-21 20:20:16 +00:00
Mandeep Singh Grang 096fae32b3 [llvm] Fix typo: 's/ ot / to /' [NFC]
llvm-svn: 354614
2019-02-21 20:04:20 +00:00
Alina Sbirlea d2d3244363 [LoopSimplifyCFG] Update MemorySSA after r353911.
Summary:
MemorySSA is not properly updated in LoopSimplifyCFG after recent changes. Use SplitBlock utility to resolve that and clear all updates once handleDeadExits is finished.
All updates that follow are removal of edges which are safe to handle via the removeEdge() API.
Also, deleting dead blocks is done correctly as is, i.e. delete from MemorySSA before updating the CFG and DT.

Reviewers: mkazantsev, rtereshin

Subscribers: sanjoy, jlebar, Prazek, george.burgess.iv, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58524

llvm-svn: 354613
2019-02-21 19:54:05 +00:00
Alina Sbirlea 73446cd567 [EarlyCSE] Cleanup deadcode. [NFCI]
Summary: Cleanup nop assignments.

Reviewers: george.burgess.iv, davide

Subscribers: sanjoy, jlebar, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58308

llvm-svn: 354612
2019-02-21 19:49:57 +00:00
Krzysztof Parzyszek f6e875bacf [Hexagon] Use misaligned load instead of trap0(#0) for __builtin_trap
The trap instruction is intercepted by various runtime environments,
and instead of a crash it creates confusion.

This reapplies r354606 with a fix.

llvm-svn: 354611
2019-02-21 19:42:39 +00:00
Krzysztof Parzyszek 948c9f93c4 Revert r354606, it breaks asan tests
llvm-svn: 354609
2019-02-21 19:33:58 +00:00
Krzysztof Parzyszek 5f47fac3a2 [Hexagon] Use misaligned load instead of trap0(#0) for __builtin_trap
The trap instruction is intercepted by various runtime environments,
and instead of a crash it creates confusion.

llvm-svn: 354606
2019-02-21 18:39:22 +00:00
Mark Searles 599ce44d3f [AMDGPU] remove unused AssemblerPredicates
An internal build is hitting asserts complaining about too many subtarget
features:
  llvm/utils/TableGen/Types.cpp:42:
    const char* llvm::getMinimalTypeForEnumBitfield(uint64_t):
    Assertion `MaxIndex <= 64 && "Too many bits"' failed.

  llvm/utils/TableGen/AsmMatcherEmitter.cpp:1476:
    void {anonymous}::AsmMatcherInfo::buildInfo():
    Assertion `SubtargetFeatures.size() <= 64 && "Too many subtarget features!"'
    failed.

The short-term solution is to remove a few unused AssemblerPredicates to get
under the limit.

The long-term solution seems to be to revisit these asserts. E.g., rather than
hardcoded '64', use the standard sized std::bitset like the other places that
track subtarget features.

Differential Revision: https://reviews.llvm.org/D58516

llvm-svn: 354604
2019-02-21 18:19:54 +00:00
Sam Clegg 1ed3a0467c [WebAssembly] Don't create MSSymbolWasm object for non-symbols
`__linear_memory` and `__indirect_function_table` are both generated
as imports in wasm object files but are not actually symbols and don't
appear in any symbol table or relocation entry. Indeed we
don't have any symbol type to meaningfully represent either of them.

Differential Revision: https://reviews.llvm.org/D58487

llvm-svn: 354599
2019-02-21 17:05:19 +00:00
Sanjay Patel ba5ee817e9 [DAGCombiner] prevent infinite looping by truncating 'and' (PR40793)
This fold can occur during legalization, so it can fight with promotion
to the larger type. It apparently takes a special sequence and subtarget
to avoid more basic simplifications that would hide the problem.

But there's a bigger question raised here: why does distributeTruncateThroughAnd()
even exist? It duplicates functionality from a more minimal pattern that we
already have. But getting rid of this function requires some preliminary steps.
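
For context, a hedged IR-level sketch of the kind of rewrite distributeTruncateThroughAnd() performs (the real code works on SelectionDAG nodes; names invented):

%masked = and i64 %x, 15
%t = trunc i64 %masked to i32
=>
%xt = trunc i64 %x to i32
%t  = and i32 %xt, 15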

https://bugs.llvm.org/show_bug.cgi?id=40793

llvm-svn: 354594
2019-02-21 16:01:48 +00:00
Matt Arsenault 2e0ee47712 AMDGPU/GlobalISel: Make phis legal
llvm-svn: 354592
2019-02-21 15:48:13 +00:00
Matt Arsenault 8df2f3dab2 RegBankSelect: Allow targets to introduce control flow for mapping
For AMDGPU, if an operand requires an SGPR but is only available as a
VGPR, a loop needs to be introduced to execute the instruction with
each unique combination of values across all lanes. The rest of the
instructions in the block will be moved to a new block following the
loop. Check if the next instruction's parent changed, and update the
iterators and insertion block if this happened.

Tests will be included in a future patch.

llvm-svn: 354591
2019-02-21 15:48:13 +00:00
Nirav Dave dce91c1edb [X86] Fix copy-paste error in @ccz flag.
@ccz operand should be equivalent to @cce.

llvm-svn: 354588
2019-02-21 15:28:31 +00:00
Matt Arsenault b10fa8df3f AMDGPU/GlobalISel: Fix bit count ops for non-power-of-2 types
llvm-svn: 354587
2019-02-21 15:22:20 +00:00
Alex Bradbury db67be889d [RISCV][NFC] IsEligibleForTailCallOptimization -> isEligibleForTailCallOptimization
Also clang-format the modified hunks.

llvm-svn: 354584
2019-02-21 14:31:41 +00:00
Alex Bradbury 047170cfc3 [RISCV] Add implied zero offset load/store alias patterns
Allow load/store instructions with implied zero offset for compatibility with
GNU assembler.

Differential Revision: https://reviews.llvm.org/D57141
Patch by James Clarke.

llvm-svn: 354581
2019-02-21 14:09:34 +00:00
Joey Gouly fdf651ee8d [InferAddressSpaces] Fix fallthrough error
llvm-svn: 354580
2019-02-21 13:10:37 +00:00
Diana Picus dcaa939ab7 [ARM GlobalISel] Support G_FRAME_INDEX for Thumb2
Same as arm mode.

llvm-svn: 354579
2019-02-21 13:00:02 +00:00
Clement Courbet a0321c23e8 Re-land part of r354244 "[DAGCombiner] Eliminate dead stores to stack."
This part introduces the lifetime node.

llvm-svn: 354578
2019-02-21 12:59:36 +00:00
Joey Gouly 92af1360f3 [InferAddressSpaces] Fix crash on select of non-ptr operands
Check the operands of a select are pointers, to determine if it is an address
expression or not.

https://reviews.llvm.org/D58226

llvm-svn: 354576
2019-02-21 12:31:36 +00:00
Simon Pilgrim e6b338cbef [X86][SSE] combineX86ShufflesRecursively - moved to generic op input index lookup. NFCI.
We currently bail if the target shuffle decodes to more than 2 input vectors; this change alters the input index to work for any number of inputs, in preparation for when we drop that requirement.

llvm-svn: 354575
2019-02-21 12:24:49 +00:00
George Rimar 623ae72ad4 [yaml2obj][obj2yaml] - Support SHT_GNU_verdef (.gnu.version_d) section.
This patch adds support for parsing/dumping the .gnu.version section.

Description of the section is: https://refspecs.linuxfoundation.org/LSB_1.3.0/gLSB/gLSB/symverdefs.html

Differential revision: https://reviews.llvm.org/D58437

llvm-svn: 354574
2019-02-21 12:21:43 +00:00
David Green 7a183a86be Revert 354564: [ARM] Add some missing thumb1 opcodes to enable peephole optimisation of CMPs
I believe it's causing bootstrap failures for A32 code. I'll take a look at
what's wrong.

llvm-svn: 354569
2019-02-21 11:03:13 +00:00
James Henderson 67bdfb0a59 [yaml2obj]Allow symbol Index field to take values lower than SHN_LORESERVE
In order to test tool handling of invalid section indexes, I need to
create an object containing such an invalid section index. I could
create a hex-edited binary, but having the ability to use yaml2obj is
preferable. Prior to this change, yaml2obj would reject any explicit
section indexes less than SHN_LORESERVE. This patch changes it to allow
any value.

I had to change the test to use llvm-readelf instead of llvm-readobj,
because llvm-readobj does not like invalid section indexes. I've also
expanded the test to show that the most common SHN_* values are accepted
(SHN_UNDEF, SHN_ABS, SHN_COMMON).

Reviewed by: grimar, jakehehrlich

Differential Revision: https://reviews.llvm.org/D58445

llvm-svn: 354566
2019-02-21 10:57:15 +00:00
David Spickett 8ac2b181a1 [AArch64] Print instruction before atomic semantic annotations
Commit r353303 added annotations when acquire semantics
were dropped from an instruction.

printAnnotation was called before printInstruction.
So if you didn't set a separate comment output stream
you got <comment><instr> instead of <instr><comment>
as expected.

To fix this move the new printAnnotation to after
the instruction is printed.

Differential Revision: https://reviews.llvm.org/D58059

llvm-svn: 354565
2019-02-21 10:42:49 +00:00
David Green 89efe24eba [ARM] Add some missing thumb1 opcodes to enable peephole optimisation of CMPs
This adds a number of missing Thumb1 opcodes so that the peephole optimiser can
remove redundant CMP instructions.

Differential Revision: https://reviews.llvm.org/D57833

llvm-svn: 354564
2019-02-21 10:30:09 +00:00
Fangrui Song 61cd368cdc [ObjectYAML] Support SHT_MIPS_DWARF section type flag
Also reorder SHT_MIPS_DWARF and SHT_MIPS_ABIFLAGS in Object/ELF.cpp.
The test will be added by D58457.

llvm-svn: 354563
2019-02-21 10:19:08 +00:00
Sam Parker 6ed47bee27 [ARM] Negative constants mishandled in ARM CGP
During type promotion, we sometimes convert an add with a
negative constant into a sub with a positive constant. The loop that
performs this transformation has two issues:
- it iterates over a set, causing non-determinism.
- it breaks, instead of continuing, when it finds the first
  non-negative operand.

Differential Revision: https://reviews.llvm.org/D58452

llvm-svn: 354557
2019-02-21 09:33:18 +00:00
Markus Lavin 76dda218a0 [DebugInfo] Prep llvm-dwarfdump for typed DW5 ops.
Adds llvm-dwarfdump support for pretty printing Dwarf5 expression ops
that reference a base type (right now only DW_OP_convert is added).
Includes verification to check that the op's operand is actually a
DW_TAG_base_type DIE.

Differential Revision: https://reviews.llvm.org/D58442

llvm-svn: 354552
2019-02-21 08:20:24 +00:00
Max Kazantsev 10489d76f6 [LoopSimplifyCFG] Add missing MSSA edge deletion
When we create a fictive switch in the preheader, we should take
care of MemorySSA and delete the edge between the old preheader and
the header.

llvm-svn: 354547
2019-02-21 05:51:29 +00:00
Sam Clegg 1634516e35 [WebAssembly] Default to something reasonable in WebAssemblyAddMissingPrototypes
Previously if we couldn't derive a prototype for a "no-prototype"
function from C we would leave it as is:

  void foo(...)

With this change we instead give it an empty signature and remove
the "no-prototype" attribute.

This fixes the current wasm waterfall test failure.

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58488

llvm-svn: 354544
2019-02-21 03:27:00 +00:00
Stanislav Mekhanoshin 42e229e130 [AMDGPU] fix commuted case of sub combine
Differential Revision: https://reviews.llvm.org/D58481

llvm-svn: 354543
2019-02-21 02:58:00 +00:00
Wei Mi 500606f270 [Inliner] Pass nullptr for the ORE param of getInlineCost if RemarkEnabled
is false.

Right now for inliner and partial inliner, we always pass the address of a
valid ORE object to getInlineCost even if RemarkEnabled is false because
no -Rpass is specified. Since ComputeFullInlineCost will be set to true if
ORE is non-null in getInlineCost, this introduces the problem that in
getInlineCost we cannot return early even if we already know the cost is
definitely higher than the threshold. It is a general problem for compile
time.

This patch fixes that by passing nullptr as the ORE argument if RemarkEnabled is
false.

Differential Revision: https://reviews.llvm.org/D58399

llvm-svn: 354542
2019-02-21 02:57:52 +00:00
Xin Tong 2d84c00dfa Add skipFunction to PostRA machine sinking pass.
Summary: Add skipFunction to PostRA machine sinking pass.

Reviewers: junbuml

Subscribers: arsenm, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D57847

llvm-svn: 354541
2019-02-21 02:11:06 +00:00
Amara Emerson 71f2a5e60f Revert "[AArch64][GlobalISel] Implement partial support for G_SHUFFLE_VECTOR"
This reverts r354521 because it broke the bots, but passes on Darwin somehow.

llvm-svn: 354532
2019-02-21 00:31:13 +00:00
Sam Clegg 6028c969ac [WebAssembly] Don't error on conflicting uses of prototype-less functions
When we can't determine with certainty the signature of a function
import we pick the fist signature we find rather than error'ing out.

The resulting program might not do what is expected since we might pick
the wrong signature. However, since it is undefined behavior in C to use the
same function with different signatures, this seems better than refusing
to compile such programs.

Fixes PR40472

Differential Revision: https://reviews.llvm.org/D58304

llvm-svn: 354523
2019-02-20 22:40:57 +00:00
Amara Emerson a946d057b4 [AArch64][GlobalISel] Implement partial support for G_SHUFFLE_VECTOR
This change makes some basic type combinations for G_SHUFFLE_VECTOR legal, and
implements them with a very pessimistic TBL2 instruction in the selector.

For TBL2, support is also needed to generate constant pool entries and load from
them in order to materialize the mask register.

Currently supports <2 x s64> and <4 x s32> result types.

Differential Revision: https://reviews.llvm.org/D58466

llvm-svn: 354521
2019-02-20 22:11:39 +00:00
Sanjay Patel 198cc305e9 [CGP] match a special-case of unsigned subtract overflow
This is the 'sub0' (negate) pattern from PR31754:
https://bugs.llvm.org/show_bug.cgi?id=31754
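
A hedged sketch of the negate idiom (names invented, not copied from the tests): 0 - x overflows an unsigned subtract exactly when x is nonzero, so the negation and the zero check can share one intrinsic call:

%neg     = sub i32 0, %x
%nonzero = icmp ne i32 %x, 0
=>
%res     = call { i32, i1 } @llvm.usub.with.overflow.i32(i32 0, i32 %x)
%neg     = extractvalue { i32, i1 } %res, 0
%nonzero = extractvalue { i32, i1 } %res, 1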

llvm-svn: 354519
2019-02-20 21:23:04 +00:00
Nirav Dave 48cf37b55c [DAGCombine] Generalize Dead Store to overlapping stores.
Summary:
Remove stores that are immediately overwritten by larger
stores.
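
An IR-level sketch of the idea (the combine itself works on DAG store chains; names invented):

store i8 1, i8* %p8
%p = bitcast i8* %p8 to i32*
store i32 0, i32* %p         ; completely overwrites the earlier narrow store
; the earlier i8 store is dead and can be removed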

Reviewers: courbet, rnk

Reviewed By: rnk

Subscribers: javed.absar, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D58467

llvm-svn: 354518
2019-02-20 21:07:50 +00:00
Tom Stellard 79b5c3842b AMDGPU/GlobalISel: Move SMRD selection logic to TableGen
Reviewers: arsenm

Reviewed By: arsenm

Subscribers: volkan, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D52922

llvm-svn: 354516
2019-02-20 21:02:37 +00:00
Craig Topper 8d9c224a8c [SelectionDAG] Teach GetDemandedBits to look at the known zeros of the LHS when handling ISD::AND
If the LHS has known zeros, then the RHS immediate mask might have been simplified to remove those bits.

This patch adds a call to computeKnownBits to get the known zeroes to handle that possibility. I left an early out to skip the call if all of the demanded bits are set in the mask.

Differential Revision: https://reviews.llvm.org/D58464

llvm-svn: 354514
2019-02-20 20:52:26 +00:00