Commit Graph

Matt Arsenault 67ec8744d7 llc: Change behavior of -mattr with existing attribute
Append this to the existing target-features attribute on the function.

Some flags ignore existing attributes, and some overwrite them. Move
towards consistently respecting existing attributes if present. Since
target features act as a state machine on their own, append to the
function attribute. The backend default added feature list, function
attributes, and -mattr will all be appended together, and the later
features can individually toggle the earlier settings.
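
A minimal sketch of that last-one-wins behavior (my own illustration in
Python, not llc's implementation): the combined feature string is scanned in
order, and later +/- entries toggle earlier ones.

    # Hypothetical helper: resolve a combined "+feat,-feat,..." string,
    # where later entries override earlier ones.
    def resolve_features(feature_string):
        enabled = {}
        for entry in feature_string.split(","):
            entry = entry.strip()
            if not entry:
                continue
            sign, name = entry[0], entry[1:]
            enabled[name] = (sign == "+")
        return enabled

    # Backend defaults, then the function attribute, then -mattr, in order:
    print(resolve_features("+sse2,+avx,-avx"))  # {'sse2': True, 'avx': False}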
2020-01-15 19:46:01 -05:00
Stanislav Mekhanoshin 8b417dd3d6 Process BUNDLE in tail duplication
When tail duplication estimates the size of a tail it uses the instruction
count. Account for the number of instructions in a bundle too.

Differential Revision: https://reviews.llvm.org/D72783
2020-01-15 15:46:57 -08:00
Vedant Kumar 360abb7ee5 [CodeExtractor] Transfer debug info to extracted function
After extracting, fix up debug info in both the old and new functions by

1) Pointing line locations and debug intrinsics to the new subprogram
   scope, and

2) Deleting intrinsics which point to values outside of the new
   function.

Depends on https://reviews.llvm.org/D72795.

Testing: check-llvm, check-clang, a build of LNT in the `-Os -g` config
with "-mllvm -hot-cold-split=1" set, and end-to-end debugging of a toy
program which undergoes splitting to verify that lldb can find
variables, single step, etc. in extracted code.

rdar://45507940

Differential Revision: https://reviews.llvm.org/D72801
2020-01-15 15:38:36 -08:00
Matt Arsenault 711a17afaf AMDGPU/GlobalISel: Select exp with patterns
This does produce slightly different code. Now a unique IMPLICIT_DEF
is emitted for each of the implicit_def operands, rather than reusing
the same one.
2020-01-15 18:33:15 -05:00
Matt Arsenault 25e9938a45 GlobalISel: Handle more cases of G_SEXT narrowing
This now develops the same problem G_ZEXT/G_ANYEXT have where the
requested type is assumed to be the source type. This will be fixed
separately by creating intermediate merges.
2020-01-15 18:33:15 -05:00
Vedant Kumar 5aeb6798f2 [test] Move call-site-entry-linking.test into test/tools/dsymutil/X86
This should fix a failure on the clang-cmake-armv7-quick bot.
2020-01-15 14:19:55 -08:00
Vedant Kumar f0120556c7 [DWARF] Emit DW_AT_call_return_pc as an address
This reverts D53469, which changed llvm's DWARF emission to emit
DW_AT_call_return_pc as a function-local offset. Such an encoding is not
compatible with post-link block re-ordering tools and isn't standards-
compliant.

In addition to reverting back to the original DW_AT_call_return_pc
encoding, teach lldb how to fix up DW_AT_call_return_pc when the address
comes from an object file pointed-to by a debug map. While doing this I
noticed that lldb's support for tail calls that cross a DSO/object file
boundary wasn't covered, so I added tests for that. This latter case
exercises the newly added return PC fixup.

The dsymutil changes in this patch were originally included in D49887:
the associated test should be sufficient to test DW_AT_call_return_pc
encoding purely on the llvm side.

Differential Revision: https://reviews.llvm.org/D72489
2020-01-15 13:02:23 -08:00
Craig Topper d6a9b7e589 [Mips] Add FileCheck to a test that just tested for a crash.
I believe the generated code here can suffer from double rounding.
So I wanted to capture the existing codegen so we can make
decisions about how to fix it.
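
For reference, a self-contained sketch (my own illustration, not the Mips
codegen) of how double rounding can change a result when an integer is
narrowed to f32 via f64 instead of directly:

    def round_to_mantissa(x, bits):
        # Round the positive integer x to the nearest value representable
        # with 'bits' mantissa bits (round-half-to-even), kept as an exact int.
        shift = max(x.bit_length() - bits, 0)
        if shift == 0:
            return x
        q, r = divmod(x, 1 << shift)
        half = 1 << (shift - 1)
        if r > half or (r == half and q & 1):
            q += 1
        return q << shift

    x = (1 << 54) + (1 << 30) + 1
    direct = round_to_mantissa(x, 24)                          # i64 -> f32
    via_f64 = round_to_mantissa(round_to_mantissa(x, 53), 24)  # i64 -> f64 -> f32
    print(direct == via_f64)  # False: rounding through f64 changes the result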
2020-01-15 10:29:56 -08:00
Amara Emerson 2e39ea726e Revert "Revert rG6078f2fedcac5797ac39ee5ef3fd7a35ef1202d5 - "[AArch64][GlobalISel]: Support @llvm.{return,frame}address selection.""
The original change wasn't constraining the operand regclasses which broke EXPENSIVE_CHECKS.
2020-01-15 10:13:11 -08:00
Mark Murray da9d57d2c2 [ARM][MVE][Intrinsics] Add VMINAQ, VMINNMAQ, VMAXAQ, VMAXNMAQ intrinsics.
Summary: Add VMINAQ, VMINNMAQ, VMAXAQ, VMAXNMAQ intrinsics and unit tests.

Reviewers: simon_tatham, miyuki, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72761
2020-01-15 17:20:15 +00:00
Teresa Johnson 76b92cc7c1 Fix bot by adjusting wildcard matching
I noticed one bot failure due to
24a00ef240 because the wildcard matching
was not working as intended; fixed it to act similarly to other checks of
CGSCCToFunctionPassAdaptor.
2020-01-15 08:37:15 -08:00
evgeny 10cadee5ce [ThinLTO] Always import constants
This patch imports constant variables even when they can't be internalized
(which results in promotion). This offers some extra constant folding
opportunities.

Differential revision: https://reviews.llvm.org/D70404
2020-01-15 19:29:01 +03:00
Arkady Shlykov 3f3017e162 [Loop Peeling] Add possibility to enable peeling on loop nests.
Summary:
The current peeling implementation bails out in the case of loop nests.
The patch introduces a field in the TargetTransformInfo structure that
certain targets can use to relax the constraints if it's
profitable (disabled by default).
An additional option is also added to enable peeling manually for
experimenting and testing purposes.
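
As a rough illustration (Python pseudocode, not tied to the new TTI hook),
peeling one iteration of the outer loop of a nest duplicates the entire inner
loop for that iteration:

    def original(n, m, body):
        for i in range(n):
            for j in range(m):
                body(i, j)

    def outer_loop_peeled_once(n, m, body):
        # The first outer iteration, including its whole inner loop, is
        # duplicated in front of the remaining nest.
        if n > 0:
            for j in range(m):
                body(0, j)
        for i in range(1, n):
            for j in range(m):
                body(i, j)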

Reviewers: fhahn, lebedev.ri, xbolva00

Reviewed By: xbolva00

Subscribers: xbolva00, hiraditya, zzheng, llvm-commits

Differential Revision: https://reviews.llvm.org/D70304
2020-01-15 08:25:21 -08:00
Sanjay Patel 3180af4362 [InstCombine] reassociate fsub+fsub into fsub+fadd
As discussed in the motivating PR44509:
https://bugs.llvm.org/show_bug.cgi?id=44509

...we can end up with worse code using fast-math than without.
This is because the reassociate pass greedily transforms fsub
into fneg/fadd and apparently (based on the regression tests
seen here) expects instcombine to clean that up if it wasn't
profitable. But we were missing this fold:

(X - Y) - Z --> X - (Y + Z)
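
A quick numeric illustration (my own; the rewrite is only legal under
fast-math reassociation, since finite-precision results can differ):

    x, y, z = 10.0, 3.0, 4.0
    print((x - y) - z)   # 3.0: fsub + fsub
    print(x - (y + z))   # 3.0: fsub + fadd, algebraically the same value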

There's another, more specific case that I think we should
handle as shown in the "fake" fneg test (but missed with a real
fneg), but that's another patch. That may be tricky to get
right without conflicting with existing transforms for fneg.

Differential Revision: https://reviews.llvm.org/D72521
2020-01-15 11:14:13 -05:00
Hubert Tong 63b428e386 DWARFDebugLine.cpp: Format unknown line number standard opcodes
Summary:
This patch implements `formatv()` formatting for `dwarf::LineNumberOps`
and makes use of it for the `llvm-dwarfdump --debug-line` dump.

Previously, unknown line number standard opcodes would lead to undefined
behaviour. The code would attempt to format the data pointer of an empty
`StringRef` (a null pointer) using `%s`. According to the description
for `format()`, use of that interface carries the "risk of `printf`".
Passing a null pointer in place of an array to a C library function
results in undefined behaviour.

Reviewers: jhenderson, daltenty, stevewan

Reviewed By: jhenderson

Subscribers: aprantl, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72369
2020-01-15 10:45:50 -05:00
Georgii Rymar 66a35d330b [llvm-readobj][test] - Cleanup SHT_RELR sections testing.
After recent changes (D71872) in yaml2obj, it is possible to clean up the
testing of SHT_RELR sections.

Differential revision: https://reviews.llvm.org/D71874
2020-01-15 18:40:01 +03:00
Teresa Johnson 24a00ef240 Restore "[ThinLTO] Add additional ThinLTO pipeline testing with new PM"
This restores 2af97be802 (reverted at
6288f86e87), with all the fixes I had
applied at the time, along with a new fix for non-determinism in the
ordering of a couple of passes due to being accessed as parameters on
the same call.

I've also added --dump-input=fail to the new tests so I can more
thoroughly fix any additional failures.
2020-01-15 07:33:08 -08:00
Matt Arsenault 936483fb7d GlobalISel: Implement lower for G_BITCAST
Bitcast only really applies between scalars and vectors. Implement as
an unmerge and remerge. The test needs to tolerate failure since one
of the unmerges currently fails to legalize.
2020-01-15 08:58:58 -05:00
Matt Arsenault bd7658a212 AMDGPU: Partially directly select llvm.amdgcn.interp.p1.f16
The 16 bank LDS case is complicated due to using multiple
instructions. If I attempt to write a pattern for it, the generated
selector incorrectly places the copy to m0 after the first
instruction, so that needs to be separately addressed.

Also fix not gluing the copy to m0 to the second operation in the
second half of the 16 bank lowering.
2020-01-15 08:58:58 -05:00
Matt Arsenault 91715617ad GlobalISel: Fix narrowScalar for G_ANYEXT results
This is nearly the same as G_ZEXT.
2020-01-15 08:58:57 -05:00
Luís Marques 46e3edcc2c [RISCV] Fix test for inline asm z constraint modifier
Summary: Use an `i` constraint in the test, to correctly trigger the code for
handling the `z` constraint modifier.

Reviewers: asb, lenary, jrtc27
Reviewed By: lenary, jrtc27
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72134
2020-01-15 13:50:50 +00:00
Nemanja Ivanovic 9c64f04df8 [PowerPC] Legalize saturating vector add/sub
These intrinsics and the corresponding ISD nodes were recently added. PPC has
instructions that do this for vectors. Legalize them and add patterns to emit
the saturating instructions.
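
For reference, a sketch of the per-lane semantics being legalized (my own
Python rendering of the generic saturating-add definitions; the subtract
forms clamp analogously):

    def sadd_sat(a, b, bits=32):
        lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
        return max(lo, min(hi, a + b))

    def uadd_sat(a, b, bits=32):
        return min((1 << bits) - 1, max(0, a + b))

    # A saturating vector add applies this lane by lane.
    print(sadd_sat(2**31 - 1, 5))   # 2147483647: clamped instead of wrapping
    print(uadd_sat(2**32 - 1, 1))   # 4294967295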

Differential revision: https://reviews.llvm.org/D71940
2020-01-15 07:00:38 -06:00
Simon Pilgrim e26a78e708 Revert rG6078f2fedcac5797ac39ee5ef3fd7a35ef1202d5 - "[AArch64][GlobalISel]: Support @llvm.{return,frame}address selection."
These intrinsics expand to a variable number of instructions so just like in
ISelLowering.cpp we use custom code to deal with them.

Committing Tim's original patch.

Differential Revision: https://reviews.llvm.org/D65656
----
Breaks EXPENSIVE_CHECKS builds.
2020-01-15 12:37:37 +00:00
Zakk Chen 7bc58a779a [RISCV] Support ABI checking with per function target-features
If users don't specify -mattr, the default target features come
from the IR attribute.

Reviewers: lenary, asb

Reviewed By: lenary, asb

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70837
2020-01-15 04:35:01 -08:00
Zakk Chen 3bc2860e92 Revert "[RISCV] Support ABI checking with per function target-features"
This reverts commit 109e4d12ed.
2020-01-15 04:32:57 -08:00
Arkady Shlykov 019c8d9d15 [NFC] Adjust test cases numbering, test commit.
Summary:
Test case test14 is missing; adjust the numbering to have a consecutive range.
This is also a test commit to verify commit access.
2020-01-15 03:44:57 -08:00
Georgii Rymar ca6f616532 Revert "[yaml2obj/obj2yaml] - Add support for SHT_RELR sections."
This reverts commit 46d11e30ee.

It broke bots. E.g. http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-ubuntu-fast/builds/60744
2020-01-15 14:19:00 +03:00
Cullen Rhodes 93a4dede3a [AArch64][SVE] Add ptest intrinsics
Summary:
Implements the following intrinsics:

    * @llvm.aarch64.sve.ptest.any
    * @llvm.aarch64.sve.ptest.first
    * @llvm.aarch64.sve.ptest.last

Reviewers: sdesmalen, efriedma, dancgr, mgudim, cameron.mcinally, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72398
2020-01-15 11:15:01 +00:00
Georgii Rymar 46d11e30ee [yaml2obj/obj2yaml] - Add support for SHT_RELR sections.
The encoded sequence of Elf*_Relr entries in a SHT_RELR section looks
like [ AAAAAAAA BBBBBBB1 BBBBBBB1 ... AAAAAAAA BBBBBBB1 ... ],
i.e. it starts with an address, followed by any number of bitmaps. The address
entry encodes 1 relocation. The subsequent bitmap entries encode up to 63
(31 on 32-bit targets) relocations each, at subsequent offsets following the
last address entry.
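
A minimal decoder sketch (my own, assuming 64-bit Elf64_Relr entries; not the
yaml2obj code) that expands such a sequence into relocation offsets:

    def decode_relr(entries, wordsize=8):
        offsets = []
        base = 0
        for entry in entries:
            if entry & 1 == 0:
                # An even entry is an address and encodes one relocation.
                offsets.append(entry)
                base = entry + wordsize
            else:
                # An odd entry is a bitmap: bits 1..63 mark relocations at the
                # next word-sized slots after the last address entry.
                for bit in range(1, 8 * wordsize):
                    if entry & (1 << bit):
                        offsets.append(base + (bit - 1) * wordsize)
                base += (8 * wordsize - 1) * wordsize
        return offsets

    # One address entry followed by one bitmap marking two further slots:
    print(decode_relr([0x1000, 0b1011]))  # [4096, 4104, 4120]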

More information is here:
https://github.com/llvm-mirror/llvm/blob/master/lib/Object/ELF.cpp#L272

This patch adds support for these sections.

Differential revision: https://reviews.llvm.org/D71872
2020-01-15 13:54:08 +03:00
Zakk Chen 109e4d12ed [RISCV] Support ABI checking with per function target-features
If users don't specify -mattr, the default target features come
from the IR attribute.
2020-01-15 02:30:43 -08:00
cdevadas 0dc6c249bf [AMDGPU] Invert the handling of skip insertion.
The current implementation of skip insertion (SIInsertSkip) makes it a
mandatory pass required for correctness. Initially, the idea was to
have an optional pass. This patch inserts the s_cbranch_execz upfront
during SILowerControlFlow to skip over the sections of code when no
lanes are active. Later, SIRemoveShortExecBranches removes the skips
for short branches, unless there is a side effect and the skip branch is
really necessary.

This new pass will replace the handling of skip insertion in the
existing SIInsertSkip Pass.

Differential revision: https://reviews.llvm.org/D68092
2020-01-15 15:18:16 +05:30
Kazushi (Jam) Marukawa 064859bde7 [VE] Minimal codegen for empty functions
Summary:
This patch implements minimal VE code generation for empty function bodies (no args, no value return).

Contents

* Empty function code generation test.
* Minimal function prologue & epilogue emission
* Instruction formats and instruction definitions as far as required for the empty function prologue & epilogue.
* I64 register class definitions.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D72598
2020-01-15 09:55:16 +01:00
Craig Topper be8f217b18 [X86] Don't call LowerUINT_TO_FP_i32 for i32->f80 on 32-bit targets with sse2.
We were performing an emulated i32->f64 in the SSE registers, then
storing that value to memory and doing an extload into the X87
domain.

After this patch we'll now just store the i32 to memory along
with an i32 0. Then do a 64-bit FILD to f80 completely in the X87
unit. This matches what we do without SSE.
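
The key observation, as I read it (my own sketch, not the emitted code): a
u32 stored next to an i32 zero reads back as a non-negative signed i64 with
the same value, so a plain signed FILD converts it exactly.

    import struct

    x = 0xFFFFFFFF  # largest u32; negative if treated as a signed i32

    # Store the i32 value together with an i32 zero, then reload the
    # 8 bytes as one signed i64 (little-endian, as on x86):
    memory = struct.pack("<Ii", x, 0)
    (as_signed_i64,) = struct.unpack("<q", memory)

    assert as_signed_i64 == x  # non-negative, so a signed FILD is exact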
2020-01-15 00:43:07 -08:00
David Green 1b264a8263 [ARM] Regenerate MVE tests. NFC
The mve-phireg.ll test no longer really tests what it was added for,
but the original case was fairly complex. I've left the test in as a
general codegen test.
2020-01-15 08:10:38 +00:00
Hideto Ueno 188f9a348d [Attributor] AAValueConstantRange: Value range analysis using constant range
Summary:
This patch introduces `AAValueConstantRange`, which answers the possible range of an integer value at a specific program point.
One of the motivations is propagating existing `range` metadata. (I think we need to change the situation where `range` metadata cannot be attached to an Argument.)

The state is a tuple of `ConstantRange` and it is initialized to (known, assumed) = ([-∞, +∞], empty).

Currently, AAValueConstantRange is created in the `getAssumedConstant` method when `AAValueSimplify` returns `nullptr` (worst state).

Supported
 - BinaryOperator(add, sub, ...)
 - CmpInst(icmp eq, ...)
 - !range metadata

`AAValueConstantRange` is not intended to extend to polyhedral range value analysis.
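
A toy interval-arithmetic sketch (not the Attributor code; the class and
method names are made up) of the kind of facts such an analysis can derive
for an add and an icmp:

    class Range:
        # Closed integer interval [lo, hi].
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def add(self, other):
            return Range(self.lo + other.lo, self.hi + other.hi)

        def icmp_eq_is_false(self, other):
            # eq is provably false when the intervals do not overlap.
            return self.hi < other.lo or other.hi < self.lo

    # %x has !range [0, 10); %y = add %x, 100  =>  %y is in [100, 109]
    x = Range(0, 9)
    y = x.add(Range(100, 100))
    print(y.lo, y.hi)                        # 100 109
    print(y.icmp_eq_is_false(Range(0, 9)))   # True: icmp eq %y, %x is false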

Reviewers: jdoerfert, sstefan1

Reviewed By: jdoerfert

Subscribers: phosek, davezarzycki, baziotis, hiraditya, javed.absar, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71620
2020-01-15 16:34:23 +09:00
Philip Reames 1a7398eca2 [BranchAlign] Add master --x86-branches-within-32B-boundaries flag
This flag was originally part of D70157, but was removed as we carved away pieces of the review. Since we have the nop support checked in, and it appears mature(*), I think it's time to add the master flag. For now, it will default to nop padding, but once the prefix padding support lands, we'll update the defaults.
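
For context, the property the flag's name describes is simple modular
arithmetic; a hypothetical check (my own sketch, not the MC code) for whether
an instruction crosses a 32-byte boundary:

    def crosses_32b_boundary(offset, size):
        # The first and last bytes of the instruction must fall in the same
        # 32-byte-aligned block; otherwise nop padding is needed in front.
        return offset // 32 != (offset + size - 1) // 32

    print(crosses_32b_boundary(30, 2))  # False: bytes 30-31 stay in one block
    print(crosses_32b_boundary(30, 6))  # True: bytes 30-35 straddle offset 32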

(*) I can now confirm that downstream testing of the changes which have landed to date - nop padding and compiler support for suppressions - is passing all of the functional testing we've thrown at it. There might still be something lurking, but we've gotten enough coverage to be confident of the basic approach.

Note that the new flag can be used either when assembling an .s file, or when using the integrated assembler directly from the compiler. The latter will use all of the suppression mechanisms and should always generate correct code. We don't yet have assembly syntax for the suppressions, so passing this directly to the assembler with a raw .s file may result in broken code. Use at your own risk.

Also note that this isn't the wiring for the clang option. I think the most recent review for that is D72227, but I've lost track, so that might be off.

Differential Revision: https://reviews.llvm.org/D72738
2020-01-14 18:17:53 -08:00
Reid Kleckner 40cd26c700 [Win64] Handle FP arguments more gracefully under -mno-sse
Pass small FP values in GPRs or stack memory according to the normal
convention. This is what gcc -mno-sse does on Win64.

I adjusted the conditions under which we emit an error to check if the
argument or return value would be passed in an XMM register when SSE is
disabled. This has a side effect of no longer emitting an error for FP
arguments marked 'inreg' when targeting x86 with SSE disabled. Our
calling convention logic was already assigning it to FP0/FP1, and then
we emitted this error. That seems unnecessary, we can ignore 'inreg' and
compile it without SSE.

Reviewers: jyknight, aemerson

Differential Revision: https://reviews.llvm.org/D70465
2020-01-14 17:19:35 -08:00
Michael Liao 65c8abb14e [amdgpu] Fix typos in a test case.
- There are typos introduced due to a merge.
2020-01-14 20:08:39 -05:00
Craig Topper 57eb56b839 [X86] Swap the 0 and the fudge factor in the constant pool for the 32-bit mode i64->f32/f64/f80 uint_to_fp algorithm.
This allows us to generate better code for selecting the fixup
to load.

Previously when the sign was set we had to load offset 0. And
when it was clear we had to load offset 4. This required a testl,
setns, zero extend, and finally a mul by 4. By switching the offsets
we can just shift the sign bit into the lsb and multiply it by 4.
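
A small arithmetic sketch (hypothetical, not the emitted assembly) of why the
swap helps: the constant-pool offset becomes the sign bit shifted into the
lsb and scaled by 4, rather than a compare-and-select sequence.

    def fixup_offset_old(x):
        # Old layout: sign set -> load offset 0, sign clear -> load offset 4.
        # Required a testl, setns, zero extend and a multiply by 4.
        return (1 if x >= 0 else 0) * 4

    def fixup_offset_new(x):
        # New layout: shift the i64's sign bit into the lsb and scale by 4.
        return ((x >> 63) & 1) * 4

    for x in (5, -5):
        print(fixup_offset_old(x), fixup_offset_new(x))
    # Prints "4 0" then "0 4": the offsets differ only because the two
    # constant-pool entries were swapped, so each x still loads the same fixup.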
2020-01-14 17:05:23 -08:00
Michael Liao 01a4b83154 [codegen,amdgpu] Enhance MIR DIE and re-arrange it for AMDGPU.
Summary:
- `dead-mi-elimination` assumes MIR in the SSA form and cannot be
  arranged after phi elimination or DeSSA. It's enhanced to handle the
  dead register definition by skipping the use check on it. Once a register
  def is `dead`, all its uses, if any, should be `undef`.
- Re-arrange the DIE in RA phase for AMDGPU by placing it directly after
  `detect-dead-lanes`.
- Many relevant tests are refined due to different register assignment.

Reviewers: rampitec, qcolombet, sunfish

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72709
2020-01-14 19:26:15 -05:00
Michael Liao 8d07f8d98c [DAGCombine] Replace `getIntPtrConstant()` with `getVectorIdxTy()`.
- Prefer `getVectorIdxTy()` as the index operand type for
  `EXTRACT_SUBVECTOR` as targets expect different types by overloading
  `getVectorIdxTy()`.
2020-01-14 17:03:05 -05:00
Amara Emerson 6078f2fedc [AArch64][GlobalISel]: Support @llvm.{return,frame}address selection.
These intrinsics expand to a variable number of instructions so just like in
ISelLowering.cpp we use custom code to deal with them.

Committing Tim's original patch.

Differential Revision: https://reviews.llvm.org/D65656
2020-01-14 13:41:21 -08:00
Nikita Popov 04e586151e [InstCombine] Fix worklist management when removing guard intrinsic
When multiple guard intrinsics are merged into one, currently the
result of eraseInstFromFunction() is returned -- however, this
should only be done if the current instruction is being removed.
In this case we're removing a different instruction and should
instead report that the current one has been modified by returning it.

For this test case, this reduces the number of instcombine iterations
from 5 to 2 (the minimum possible).

Differential Revision: https://reviews.llvm.org/D72558
2020-01-14 21:47:48 +01:00
Danilo Carvalho Grael 26d96126a0 [SVE] Add patterns for MUL immediate instruction.
Summary: Add the missing MUL pattern for integer immediate instructions.

Reviewers: sdesmalen, huntergr, efriedma, c-rhodes, kmclaughlin

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72654
2020-01-14 15:26:19 -05:00
Nikita Popov 410331869d [NewPM] Port MergeFunctions pass
This ports the MergeFunctions pass to the NewPM. This was rather
straightforward, as no analyses are used.

Additionally MergeFunctions needs to be conditionally enabled in
the PassBuilder, but I left that part out of this patch.

Differential Revision: https://reviews.llvm.org/D72537
2020-01-14 20:55:41 +01:00
Nikita Popov 65c0805be5 [InstCombine] Fix infinite loop due to bitcast <-> phi transforms
Fix for https://bugs.llvm.org/show_bug.cgi?id=44245.

The optimizeBitCastFromPhi() and FoldPHIArgOpIntoPHI() end up
fighting against each other, because optimizeBitCastFromPhi()
assumes that bitcasts of loads will get folded. This doesn't
happen here, because a dangling phi node prevents the one-use
fold in https://github.com/llvm/llvm-project/blob/master/llvm/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp#L620-L628 from triggering.

This patch fixes the issue by explicitly performing the load
combine as part of the bitcast of phi transform. Other attempts
to force the load to be combined first were ultimately too
unreliable.

Differential Revision: https://reviews.llvm.org/D71164
2020-01-14 20:45:13 +01:00
Nikita Popov 652cd7c100 [InstCombine] Fix user iterator invalidation in bitcast of phi transform
This fixes the issue encountered in D71164. Instead of using a
range-based for, manually iterate over the users and advance the
iterator beforehand, so we do not skip any users due to iterator
invalidation.

Differential Revision: https://reviews.llvm.org/D72657
2020-01-14 20:38:10 +01:00
Nikita Popov fa63234093 [InstCombine] Add test for iterator invalidation bug; NFC 2020-01-14 20:38:10 +01:00
Sanjay Patel 57cb468514 [InstCombine] add test for possible cast-of-select transform; NFC 2020-01-14 14:23:14 -05:00
Jay Foad b777e551f0 [MachineScheduler] Reduce reordering due to mem op clustering
Summary:
Mem op clustering adds a weak edge in the DAG between two loads or
stores that should be clustered, but the direction of this edge is
pretty arbitrary (it depends on the sort order of MemOpInfo, which
represents the operands of a load or store). This often means that two
loads or stores will get reordered even if they would naturally have
been scheduled together anyway, which leads to test case churn and goes
against the scheduler's "do no harm" philosophy.

The fix makes sure that the direction of the edge always matches the
original code order of the instructions.
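
A rough sketch of the change in spirit (hypothetical names, not the
MachineScheduler API): orient the weak cluster edge by original program
order rather than by the MemOpInfo sort order.

    def add_cluster_edge(a, b):
        # a and b are (original_order, name) pairs for two memory ops that
        # should be clustered. Orient the weak edge by original code order,
        # not by whatever order the MemOpInfo sort handed them to us in.
        pred, succ = (a, b) if a[0] < b[0] else (b, a)
        return (pred[1], succ[1])  # pred is kept before succ when clustering

    # Even if sorting yields (load2, load1), the edge follows code order:
    print(add_cluster_edge((7, "load2"), (3, "load1")))  # ('load1', 'load2')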

Reviewers: atrick, MatzeB, arsenm, rampitec, t.p.northover

Subscribers: jvesely, wdng, nhaehnle, kristof.beyls, hiraditya, javed.absar, arphaman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72706
2020-01-14 19:19:02 +00:00