Commit Graph

Roman Lebedev 0a51e1f66d [InstCombine] dropRedundantMaskingOfLeftShiftInput(): pat. c/d/e with mask (PR42563)
Summary:
If we have a pattern `(x & (-1 >> maskNbits)) << shiftNbits`,
we already have a fold that will drop the `& (-1 >> maskNbits)`
mask iff `(shiftNbits-maskNbits) s>= 0` (i.e. `shiftNbits u>= maskNbits`).

So even if `(shiftNbits-maskNbits) s< 0`, we can still
fold; we just need to apply a **constant** mask afterwards:
```
Name: c, normal+mask
  %t0 = lshr i32 -1, C1
  %t1 = and i32 %t0, %x
  %r = shl i32 %t1, C2
=>
  %n0 = shl i32 %x, C2
  %n1 = i32 ((-(C2-C1))+32)
  %n2 = zext i32 %n1 to i64
  %n3 = lshr i64 -1, %n2
  %n4 = trunc i64 %n3 to i32
  %r = and i32 %n0, %n4
```
https://rise4fun.com/Alive/gslRa
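
For illustration, a hypothetical concrete instance of pattern c with
C1 = 8 and C2 = 4 (i32; my own example, not one of the patch's tests).
Here `shiftNbits u< maskNbits`, so the pre-existing fold would not fire,
but we can still fold by applying the constant mask afterwards:
```
define i32 @src(i32 %x) {
  %t0 = lshr i32 -1, 8        ; mask = 0x00FFFFFF
  %t1 = and i32 %t0, %x
  %r = shl i32 %t1, 4
  ret i32 %r
}

define i32 @tgt(i32 %x) {
  %n0 = shl i32 %x, 4
  ; constant mask: -1 u>> ((-(C2-C1))+32) = -1 u>> 36 in i64,
  ; truncated to i32 = 0x0FFFFFFF
  %r = and i32 %n0, 268435455
  ret i32 %r
}
```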

Naturally, the old `%masked` will have to be one-use.
This is not valid for pattern f, where "masking" is done via `ashr`.

https://bugs.llvm.org/show_bug.cgi?id=42563

Reviewers: spatel, nikic, xbolva00

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67725

llvm-svn: 372630
2019-09-23 17:04:28 +00:00
Roman Lebedev b4a1d8a84c [InstCombine] dropRedundantMaskingOfLeftShiftInput(): pat. a/b with mask (PR42563)
Summary:
And this is **finally** the interesting part of that fold!

If we have a pattern `(x & (~(-1 << maskNbits))) << shiftNbits`,
we already have a fold that will drop the `& (~(-1 << maskNbits))`
mask iff `(maskNbits+shiftNbits) u>= bitwidth(x)`.
But that is too restrictive; there's a more general fold here:

In this pattern, `(maskNbits+shiftNbits)` actually corresponds
to the number of low bits that will remain in the final value.
So even if `(maskNbits+shiftNbits) u< bitwidth(x)`, we can still
fold; we just need to apply a **constant** mask afterwards:
```
Name: a, normal+mask
  %onebit = shl i32 -1, C1
  %mask = xor i32 %onebit, -1
  %masked = and i32 %mask, %x
  %r = shl i32 %masked, C2
=>
  %n0 = shl i32 %x, C2
  %n1 = add i32 C1, C2
  %n2 = zext i32 %n1 to i64
  %n3 = shl i64 -1, %n2
  %n4 = xor i64 %n3, -1
  %n5 = trunc i64 %n4 to i32
  %r = and i32 %n0, %n5
```
https://rise4fun.com/Alive/F5R
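
For illustration, a hypothetical concrete instance of pattern a with
C1 = 8 and C2 = 4 (i32; my own example, not one of the patch's tests).
Here `(maskNbits+shiftNbits) u< 32`, so the pre-existing fold would not
fire, but we can still fold and mask with `~(-1 << (8+4)) = 0xFFF`:
```
define i32 @src(i32 %x) {
  %onebit = shl i32 -1, 8
  %mask = xor i32 %onebit, -1   ; 0x000000FF
  %masked = and i32 %mask, %x
  %r = shl i32 %masked, 4
  ret i32 %r
}

define i32 @tgt(i32 %x) {
  %n0 = shl i32 %x, 4
  %r = and i32 %n0, 4095        ; 0x00000FFF
  ret i32 %r
}
```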

Naturally, the old `%masked` will have to be one-use.
A similar fold exists for patterns c, d, and e; I will post that patch later.

https://bugs.llvm.org/show_bug.cgi?id=42563

Reviewers: spatel, nikic, xbolva00

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67677

llvm-svn: 372629
2019-09-23 17:04:14 +00:00
Sanjay Patel 7414151929 [BreakFalseDeps] ignore function with minsize attribute
This came up in the x86-specific report:
https://bugs.llvm.org/show_bug.cgi?id=43239
...but it is a general problem for the BreakFalseDeps pass.
Dependencies may be broken by adding some other instruction,
so that should be avoided if the overall goal is to minimize size.

Differential Revision: https://reviews.llvm.org/D67363

llvm-svn: 372628
2019-09-23 17:01:01 +00:00
Alexey Bataev 6a278d9073 [SLP] Fix for PR31847: Assertion failed: (isLoopInvariant(Operands[i], L) && "SCEVAddRecExpr operand is not loop-invariant!")
Summary:
Initially the SLP vectorizer replaced all going-to-be-vectorized
instructions with Undef values. That may break ScalarEvolution and may
cause a crash.
Reworked the SLP vectorizer so that it does not replace vectorized
instructions with UndefValue anymore. Instead, vectorized instructions are
marked for deletion inside the BoUpSLP class and deleted upon class
destruction.

Reviewers: mzolotukhin, mkuper, hfinkel, RKSimon, davide, spatel

Subscribers: RKSimon, Gerolf, anemet, hans, majnemer, llvm-commits, sanjoy

Differential Revision: https://reviews.llvm.org/D29641

llvm-svn: 372626
2019-09-23 16:25:03 +00:00
Dmitry Preobrazhensky 6784a3cd79 [AMDGPU][MC] Corrected handling of relocatable expressions
See bug 43359: https://bugs.llvm.org//show_bug.cgi?id=43359

Reviewers: rampitec

Differential Revision: https://reviews.llvm.org/D67829

llvm-svn: 372622
2019-09-23 15:41:51 +00:00
Krzysztof Parzyszek f97fdf5792 [Hexagon] Bitcast v4i16 to v8i8, unify no-op casts between scalar and HVX
llvm-svn: 372616
2019-09-23 14:33:27 +00:00
Sanjay Patel 31b9dfe23f [x86] fix assert with horizontal math + broadcast of vector (PR43402)
https://bugs.llvm.org/show_bug.cgi?id=43402

llvm-svn: 372606
2019-09-23 13:30:23 +00:00
Nico Weber da298aa913 llvm-undname: Add support for demangling typeinfo names
typeinfo names aren't symbols but rather the string constant contents
stored in compiler-generated typeinfo objects; llvm-cxxfilt
can nevertheless demangle these for Itanium names.

In the MSVC ABI, these are just a '.' followed by a mangled
type -- this means they don't start with '?' like all MS-mangled
symbols do.

Differential Revision: https://reviews.llvm.org/D67851

llvm-svn: 372602
2019-09-23 13:13:37 +00:00
Djordje Todorovic ead96d73ac Revert "Reland "[utils] Implement the llvm-locstats tool""
This reverts commit rL372554.

llvm-svn: 372580
2019-09-23 11:04:11 +00:00
George Rimar 753f6cff2f [llvm-readobj] - Stop treating ".stack_sizes.*" sections as stack sizes sections.
llvm-readobj currently handles .stack_sizes.* (e.g. .stack_sizes.foo)
as a normal stack sizes section, though MC does not produce sections with
such names, and linkers do not combine .stack_sizes.* into .stack_sizes.

A mini discussion about this correctness issue is here: https://reviews.llvm.org/D67757#inline-609274
This patch changes the implementation so that now only the '.stack_sizes' name
is accepted as a real stack sizes section.

Differential revision: https://reviews.llvm.org/D67824

llvm-svn: 372578
2019-09-23 10:43:09 +00:00
George Rimar 4e0faa338b [llvm-readobj] - Implement LLVM-style dumping for .stack_sizes sections.
D65313 implemented GNU-style dumping (llvm-readelf).
This one implements LLVM-style dumping (llvm-readobj).

Differential revision: https://reviews.llvm.org/D67834

llvm-svn: 372576
2019-09-23 10:33:19 +00:00
Sam Parker 9feb429a33 [ARM][MVE] Remove old tail predicates
Remove any predicate that we replace with a vctp intrinsic, and try
to remove its operands too. Also look into the exit block to see if
there are any duplicates of the predicates that we've replaced, and
clone the vctp to be used there instead.

Differential Revision: https://reviews.llvm.org/D67709

llvm-svn: 372567
2019-09-23 09:48:25 +00:00
Florian Hahn 3e2fdbee80 [AArch64] support neon_sshl and neon_ushl in performIntrinsicCombine.
Try to generate ushll/sshll for aarch64_neon_ushl/aarch64_neon_sshl,
if their first operand is extended and the second operand is a constant.

Also adds a few tests marked with FIXME, where we can further improve
codegen.

Reviewers: t.p.northover, samparker, dmgreen, anemet

Reviewed By: anemet

Differential Revision: https://reviews.llvm.org/D62308

llvm-svn: 372565
2019-09-23 09:38:53 +00:00
Sam Parker 4ba6d0ded2 [ARM][LowOverheadLoops] Use subs during revert.
Check whether there are any uses or defs between the LoopDec and
LoopEnd. If there are none, then we can use a subs to set the cpsr and
skip generating a cmp.

Differential Revision: https://reviews.llvm.org/D67801

llvm-svn: 372560
2019-09-23 08:57:50 +00:00
Sam Parker 566127e376 [ARM][LowOverheadLoops] Use tBcc when reverting
Check the branch target ranges and use a tBcc instead of t2Bcc when
we can.

Differential Revision: https://reviews.llvm.org/D67796

llvm-svn: 372557
2019-09-23 08:35:31 +00:00
Petar Avramovic c063b0b0d3 [MIPS GlobalISel] VarArg argument lowering, select G_VASTART and vacopy
CC_Mips doesn't accept vararg functions for O32, so we have to explicitly
use CC_Mips_FixedArg.
For lowerCall we now properly figure out whether the callee function is
vararg or not; this has no effect for O32 since we always use
CC_Mips_FixedArg.
For lowering formal arguments we need to copy arguments in registers to the
stack and save a pointer to the start of the argument list into the
MipsMachineFunction object so that G_VASTART can use it during instruction
selection.
For vacopy we need to copy the contents from one vreg to another;
a load and a store are used for that purpose.

Differential Revision: https://reviews.llvm.org/D67756

llvm-svn: 372555
2019-09-23 08:11:41 +00:00
Djordje Todorovic 0e490ae0a9 Reland "[utils] Implement the llvm-locstats tool"
The tool reports verbose output for DWARF debug location coverage. For
each variable or formal parameter DIE, llvm-locstats computes the
percentage of the code section bytes, where the DIE is in scope, that
have a location description. The line 0 shows the number (and the
percentage) of DIEs with no location information, while the line 100
shows the number (and the percentage) of DIEs that have location
information in all of the code section bytes where the variable or
parameter is in scope. The line 50..59 shows the number (and the
percentage) of DIEs whose location information covers between 50 and 59
percent of their scope.

Differential Revision: https://reviews.llvm.org/D66526

llvm-svn: 372554
2019-09-23 07:57:53 +00:00
Craig Topper 03b5a13ee3 [X86] Canonicalize all zeroes vector to RHS in X86DAGToDAGISel::tryVPTESTM.
llvm-svn: 372544
2019-09-23 05:35:23 +00:00
Craig Topper 5e26064c40 [X86] Remove SETEQ/SETNE canonicalization code from LowerIntVSETCC_AVX512 to prevent an infinite loop.
The attached test case would previously infinite-loop after
r365711.

I'm going to move this to X86ISelDAGToDAG.cpp to get the setcc
to match VPTEST in 32-bit mode in a follow up commit.

llvm-svn: 372543
2019-09-23 05:35:20 +00:00
Craig Topper 1f058538e0 [X86] Add 32-bit command line to avx512f-vec-test-testn.ll
llvm-svn: 372542
2019-09-23 05:35:15 +00:00
David Zarzycki a7a515cb77 Prefer AVX512 memcpy when applicable
When AVX512 is available and the preferred vector width is 512-bits or
more, we should prefer AVX512 for memcpy().

https://bugs.llvm.org/show_bug.cgi?id=43240

https://reviews.llvm.org/D67874

llvm-svn: 372540
2019-09-23 05:00:59 +00:00
Craig Topper a533e87792 [X86][SelectionDAGBuilder] Move the hack for handling MMX shift by i32 intrinsics into the X86 backend.
These intrinsics should be shift-by-immediate, but gcc allows any
i32 scalar and clang needs to match that. So we try to detect the
non-constant case and move the data from an integer register to an
MMX register.

Previously this was done by creating a v2i32 build_vector and
bitcast in SelectionDAGBuilder. This had to be done early since
v2i32 isn't a legal type. The bitcast+build_vector would be DAG
combined to X86ISD::MMX_MOVW2D which isel will turn into a
GPR->MMX MOVD.

This commit just moves the whole thing to lowering and emits
the X86ISD::MMX_MOVW2D directly to avoid the illegal type. The
test changes just seem to be due to nodes being linearized in a
different order.

llvm-svn: 372535
2019-09-23 01:05:33 +00:00
Roman Lebedev 7c3d6f5a1b [X86] X86DAGToDAGISel::matchBEXTRFromAndImm(): if can't use BEXTR, fallback to BZHI is profitable (PR43381)
Summary:
PR43381 notes that while we are good at matching `(X >> C1) & C2` as BEXTR/BEXTRI,
we only do that if we either have BEXTRI (TBM),
or if BEXTR is marked as being fast (`-mattr=+fast-bextr`).
In all other cases we don't match.

But that is mainly true only for AMD CPUs.
However, for all the CPUs for which we have sched models,
BZHI is always fast (or the sched models are all bad).

So if we decide that it's unprofitable to emit BEXTR/BEXTRI,
we should consider falling-back to BZHI if it is available,
and follow-up with the shift.

While it's really tempting to do something because it's cool,
it is wise to first think about whether it actually makes sense to do.
We shouldn't just use BZHI because we can, but only if it is beneficial.
In particular, it isn't really worth it if the input is a register,
the mask is small, or we can fold a load.
But it is worth it if the mask does not fit into 32 bits.

(Careful: I don't know much about Intel CPUs; my choice of `-mcpu` may be bad here.)
Thus we manage to fold a load:
https://godbolt.org/z/Er0OQz
Or if we'd end up using BZHI anyways because the mask is large:
https://godbolt.org/z/dBJ_5h
But this isn't actually profitable in the general case;
e.g. here we'd increase the micro-op count
(the register renaming is free; it seems mca does not model that here):
https://godbolt.org/z/k6wFoz
Likewise, not worth it if we just get load folding:
https://godbolt.org/z/1M1deG

https://bugs.llvm.org/show_bug.cgi?id=43381

Reviewers: RKSimon, craig.topper, davezarzycki, spatel

Reviewed By: craig.topper, davezarzycki

Subscribers: andreadb, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67875

llvm-svn: 372532
2019-09-22 22:04:29 +00:00
Roman Lebedev 24159592ca [NFC][X86] Add BEXTR test with load and 33-bit mask (PR43381 / D67875)
llvm-svn: 372524
2019-09-22 19:36:38 +00:00
Craig Topper a1d86857ff [X86] Update commutable EVEX vcmp patterns to use timm instead of imm.
We need to match TargetConstant, not Constant. This was broken
in r372338, but we lacked test coverage.

llvm-svn: 372523
2019-09-22 19:06:13 +00:00
Craig Topper ac84771261 [X86] Add more tests for commuting evex vcmp instructions during isel to fold a load.
Some of the isel patterns were not updated to check for
TargetConstant instead of Constant in r372338.

llvm-svn: 372522
2019-09-22 19:06:08 +00:00
Simon Pilgrim 4d486156e7 [Cost][X86] Add more missing vector truncation costs
The AVX512 cases still need some work to correctly recognise the PMOV truncation cases.

llvm-svn: 372514
2019-09-22 16:46:15 +00:00
Sanjay Patel eb8d39e113 [InstCombine] allow icmp+binop folds before min/max bailout (PR43310)
This has the potential to uncover missed analysis/folds as shown in the
min/max code comment/test, but fewer restrictions on icmp folds should
be better in general to solve cases like:
https://bugs.llvm.org/show_bug.cgi?id=43310

llvm-svn: 372510
2019-09-22 14:31:53 +00:00
Sanjay Patel d2a524288d [InstCombine] add tests for icmp fold hindered by min/max; NFC
llvm-svn: 372509
2019-09-22 14:23:22 +00:00
Simon Pilgrim 665ccbff60 [Cost][X86] Add v2i64 truncation costs
We are missing costs for a lot of truncation cases; I'm hoping to address all the 'zero cost' cases in trunc.ll.

I thought this was a vector widening side effect, but even before this we had some interesting LV decisions (notably over indvars) being made due to these zero costs.

llvm-svn: 372498
2019-09-22 12:04:38 +00:00
Craig Topper 38014c553f [X86] Add memset and memcpy testcases for D67874. NFC
llvm-svn: 372494
2019-09-22 06:52:25 +00:00
Roman Lebedev baf809811b [InstSimplify] simplifyUnsignedRangeCheck(): X >= Y && Y == 0 --> Y == 0
https://rise4fun.com/Alive/v9Y4
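
A minimal illustration of the simplification (my own sketch, not the
patch's test): if `%y == 0` holds, then `%x u>= %y` holds trivially, so
the conjunction collapses to the equality check alone.
```
define i1 @src(i64 %x, i64 %y) {
  %cmp1 = icmp uge i64 %x, %y
  %cmp2 = icmp eq i64 %y, 0
  %r = and i1 %cmp1, %cmp2
  ret i1 %r
}

define i1 @tgt(i64 %x, i64 %y) {
  %r = icmp eq i64 %y, 0
  ret i1 %r
}
```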

llvm-svn: 372491
2019-09-21 22:27:39 +00:00
Roman Lebedev ac4dda8052 [NFC][InstSimplify] Add exhaustive test coverage for simplifyUnsignedRangeCheck().
One case is not handled.

llvm-svn: 372489
2019-09-21 22:27:18 +00:00
Suyog Sarda cd629ea0a8 SROA: Check Total Bits of vector type
While promoting an alloca instruction of vector type,
check the total size in bits of its slices too.
If they don't match, don't promote the alloca instruction.
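
A hypothetical shape of the issue (my own sketch, not the reproducer
from the bug): the alloca's vector type is 64 bits wide, but the store
slice covers only 48 bits, so the slice bits do not match the vector's
total bits and vector promotion must be rejected.
```
define i16 @f(i48 %v) {
  %a = alloca <4 x i16>                 ; 64 bits
  %p = bitcast <4 x i16>* %a to i48*
  store i48 %v, i48* %p                 ; 48-bit slice
  %q = getelementptr inbounds <4 x i16>, <4 x i16>* %a, i32 0, i32 1
  %e = load i16, i16* %q
  ret i16 %e
}
```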

Bug : https://bugs.llvm.org/show_bug.cgi?id=42585

llvm-svn: 372480
2019-09-21 18:16:37 +00:00
Wei Mi eee532cd5f Recommit [SampleFDO] Expose an interface to return the size of a section
or the size of the profile for profile in ExtBinary format.

Fix a test failure on Mac.

[SampleFDO] Expose an interface to return the size of a section or the
size of the profile for profile in ExtBinary format.

Sometimes we want to limit the size of the profile by stripping some functions
with low sample count or by stripping some function names with small text size
from profile symbol list. That requires the profile reader to have the
interfaces returning the size of a section or the size of the total profile.
The patch adds those interfaces.

At the same time, add some dump facility to show the size of each section.

Differential revision: https://reviews.llvm.org/D67726

llvm-svn: 372478
2019-09-21 17:23:55 +00:00
Hideto Ueno 63f6066b53 [Attributor] Implement "norecurse" function attribute deduction
Summary:
This patch introduces `norecurse` function attribute deduction.

`norecurse` will be deduced if the following conditions hold:
* The size of the SCC to which the function belongs equals 1.
* The function doesn't have self-recursion.
* We have `norecurse` for all call sites.

To avoid a large change, the SCC is calculated using scc_iterator during InfoCache initialization for now.
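
An illustrative case where all three conditions hold (my own sketch, not
a test from the patch): `@f` forms an SCC of size 1, is not
self-recursive, and every call site it contains calls a `norecurse`
callee, so `norecurse` can be deduced for `@f`.
```
declare void @g() norecurse

define void @f() {
  call void @g()
  ret void
}
```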

Reviewers: jdoerfert, sstefan1

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67751

llvm-svn: 372475
2019-09-21 15:13:19 +00:00
Roman Lebedev 854b0f0f00 [NFC][X86] Adjust check prefixes in bmi.ll (PR43381)
llvm-svn: 372468
2019-09-21 11:12:55 +00:00
Amara Emerson 9c7d599dec [AArch64][GlobalISel] Implement selection for G_SHL of <2 x i64>
Simple continuation of existing selection support.

llvm-svn: 372467
2019-09-21 09:21:16 +00:00
Amara Emerson a59a886832 [AArch64][GlobalISel] Selection support for G_ASHR of <2 x s64>
Just add an extra case to the existing selection logic.

llvm-svn: 372466
2019-09-21 09:21:13 +00:00
Amara Emerson fae979bc68 [AArch64][GlobalISel] Make <4 x s32> G_ASHR and G_LSHR legal.
llvm-svn: 372465
2019-09-21 09:21:10 +00:00
Amara Emerson 3bb56fa478 Revert "[SampleFDO] Expose an interface to return the size of a section or the size"
This reverts commit f118852046.

Broke the macOS build/greendragon bots.

llvm-svn: 372464
2019-09-21 09:11:51 +00:00
James Molloy 8a74eca398 [MachinePipeliner] Improve the TargetInstrInfo API analyzeLoop/reduceLoopCount
Recommit: fix asan errors.

The way MachinePipeliner uses these target hooks is stateful - we reduce trip
count by one per call to reduceLoopCount. It's a little overfit for hardware
loops, where we don't have to worry about stitching a loop induction variable
across prologs and epilogs (the induction variable is implicit).

This patch introduces a new API:

  /// Analyze loop L, which must be a single-basic-block loop, and if the
  /// conditions can be understood enough produce a PipelinerLoopInfo object.
  virtual std::unique_ptr<PipelinerLoopInfo>
  analyzeLoopForPipelining(MachineBasicBlock *LoopBB) const;

The return value is expected to be an implementation of the abstract class:

  /// Object returned by analyzeLoopForPipelining. Allows software pipelining
  /// implementations to query attributes of the loop being pipelined.
  class PipelinerLoopInfo {
  public:
    virtual ~PipelinerLoopInfo();
    /// Return true if the given instruction should not be pipelined and should
    /// be ignored. An example could be a loop comparison, or induction variable
    /// update with no users being pipelined.
    virtual bool shouldIgnoreForPipelining(const MachineInstr *MI) const = 0;

    /// Create a condition to determine if the trip count of the loop is greater
    /// than TC.
    ///
    /// If the trip count is statically known to be greater than TC, return
    /// true. If the trip count is statically known to be not greater than TC,
    /// return false. Otherwise return nullopt and fill out Cond with the test
    /// condition.
    virtual Optional<bool>
    createTripCountGreaterCondition(int TC, MachineBasicBlock &MBB,
                                 SmallVectorImpl<MachineOperand> &Cond) = 0;

    /// Modify the loop such that the trip count is
    /// OriginalTC + TripCountAdjust.
    virtual void adjustTripCount(int TripCountAdjust) = 0;

    /// Called when the loop's preheader has been modified to NewPreheader.
    virtual void setPreheader(MachineBasicBlock *NewPreheader) = 0;

    /// Called when the loop is being removed.
    virtual void disposed() = 0;
  };

The Pipeliner (ModuloSchedule.cpp) can use this object to modify the loop while
allowing the target to hold its own state across all calls. This API, in
particular the disjunction of creating a trip count check condition and
adjusting the loop, improves the code quality in ModuloSchedule.cpp.

llvm-svn: 372463
2019-09-21 08:19:41 +00:00
Craig Topper 04682939eb [X86] Use sse_load_f32/f64 and timm in patterns for memory form of vgetmantss/sd.
Previously we only matched scalar_to_vector and scalar load, but
we should be able to narrow a vector load or match vzload.

Also need to match TargetConstant instead of Constant. The register
patterns were previously updated, but not the memory patterns.

llvm-svn: 372458
2019-09-21 06:44:29 +00:00
Craig Topper 4fa12ac92c [X86] Add test case to show failure to fold load with getmantss due to isel pattern looking for Constant instead of TargetConstant
The intrinsic has an immarg so it gets created with a TargetConstant
instead of a Constant after r372338. The isel pattern was only
updated for the register form, but not the memory form.

llvm-svn: 372457
2019-09-21 06:44:24 +00:00
Matt Arsenault eb6eb694e4 AMDGPU/GlobalISel: Allow selection of scalar min/max
I believe all of the uniform/divergent pattern predicates are
redundant and can be removed. The uniformity bit already influences
the register class, and nothing has broken when I've removed this and
others.

llvm-svn: 372450
2019-09-21 02:37:33 +00:00
Amara Emerson 7ac1039957 [GlobalISel] Defer setting HasCalls on MachineFrameInfo to selection time.
We currently always set the HasCalls on MFI during translation and legalization if
we're handling a call or legalizing to a libcall. However, if that call is later
optimized to a tail call then we don't need the flag. The flag being set to true
causes frame lowering to always save and restore FP/LR, which adds unnecessary code.

This change does the same thing as SelectionDAG and ports over some code that scans
instructions after selection, using TargetInstrInfo to determine if target opcodes
are known calls.

Code size geomean improvements on CTMark:
 -O0 : 0.1%
 -Os : 0.3%

Differential Revision: https://reviews.llvm.org/D67868

llvm-svn: 372443
2019-09-20 23:52:07 +00:00
Teresa Johnson 2f32e5d84d [Inliner] Remove incorrect early exit during switch cost computation
Summary:
The CallAnalyzer::visitSwitchInst has an early exit when the estimated
lower bound of the switch cost will put the overall cost of the inline
above the threshold. However, this code is not correctly estimating the
lower bound for switches that can be transformed into bit tests, leading
to unnecessary lost inlines, and also differing behavior with
optimization remarks enabled.

First, the early exit is controlled by whether ComputeFullInlineCost is
enabled or not, and that in turn is disabled by default but enabled when
enabling -pass-remarks=missed. This by itself wouldn't lead to a
problem, except that as described below, the lower bound can be above
the real lower bound, so we can sometimes get different inline decisions
with inline remarks enabled, which is problematic.

The early exit was added along with a new switch cost model in D31085.
The reason why this early exit was added is due to a concern one reviewer
raised about compile time for large switches:
https://reviews.llvm.org/D31085?id=94559#inline-276200

However, the code just below there calls
getEstimatedNumberOfCaseClusters, which in turn immediately calls
BasicTTIImpl getEstimatedNumberOfCaseClusters, which in the worst case
does a linear scan of the cases to get the high and low values. The
bit test handling in particular is guarded by whether the number of
cases fits into the max bit width. There is no suggestion that anyone
measured a compile time issue, it appears to be theoretical.

The problem is that the reviewer's comment about the lower bound
calculation is incorrect, specifically in the case of a switch that can
be lowered to a bit test. This wasn't followed up on in the comment
thread, but the author did add a FIXME to that effect above the early
exit when they subsequently revised the patch.
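
For reference, the kind of switch that can be lowered to a single bit
test (an illustrative example, not taken from the patch): the case
values span a range smaller than the register width, so the whole
switch becomes one mask-and-test.
```
define i1 @is_vowel(i32 %c) {
  switch i32 %c, label %no [
    i32 97, label %yes    ; 'a'
    i32 101, label %yes   ; 'e'
    i32 105, label %yes   ; 'i'
    i32 111, label %yes   ; 'o'
    i32 117, label %yes   ; 'u'
  ]
yes:
  ret i1 true
no:
  ret i1 false
}
```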

As a result, we were incorrectly early exiting and not inlining
functions with switch statements that would be lowered to bit tests in
cases where we were nearing the threshold. Combined with the fact that
this early exit was skipped with opt remarks enabled, this caused
different inlining decisions to be made when -pass-remarks=missed is
enabled to debug the missing inline.

Remove the early exit for the above reasons.

I also copied over an existing AArch64 inlining test to X86, and
adjusted the threshold so that the bit test inline only occurs with the
fix in this patch.

Reviewers: davidxl

Subscribers: eraman, kristof.beyls, haicheng, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67716

llvm-svn: 372440
2019-09-20 23:29:17 +00:00
Wei Mi f118852046 [SampleFDO] Expose an interface to return the size of a section or the size
of the profile for profile in ExtBinary format.

Sometimes we want to limit the size of the profile by stripping some functions
with low sample count or by stripping some function names with small text size
from profile symbol list. That requires the profile reader to have the
interfaces returning the size of a section or the size of the total profile.
The patch adds those interfaces.

At the same time, add some dump facility to show the size of each section.

llvm-svn: 372439
2019-09-20 23:24:50 +00:00
Ulrich Weigand 819c1651f7 [SystemZ] Support z15 processor name
The recently announced IBM z15 processor implements the architecture
already supported as "arch13" in LLVM.  This patch adds support for
"z15" as an alternate architecture name for arch13.

The patch also uses z15 in a number of places where we used arch13
as long as the official name was not yet announced.

llvm-svn: 372435
2019-09-20 23:04:45 +00:00
Sterling Augustine 4a58936716 Fix missed case of switching getConstant to getTargetConstant. Try 2.
Summary: This fixes a crasher introduced by r372338.

Reviewers: echristo, arsenm

Subscribers: wdng, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67850

llvm-svn: 372434
2019-09-20 22:26:55 +00:00
Jinsong Ji 216be996d6 [NFC][PowerPC] Consolidate testing of common linkage symbols
Add a new file to test the code gen for common linkage symbols.
Remove common linkage in some other testcases to avoid distraction.

llvm-svn: 372426
2019-09-20 20:31:37 +00:00
Mitch Phillips 72a3d8597d Revert "[MachinePipeliner] Improve the TargetInstrInfo API analyzeLoop/reduceLoopCount"
This commit broke the ASan buildbot. See comments in rL372376 for more
information.

This reverts commit 15e27b0b6d.

llvm-svn: 372425
2019-09-20 20:25:16 +00:00
Craig Topper c139d1e281 [Mips] Remove immarg test for intrinsics that no longer have an immarg after r372409.
llvm-svn: 372420
2019-09-20 18:52:49 +00:00
Roman Lebedev 081eebc58f [NFC][InstCombine] Fixup newly-added tests
llvm-svn: 372413
2019-09-20 17:43:46 +00:00
Evgeniy Stepanov c2bda3e422 [MTE] Handle MTE instructions in AArch64LoadStoreOptimizer.
Summary: Generate pre- and post-indexed forms of ST*G and STGP when possible.

Reviewers: ostannard, vitalybuka

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67741

llvm-svn: 372412
2019-09-20 17:36:27 +00:00
Sebastian Pop f6398fb72c [aarch64] add def-pats for dot product
This patch adds the patterns to select the dot product instructions.
Tested on aarch64-linux with make check-all.

Differential Revision: https://reviews.llvm.org/D67645

llvm-svn: 372408
2019-09-20 16:33:33 +00:00
Stanislav Mekhanoshin af77ca7e6e Remove assert from MachineLoop::getLoopPredecessor()
According to the documentation, the method returns the predecessor
if the given loop's header has exactly one unique predecessor
outside the loop, and otherwise returns null.

In reality it asserts if there is no predecessor outside of
the loop.

The testcase has a loop where predecessors outside of the
loop were not identified, because analyzeBranch() was unable to
process the mask branch and returned true. It is also not
correct to assert for truly dead loops.

Differential Revision: https://reviews.llvm.org/D67634

llvm-svn: 372405
2019-09-20 15:26:10 +00:00
Krzysztof Parzyszek 2b5d7e93dd [MVT] Add v256i1 to MachineValueType
This type can show up when lowering some HVX vector code on Hexagon.

llvm-svn: 372403
2019-09-20 15:19:20 +00:00
Roman Lebedev d21087af95 [InstCombine] Tests for (a+b)<=a && (a+b)!=0 fold (PR43259)
https://rise4fun.com/Alive/knp
https://rise4fun.com/Alive/ALap

llvm-svn: 372402
2019-09-20 15:06:47 +00:00
Oliver Cruickshank c84722ff27 [ARM] Fix CTTZ not generating correct MVE instructions
The CTTZ intrinsic should have been set to Custom, not Expand.

llvm-svn: 372401
2019-09-20 15:03:44 +00:00
David Stenberg b71d8d465a Add a missing space in a MIR parser error message
llvm-svn: 372398
2019-09-20 14:41:41 +00:00
Sanjay Patel 4896f7243d [SLPVectorizer] add tests for bogus reductions; NFC
https://bugs.llvm.org/show_bug.cgi?id=42708
https://bugs.llvm.org/show_bug.cgi?id=43146

llvm-svn: 372393
2019-09-20 14:17:00 +00:00
David Zarzycki 4fff87d2ee [Testing] Python 3 requires `print` to use parens
llvm-svn: 372392
2019-09-20 13:52:47 +00:00
David Tellenbach 2a47c77e72 [FastISel] Fix insertion of unconditional branches during FastISel
The insertion of an unconditional branch during FastISel can differ depending on
building with or without debug information. This happens because FastISel::fastEmitBranch
emits an unconditional branch depending on the size of the current basic block
without distinguishing between debug and non-debug instructions.

This patch fixes this issue by ignoring debug instructions when getting the size
of the basic block.

Reviewers: aprantl

Reviewed By: aprantl

Subscribers: ormris, aprantl, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67703

llvm-svn: 372389
2019-09-20 13:22:59 +00:00
Nico Weber 03475adcf7 Revert r372366 "Use getTargetConstant for BLENDI, and add a test to catch it."
This reverts commit 52621307bc.

Tests have been failing all night with

    [0/2] ACTION //llvm/test:check-llvm(//llvm/utils/gn/build/toolchain:unix)
    -- Testing: 33647 tests, 64 threads --
    Testing: 0 .. 10..
    UNRESOLVED: LLVM :: CodeGen/AMDGPU/GlobalISel/isel-blendi-gettargetconstant.ll (6943 of 33647)
    ******************** TEST 'LLVM :: CodeGen/AMDGPU/GlobalISel/isel-blendi-gettargetconstant.ll' FAILED ********************
    Test has no run line!
    ********************

Since there were other concerns on https://reviews.llvm.org/D67785,
I'm just reverting for now.

llvm-svn: 372383
2019-09-20 12:05:29 +00:00
George Rimar 4d69967f44 [yaml2obj/obj2yaml] - Do not trigger llvm_unreachable when dumping/parsing relocations and e_machine is unsupported.
Currently, when e_machine is set to something that is not supported by the YAML lib,
the tools fail with llvm_unreachable.

In this patch I allow them to handle relocations in this case.
It can be used to dump and create objects for broken or unsupported targets.

Differential revision: https://reviews.llvm.org/D67657

llvm-svn: 372377
2019-09-20 09:15:36 +00:00
James Molloy 15e27b0b6d [MachinePipeliner] Improve the TargetInstrInfo API analyzeLoop/reduceLoopCount
The way MachinePipeliner uses these target hooks is stateful - we reduce trip
count by one per call to reduceLoopCount. It's a little overfit for hardware
loops, where we don't have to worry about stitching a loop induction variable
across prologs and epilogs (the induction variable is implicit).

This patch introduces a new API:

  /// Analyze loop L, which must be a single-basic-block loop, and if the
  /// conditions can be understood enough produce a PipelinerLoopInfo object.
  virtual std::unique_ptr<PipelinerLoopInfo>
  analyzeLoopForPipelining(MachineBasicBlock *LoopBB) const;

The return value is expected to be an implementation of the abstract class:

  /// Object returned by analyzeLoopForPipelining. Allows software pipelining
  /// implementations to query attributes of the loop being pipelined.
  class PipelinerLoopInfo {
  public:
    virtual ~PipelinerLoopInfo();
    /// Return true if the given instruction should not be pipelined and should
    /// be ignored. An example could be a loop comparison, or induction variable
    /// update with no users being pipelined.
    virtual bool shouldIgnoreForPipelining(const MachineInstr *MI) const = 0;

    /// Create a condition to determine if the trip count of the loop is greater
    /// than TC.
    ///
    /// If the trip count is statically known to be greater than TC, return
    /// true. If the trip count is statically known to be not greater than TC,
    /// return false. Otherwise return nullopt and fill out Cond with the test
    /// condition.
    virtual Optional<bool>
    createTripCountGreaterCondition(int TC, MachineBasicBlock &MBB,
                                 SmallVectorImpl<MachineOperand> &Cond) = 0;

    /// Modify the loop such that the trip count is
    /// OriginalTC + TripCountAdjust.
    virtual void adjustTripCount(int TripCountAdjust) = 0;

    /// Called when the loop's preheader has been modified to NewPreheader.
    virtual void setPreheader(MachineBasicBlock *NewPreheader) = 0;

    /// Called when the loop is being removed.
    virtual void disposed() = 0;
  };

The Pipeliner (ModuloSchedule.cpp) can use this object to modify the loop while
allowing the target to hold its own state across all calls. This API, in
particular the disjunction of creating a trip count check condition and
adjusting the loop, improves the code quality in ModuloSchedule.cpp.

llvm-svn: 372376
2019-09-20 08:57:46 +00:00
Owen Reynolds 25040f8dec Reapply [llvm-ar] Include a line number when failing to parse an MRI script
Reapply r372309

Errors that occur when reading an MRI script now include a corresponding
line number.

Differential Revision: https://reviews.llvm.org/D67449

llvm-svn: 372374
2019-09-20 08:10:14 +00:00
Craig Topper a34f13f2ba [X86] Use timm in MMX pinsrw/pextrw isel patterns. Add missing test cases.
This fixes an isel failure after r372338.

llvm-svn: 372371
2019-09-20 06:00:35 +00:00
Fangrui Song c768ad94b7 [llvm-ar] Removes repetition in the error message
As per bug 40244, fixed an error where the error message was repeated.

Differential Revision: https://reviews.llvm.org/D67038
Patch by Yu Jian (wyjw)

llvm-svn: 372370
2019-09-20 04:40:44 +00:00
Sterling Augustine 52621307bc Use getTargetConstant for BLENDI, and add a test to catch it.
Summary: This fixes a crasher introduced by r372338.

Reviewers: echristo, arsenm

Subscribers: jvesely, wdng, nhaehnle, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67785

Tighten up the test case.

llvm-svn: 372366
2019-09-20 02:29:16 +00:00
Matt Arsenault dd74f4839b MachineScheduler: Fix missing dependency with multiple subreg defs
If an instruction had multiple subregister defs, and one of them was
undef, this would improperly conclude all other lanes are
killed. There could still be other defs of those read-undef lanes in
other operands. This would improperly remove register uses from
CurrentVRegUses, so the visitation of later operands would not find
the necessary register dependency. This would also mean this would
fail or not depending on how different subregister def operands were
ordered.

On an undef subregister def, scan the instruction for other
subregister defs and avoid killing those.

Possibly this should instead defer removing anything from
CurrentVRegUses until the entire instruction has been processed.

llvm-svn: 372362
2019-09-20 00:09:15 +00:00
Akira Hatanaka 75fbb171c3 [ObjC][ARC] Skip debug instructions when computing the insert point of
objc_release calls

This fixes a bug where the presence of debug instructions would cause
the ARC optimizer to change the order of retain and release calls.

rdar://problem/55319419

llvm-svn: 372352
2019-09-19 20:58:51 +00:00
Jakub Kuderski e6b2164723 Don't use invalidated iterators in FlattenCFGPass
Summary:
FlattenCFG may erase unnecessary blocks, which also invalidates iterators to those erased blocks.
Before this patch, `iterativelyFlattenCFG` could try to increment a BB iterator after that BB has been removed and crash.

This patch makes FlattenCFGPass use `WeakVH` to skip over erased blocks.

Reviewers: dblaikie, tstellar, davide, sanjoy, asbirlea, grosser

Reviewed By: asbirlea

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67672

llvm-svn: 372347
2019-09-19 19:39:42 +00:00
Shoaib Meenai d89f2d872d [Analysis] Allow -scalar-evolution-max-iterations more than once
At present, `-scalar-evolution-max-iterations` is a `cl::Optional`
option, which means it may be passed at most once.
Our build system makes it pretty tricky to guarantee this. We often
accidentally pass the flag more than once (but always with the same
value) which results in an error, after which compilation fails:

```
clang (LLVM option parsing): for the -scalar-evolution-max-iterations option: may only occur zero or one times!
```

It seems reasonable to allow -scalar-evolution-max-iterations to be
passed more than once. Quoting the [[ http://llvm.org/docs/CommandLine.html#controlling-the-number-of-occurrences-required-and-allowed | documentation ]]:

> The cl::ZeroOrMore modifier ... indicates that your program will allow the option to be specified zero or more times.
> ...
> If an option is specified multiple times for an option of the cl::opt class, only the last value will be retained.

Original patch by: Enrico Bern Hardy Tanuwidjaja <etanuwid@fb.com>

Differential Revision: https://reviews.llvm.org/D67512

llvm-svn: 372346
2019-09-19 18:21:32 +00:00
Jinsong Ji ca4c5deae5 [NFC][PowerPC] Fast-isel VSX support test
We have fixed most of the VSX limitations in fast-isel,
so we can remove -mattr=-vsx for most testcases now.

llvm-svn: 372345
2019-09-19 18:18:18 +00:00
Roman Lebedev 7a67ed5795 [InstCombine] Simplify @llvm.usub.with.overflow+non-zero check (PR43251)
Summary:
This is again motivated by D67122 sanitizer check enhancement.
That patch seemingly worsens `-fsanitize=pointer-overflow`
overhead from 25% to 50%, which strongly implies missing folds.

In this particular case, given
```
char* test(char& base, unsigned long offset) {
  return &base - offset;
}
```
it will end up producing something like
https://godbolt.org/z/luGEju
which after optimizations reduces down to roughly
```
declare void @use64(i64)
define i1 @test(i8* dereferenceable(1) %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = sub i64 %base_int, %offset
  call void @use64(i64 %adjusted)
  %not_null = icmp ne i64 %adjusted, 0
  %no_underflow = icmp ule i64 %adjusted, %base_int
  %no_underflow_and_not_null = and i1 %not_null, %no_underflow
  ret i1 %no_underflow_and_not_null
}
```
Without D67122 there was no `%not_null`,
and in this particular case we can "get rid of it" by merging the two checks:
here we are checking `Base u>= Offset && (Base u- Offset) != 0`, but that is simply `Base u> Offset`.
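
A sketch of the expected result after the merge (my illustration, not
the patch's actual test output):
```
declare void @use64(i64)
define i1 @test(i8* dereferenceable(1) %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = sub i64 %base_int, %offset
  call void @use64(i64 %adjusted)
  %r = icmp ugt i64 %base_int, %offset  ; Base u> Offset
  ret i1 %r
}
```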

Alive proofs:
https://rise4fun.com/Alive/QOs

The `@llvm.usub.with.overflow` pattern itself is not handled here
because this is the main pattern, the one we currently consider canonical.

https://bugs.llvm.org/show_bug.cgi?id=43251

Reviewers: spatel, nikic, xbolva00, majnemer

Reviewed By: xbolva00, majnemer

Subscribers: vsk, majnemer, xbolva00, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67356

llvm-svn: 372341
2019-09-19 17:25:19 +00:00
Alexander Timofeev e2f9bc3b11 [AMDGPU] Unnecessary -amdgpu-scalarize-global-loads=false flag removed from min/max lit tests.
Reviewers: arsenm

Differential Revision: https://reviews.llvm.org/D67712

llvm-svn: 372340
2019-09-19 16:44:38 +00:00
Sanjay Patel 13e71ce693 [Float2Int] avoid crashing on unreachable code (PR38502)
In the example from:
https://bugs.llvm.org/show_bug.cgi?id=38502
...we hit infinite looping/crashing because we have non-standard IR:
an instruction operand is used before it is defined.
This and other unusual constructs are allowed in unreachable blocks,
so avoid the problem by using DominatorTree to step around landmines.
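
A minimal sketch of such IR (my own example, not the reproducer from the
bug): the block is unreachable, so the verifier tolerates an operand
being used before its definition.
```
define void @f() {
entry:
  ret void

dead:                          ; no predecessors, unreachable
  %x = fadd float %y, 1.0      ; %y is used before it is defined
  %y = uitofp i32 1 to float
  br label %dead
}
```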

Differential Revision: https://reviews.llvm.org/D67766

llvm-svn: 372339
2019-09-19 16:31:17 +00:00
Matt Arsenault 3ecab8e455 Reapply r372285 "GlobalISel: Don't materialize immarg arguments to intrinsics"
This reverts r372314, reapplying r372285 and the commits which depend
on it (r372286-r372293, and r372296-r372297)

This was missing one switch to getTargetConstant in an untested case.

llvm-svn: 372338
2019-09-19 16:26:14 +00:00
Andrea Di Biagio e0900f285b [MCA] Improved cost computation for loop carried dependencies in the bottleneck analysis.
This patch introduces a cut-off threshold for dependency edge frequencies with
the goal of simplifying the critical sequence computation.  This patch also
removes the cost normalization for loop carried dependencies.  We didn't really
need to artificially amplify the cost of loop-carried dependencies since it is
already computed as the integral over time of the delay (in cycles).

In the absence of backend stalls there is no need for computing a critical
sequence. With this patch we early exit from the critical sequence computation
if no bottleneck was reported during the simulation.

llvm-svn: 372337
2019-09-19 16:05:11 +00:00
Matt Arsenault 7decdbf2db X86: Add missing test for vshli SimplifyDemandedBitsForTargetNode
This would have caught this regression which triggered the revert of
r372285: https://bugs.chromium.org/p/chromium/issues/detail?id=1005750

llvm-svn: 372335
2019-09-19 15:44:00 +00:00
Simon Pilgrim af6043557d [DAG][X86] Convert isNegatibleForFree/GetNegatedExpression to a target hook (PR42863)
This patch converts the DAGCombine isNegatibleForFree/GetNegatedExpression into overridable TLI hooks and includes a demonstration X86 implementation.

The intention is to let us extend existing FNEG combines to work more generally with negatible float ops, allowing it work with target specific combines and opcodes (e.g. X86's FMA variants).

Unlike SimplifyDemandedBits, we can't just handle target nodes through a target callback; we need to do this as an override to allow targets to handle generic opcodes as well. This does mean that the target implementations have to duplicate some checks (recursion depth etc.).

I've only begun to replace X86's FNEG handling here, handling FMADDSUB/FMSUBADD negation and making some low-impact codegen changes (some FMA negation propagation). We can build on this in future patches.

Differential Revision: https://reviews.llvm.org/D67557

llvm-svn: 372333
2019-09-19 15:02:47 +00:00
Sanjay Patel 7592e3a81f [Float2Int] auto-generate complete test checks; NFC
llvm-svn: 372324
2019-09-19 13:58:15 +00:00
James Molloy 88a5fbfcea [TableGen] Support encoding per-HwMode
Much like ValueTypeByHwMode/RegInfoByHwMode, this patch allows targets
to modify an instruction's encoding based on HwMode. When the
EncodingInfos field is non-empty the Inst and Size fields of the Instruction
are ignored and taken from EncodingInfos instead.

As part of this promote getHwMode() from TargetSubtargetInfo to MCSubtargetInfo.

This is NFC for all existing targets - new code is generated only if targets
use EncodingByHwMode.

llvm-svn: 372320
2019-09-19 13:39:54 +00:00
Hans Wennborg 13bdae8541 Revert r372285 "GlobalISel: Don't materialize immarg arguments to intrinsics"
This broke the Chromium build, causing it to fail with e.g.

  fatal error: error in backend: Cannot select: t362: v4i32 = X86ISD::VSHLI t392, Constant:i8<15>

See llvm-commits thread of r372285 for details.

This also reverts r372286, r372287, r372288, r372289, r372290, r372291,
r372292, r372293, r372296, and r372297, which seemed to depend on the
main commit.

> Encode them directly as an imm argument to G_INTRINSIC*.
>
> Since now intrinsics can now define what parameters are required to be
> immediates, avoid using registers for them. Intrinsics could
> potentially want a constant that isn't a legal register type. Also,
> since G_CONSTANT is subject to CSE and legalization, transforms could
> potentially obscure the value (and create extra work for the
> selector). The register bank of a G_CONSTANT is also meaningful, so
> this could throw off future folding and legalization logic for AMDGPU.
>
> This will be much more convenient to work with than needing to call
> getConstantVRegVal and checking if it may have failed for every
> constant intrinsic parameter. AMDGPU has quite a lot of intrinsics with
> immarg operands, many of which need inspection during lowering. Having
> to find the value in a register is going to add a lot of boilerplate
> and waste compile time.
>
> SelectionDAG has always provided TargetConstant for constants which
> should not be legalized or materialized in a register. The distinction
> between Constant and TargetConstant was somewhat fuzzy, and there was
> no automatic way to force usage of TargetConstant for certain
> intrinsic parameters. They were both ultimately ConstantSDNode, and it
> was inconsistently used. It was quite easy to mis-select an
> instruction requiring an immediate. For SelectionDAG, start emitting
> TargetConstant for these arguments, and using timm to match them.
>
> Most of the work here is to cleanup target handling of constants. Some
> targets process intrinsics through intermediate custom nodes, which
> need to preserve TargetConstant usage to match the intrinsic
> expectation. Pattern inputs now need to distinguish whether a constant
> is merely compatible with an operand or whether it is mandatory.
>
> The GlobalISelEmitter needs to treat timm as a special case of a leaf
> node, similar to MachineBasicBlock operands. This should also enable
> handling of patterns for some G_* instructions with immediates, like
> G_FENCE or G_EXTRACT.
>
> This does include a workaround for a crash in GlobalISelEmitter when
> ARM tries to uses "imm" in an output with a "timm" pattern source.

llvm-svn: 372314
2019-09-19 12:33:07 +00:00
David Green 0cfb78e52a [ARM] MVE i1 splat
We needn't BFI each lane individually into a predicate register when each lane
is the same. A simple sign extend and a vmsr will do.

Differential Revision: https://reviews.llvm.org/D67653

llvm-svn: 372313
2019-09-19 12:17:41 +00:00
Owen Reynolds aa03c14827 Revert [llvm-ar] Include a line number when failing to parse an MRI script
Revert r372309 due to buildbot failures

Differential Revision: https://reviews.llvm.org/D67449

llvm-svn: 372311
2019-09-19 11:22:59 +00:00
Owen Reynolds 04398c729b [llvm-ar] Include a line number when failing to parse an MRI script
Errors that occur when reading an MRI script now include a corresponding
line number.

Differential Revision: https://reviews.llvm.org/D67449

llvm-svn: 372309
2019-09-19 10:51:43 +00:00
Serguei Katkov a44768858c [Unroll] Add an option to control complete unrolling
Add the ability to specify the max full unroll count for the LoopUnrollPass
in pass options.

Reviewers: fhahn, fedor.sergeev
Reviewed By: fedor.sergeev
Subscribers: hiraditya, zzheng, dmgreen, llvm-commits
Differential Revision: https://reviews.llvm.org/D67701

llvm-svn: 372305
2019-09-19 06:57:29 +00:00
Craig Topper c2d25ed1b3 [X86] Prevent crash in LowerBUILD_VECTORvXi1 for v64i1 vectors on 32-bit targets when the vector is a mix of constants and non-constants.
We need to materialize the constants as two 32-bit values that
are casted to v32i1 and then concatenated.

llvm-svn: 372304
2019-09-19 06:50:39 +00:00
Sam Parker 56aa691c41 [ARM] Fix for buildbots
I had missed that massive.mir also needed updating.

llvm-svn: 372303
2019-09-19 06:50:19 +00:00
Matt Arsenault bffbeecb44 AMDGPU/GlobalISel: RegBankSelect llvm.amdgcn.ds.swizzle
llvm-svn: 372297
2019-09-19 04:11:17 +00:00
Matt Arsenault 494243597b AMDGPU/GlobalISel: Select llvm.amdgcn.raw.buffer.store.format
This needs special handling due to some subtargets that have a
nonstandard register layout for f16 vectors.

Also reject some illegal types on other targets.

llvm-svn: 372293
2019-09-19 02:35:08 +00:00
Matt Arsenault 67f1f6ff8c AMDGPU/GlobalISel: Select llvm.amdgcn.raw.buffer.store
llvm-svn: 372292
2019-09-19 02:30:27 +00:00
Matt Arsenault 838ff36553 AMDGPU/GlobalISel: RegBankSelect struct buffer load/store
llvm-svn: 372291
2019-09-19 02:26:53 +00:00
Matt Arsenault a62ef58346 AMDGPU/GlobalISel: RegBankSelect llvm.amdgcn.raw.buffer.{load|store}
llvm-svn: 372290
2019-09-19 02:25:09 +00:00
Matt Arsenault a30d022db6 AMDGPU/GlobalISel: Attempt to RegBankSelect image intrinsics
Images should always have 2 consecutive, mandatory SGPR arguments.

llvm-svn: 372289
2019-09-19 02:23:06 +00:00
Matt Arsenault 01213407c4 Fix typo
llvm-svn: 372288
2019-09-19 02:15:29 +00:00
Matt Arsenault c189f023ac MachineScheduler: Fix assert from not checking subregs
The assert would fail if there was a dead def of a subregister and
a previous use of a different subregister.

llvm-svn: 372287
2019-09-19 02:14:12 +00:00
Matt Arsenault 22e2c09515 AMDGPU/GlobalISel: Fix RegBankSelect G_SMULH/G_UMULH pre-gfx9
The scalar versions were only introduced in gfx9.

llvm-svn: 372286
2019-09-19 01:42:34 +00:00
Matt Arsenault d8399d12cd GlobalISel: Don't materialize immarg arguments to intrinsics
Encode them directly as an imm argument to G_INTRINSIC*.

Since now intrinsics can now define what parameters are required to be
immediates, avoid using registers for them. Intrinsics could
potentially want a constant that isn't a legal register type. Also,
since G_CONSTANT is subject to CSE and legalization, transforms could
potentially obscure the value (and create extra work for the
selector). The register bank of a G_CONSTANT is also meaningful, so
this could throw off future folding and legalization logic for AMDGPU.

This will be much more convenient to work with than needing to call
getConstantVRegVal and checking if it may have failed for every
constant intrinsic parameter. AMDGPU has quite a lot of intrinsics with
immarg operands, many of which need inspection during lowering. Having
to find the value in a register is going to add a lot of boilerplate
and waste compile time.

SelectionDAG has always provided TargetConstant for constants which
should not be legalized or materialized in a register. The distinction
between Constant and TargetConstant was somewhat fuzzy, and there was
no automatic way to force usage of TargetConstant for certain
intrinsic parameters. They were both ultimately ConstantSDNode, and it
was inconsistently used. It was quite easy to mis-select an
instruction requiring an immediate. For SelectionDAG, start emitting
TargetConstant for these arguments, and using timm to match them.

Most of the work here is to cleanup target handling of constants. Some
targets process intrinsics through intermediate custom nodes, which
need to preserve TargetConstant usage to match the intrinsic
expectation. Pattern inputs now need to distinguish whether a constant
is merely compatible with an operand or whether it is mandatory.

The GlobalISelEmitter needs to treat timm as a special case of a leaf
node, similar to MachineBasicBlock operands. This should also enable
handling of patterns for some G_* instructions with immediates, like
G_FENCE or G_EXTRACT.

This does include a workaround for a crash in GlobalISelEmitter when
ARM tries to uses "imm" in an output with a "timm" pattern source.

llvm-svn: 372285
2019-09-19 01:33:14 +00:00
David Blaikie 798fe477e3 llvm-reduce: Add pass to reduce instructions
Patch by Diego Treviño!

Differential Revision: https://reviews.llvm.org/D66263

llvm-svn: 372282
2019-09-19 00:59:27 +00:00
Thomas Lively dbcd7f5602 [WebAssembly] Restore defaults for stores per memop
Summary:
Large slowdowns were observed in Rust due to many small, constant-sized
copies in conjunction with poorly-optimized memory.copy
implementations. Since memory.copy cannot be expected to be inlined
efficiently by engines at this time, stop using it for the smallest
copies. We continue to lower all memcpy intrinsics to memory.copy,
though.

Reviewers: aheejin, alexcrichton

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, JDevlieghere, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67639

llvm-svn: 372275
2019-09-18 23:18:16 +00:00
Jessica Paquette ce65ebc39e [AArch64][GlobalISel] Support lowering musttail calls
Since we now lower most tail calls, it makes sense to support musttail.

Instead of always falling back to SelectionDAG, only fall back when a musttail
call was not able to be emitted as a tail call. Once we can handle most
incoming and outgoing arguments, we can change this to a `report_fatal_error`
like in ISelLowering.

Remove the assert that we don't have varargs and a musttail, and replace it
with a return false. Implementing this requires that we implement
`saveVarArgRegisters` from AArch64ISelLowering, which is an entirely different
patch.

Add GlobalISel lines to vararg-tallcall.ll to make sure that we produce correct
code. Right now we only fall back, but eventually this will be relevant.

Differential Revision: https://reviews.llvm.org/D67681

llvm-svn: 372273
2019-09-18 22:42:25 +00:00
Adrian Prantl 0779dffbd4 Remove the obsolete BlockByRefStruct flag from LLVM IR
DIFlagBlockByRefStruct is an unused DIFlag that originally was used by
clang to express (Objective-)C block captures in debug info. For the
last year Clang has been emitting complex DIExpressions to describe
block captures instead, which makes all the code supporting this flag
redundant.

This patch removes the flag and all supporting "dead" code, so we can
reuse the bit for something else in the future.

Since this only affects debug info generated by Clang with the block
extension this mostly affects Apple platforms and I don't have any
bitcode compatibility concerns for removing this. The Verifier will
reject debug info that uses the bit and thus degrade gracefully when
LTO'ing older bitcode with a newer compiler.

rdar://problem/44304813

Differential Revision: https://reviews.llvm.org/D67453

llvm-svn: 372272
2019-09-18 22:38:56 +00:00
Amy Huang 68eae49859 Add AutoUpgrade function to add new address space datalayout string to existing datalayouts.
Summary:
Add function to AutoUpgrade to change the datalayout of old X86 datalayout strings.
This adds "-p270:32:32-p271:32:32-p272:64:64" to X86 datalayouts that are otherwise valid
and don't already contain it.

This also removes the compatibility changes in https://reviews.llvm.org/D66843.
Datalayout change in https://reviews.llvm.org/D64931.

Reviewers: rnk, echristo

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67631

llvm-svn: 372267
2019-09-18 22:15:58 +00:00
David Blaikie 070598bb52 llvm-reduce: Add pass to reduce basic blocks
Patch by Diego Treviño!

Differential Revision: https://reviews.llvm.org/D66320

llvm-svn: 372264
2019-09-18 21:45:05 +00:00
Roman Lebedev c00f318224 [DAGCombine][ARM][X86] (sub Carry, X) -> (addcarry (sub 0, X), 0, Carry) fold
Summary:
`DAGCombiner::visitADDLikeCommutative()` already has a sibling fold:
`(add X, Carry) -> (addcarry X, 0, Carry)`

This fold, as suggested by @efriedma, helps recover from //some//
of the regressions of D62266

Reviewers: efriedma, deadalnix

Subscribers: javed.absar, kristof.beyls, llvm-commits, efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D62392

llvm-svn: 372259
2019-09-18 20:48:27 +00:00
Roman Lebedev a042aa1d82 [CodeGen][X86][NFC] Tests for (sub Carry, X) -> (addcarry (sub 0, X), 0, Carry) fold (D62392)
llvm-svn: 372258
2019-09-18 20:48:05 +00:00
Roman Lebedev b646dd92c2 [InstCombine] foldUnsignedUnderflowCheck(): handle last few cases (PR43251)
Summary:
I don't have a direct motivational case for this,
but it would be good to have this for completeness/symmetry.

This pattern is basically the motivational pattern from
https://bugs.llvm.org/show_bug.cgi?id=43251
but with different predicate that requires that the offset is non-zero.

The completeness bit comes from the fact that a similar pattern (offset != zero)
will be needed for https://bugs.llvm.org/show_bug.cgi?id=43259,
so it seems good not to overlook very similar patterns.

Proofs: https://rise4fun.com/Alive/21b

Also, there is something odd with `isKnownNonZero()`: if the non-zero
knowledge was specified as an assumption, it didn't pick it up (PR43267).

With this, i see no other missing folds for
https://bugs.llvm.org/show_bug.cgi?id=43251

Reviewers: spatel, nikic, xbolva00

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67412

llvm-svn: 372257
2019-09-18 20:10:07 +00:00
Lang Hames 366ab0d086 [AArch64] Don't implicitly enable global isel on Darwin if code-model==large.
Summary:
AArch64 GlobalISel doesn't support MachO's large code model, so this patch
adds a check for that combination before implicitly enabling it.

Reviewers: paquette

Subscribers: kristof.beyls, ributzka, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67724

llvm-svn: 372256
2019-09-18 19:56:55 +00:00
Roman Lebedev dd0170ab24 [SimplifyCFG] mergeConditionalStoreToAddress(): consider cost, not instruction count
Summary:
As can be seen in the changed test, we were speculating `div` even though
it is really costly. This does not seem correct.

Also, the old code would run for every single instruction in the BB,
instead of eagerly bailing out as soon as there are too many instructions.

This function still has a problem that `PHINodeFoldingThreshold` is
per-basic-block, while it should apply across all the basic blocks.
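
A minimal sketch of the cost-based check with an eager bail-out
(illustrative names only, not the actual SimplifyCFG code):

```
// Accumulate a per-instruction cost and stop as soon as the budget is
// exceeded, instead of first counting every instruction in the block.
template <typename InstRange, typename CostFn>
bool isCheapEnoughToSpeculate(const InstRange &Insts, CostFn CostOf,
                              unsigned Budget) {
  unsigned Cost = 0;
  for (const auto &I : Insts) {
    Cost += CostOf(I); // a TTI-style cost; `div` would be expensive here
    if (Cost > Budget)
      return false;    // eager bail-out
  }
  return true;
}
```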

Reviewers: efriedma, craig.topper, dmgreen, jmolloy

Reviewed By: jmolloy

Subscribers: xbolva00, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67315

llvm-svn: 372255
2019-09-18 19:46:57 +00:00
Roman Lebedev ec6b91b665 [MIPS] For vectors, select `add %x, C` as `sub %x, -C` if it results in inline immediate
Summary:
As discussed in https://reviews.llvm.org/D62341#1515637,
for MIPS `add %x, -1` isn't optimal. Unlike X86, there
are no fast paths to materialize such `-1`/`1` vector constants,
and `sub %x, 1` results in better codegen,
so we undo the canonicalization.

Reviewers: atanasyan, Petar.Avramovic, RKSimon

Reviewed By: atanasyan

Subscribers: sdardis, arichardson, hiraditya, jrtc27, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66805

llvm-svn: 372254
2019-09-18 19:34:41 +00:00
Roman Lebedev 260b694904 [CodeGen][MIPS][NFC] Some standalone tests for D66805 "For vectors, select `add %x, C` as `sub %x, -C` if it results in inline immediate"
llvm-svn: 372253
2019-09-18 19:34:24 +00:00
Simon Atanasyan 164dbd386d [mips] Expand 'lw/sw' instructions for 32-bit GOT
In case of using a 32-bit GOT, access to the table requires two instructions
with attached %got_hi and %got_lo relocations. This patch implements
correct expansion of 'lw/sw' instructions in that case.

Differential Revision: https://reviews.llvm.org/D67705

llvm-svn: 372251
2019-09-18 19:19:47 +00:00
Roman Lebedev 8b719a3b8a [NFC][InstCombine] More tests for PR42563 "Dropping pointless masking before left shift"
For patterns c/d/e we can likewise deal with the pattern even if we can't
just drop the mask; we can simply apply it afterwards:
   https://rise4fun.com/Alive/gslRa

llvm-svn: 372244
2019-09-18 18:38:32 +00:00
Daniel Sanders 1723364a68 Fix compile-time regression caused by rL371928
Summary:
Also fixup rL371928 for cases that occur on our out-of-tree backend

There were still quite a few intermediate APInts and this caused the
compile time of MCCodeEmitter for our target to jump from 16s up to
~5m40s. This patch brings it back down to ~17s by eliminating pretty
much all of them using two new APInt functions (extractBitsAsZExtValue()
and an insertBits() overload taking a uint64_t). The exact condition for
eliminating them is that the field extracted/inserted must be <=64-bit,
which is almost always true.

Note: The two new APInt APIs assume that APInt::WordSize is at least
64-bit because that means they touch at most 2 APInt words. They
statically assert that's true. It seems very unlikely that someone
is patching it to be smaller, so this should be fine.
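
The semantics can be sketched standalone; this emulation over a plain word
array (parameter order and names are illustrative, not the APInt
signatures) shows why a <=64-bit field touches at most two 64-bit words:

```
#include <cassert>
#include <cstdint>

// Extract NumBits (<= 64) starting at bit Pos from a little-endian array
// of 64-bit words, zero-extended into a uint64_t.
uint64_t extractBitsAsZExt(const uint64_t *Words, unsigned Pos,
                           unsigned NumBits) {
  assert(NumBits >= 1 && NumBits <= 64);
  unsigned Word = Pos / 64, Shift = Pos % 64;
  uint64_t Val = Words[Word] >> Shift;
  if (Shift && Shift + NumBits > 64)      // field straddles two words
    Val |= Words[Word + 1] << (64 - Shift);
  return NumBits == 64 ? Val : Val & ((uint64_t(1) << NumBits) - 1);
}

int main() {
  uint64_t Words[2] = {0xff00000000000000ULL, 0x00000000000000ffULL};
  // 16-bit field at bit 56: 8 bits from each word -> 0xffff.
  assert(extractBitsAsZExt(Words, 56, 16) == 0xffff);
}
```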

Reviewers: jmolloy

Reviewed By: jmolloy

Subscribers: hiraditya, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67686

llvm-svn: 372243
2019-09-18 18:14:42 +00:00
Bardia Mahjour db800c267d Data Dependence Graph Basics
Summary:
This is the first patch in a series of patches that will implement a data dependence graph in LLVM. Many of the ideas used in this implementation are based on the following paper:
D. J. Kuck, R. H. Kuhn, D. A. Padua, B. Leasure, and M. Wolfe (1981). DEPENDENCE GRAPHS AND COMPILER OPTIMIZATIONS.
This patch contains support for basic DDGs containing only atomic nodes (one node for each instruction). The edges are twofold: def-use edges and memory-dependence edges.
The implementation takes a list of basic blocks and only considers dependencies among instructions in those basic blocks. Any dependencies coming into or going out of instructions that do not belong to those basic blocks are ignored.

The algorithm for building the graph involves the following steps, in order (a simplified sketch follows the list):

  1. For each instruction in the range of basic blocks to consider, create an atomic node in the resulting graph.
  2. For each node in the graph establish def-use edges to/from other nodes in the graph.
  3. For each pair of nodes containing memory instruction(s) create memory edges between them. This part of the algorithm goes through the instructions in lexicographical order and creates edges in reverse order if the sink of the dependence occurs before the source of it.
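
A much-simplified sketch of those three steps (placeholder types; the real
DDG uses LLVM's Instruction and dependence analysis, not these stand-ins):

```
#include <map>
#include <set>
#include <vector>

struct Inst {
  bool TouchesMemory;
  std::vector<Inst *> Uses; // instructions using this one's result
};

struct DDG {
  std::map<Inst *, std::set<Inst *>> Succs; // node -> successors
  void build(const std::vector<Inst *> &Region,
             bool (*MayDepend)(Inst *, Inst *)) {
    // Step 1: one atomic node per instruction in the region.
    for (Inst *I : Region)
      Succs[I];
    // Step 2: def-use edges, restricted to instructions in the region.
    for (Inst *I : Region)
      for (Inst *U : I->Uses)
        if (Succs.count(U))
          Succs[I].insert(U);
    // Step 3: memory edges between pairs of memory instructions, walking
    // pairs in order (the reversed-edge handling for backward sinks is
    // omitted in this sketch).
    for (size_t S = 0; S < Region.size(); ++S)
      for (size_t T = S + 1; T < Region.size(); ++T)
        if (Region[S]->TouchesMemory && Region[T]->TouchesMemory &&
            MayDepend(Region[S], Region[T]))
          Succs[Region[S]].insert(Region[T]);
  }
};
```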

Authored By: bmahjour

Reviewer: Meinersbur, fhahn, myhsu, xtian, dmgreen, kbarton, jdoerfert

Reviewed By: Meinersbur, fhahn, myhsu

Subscribers: ychen, arphaman, simoll, a.elovikov, mgorny, hiraditya, jfb, wuzish, llvm-commits, jsji, Whitney, etiotto

Tag: #llvm

Differential Revision: https://reviews.llvm.org/D65350

llvm-svn: 372238
2019-09-18 17:43:45 +00:00
Sanjay Patel e406a3f2d6 [InstSimplify] add tests for fma/fmuladd; NFC
llvm-svn: 372236
2019-09-18 17:27:02 +00:00
Wei Mi 5f7e822dc7 [SampleFDO] Minimize performance impact when profile-sample-accurate
is enabled.

We can save memory and reduce binary size significantly by enabling
ProfileSampleAccurate. However when ProfileSampleAccurate is true,
function without sample will be regarded as cold and this could
potentially cause performance regression.

To minimize the potential negative performance impact, we want to be
a little conservative here: if a function shows up in the profile, whether
as an outline instance, an inline instance, or a call target, we treat the
function as not cold. This handles cases where most callsites of a function
are inlined in the sampled binary (so the outline copy doesn't get any
samples) but not inlined in the current build (because of source code
drift, imprecise debug information, or because the callsites are all cold
individually but not cold accumulatively...), so that an outline function
showing up as cold in the sampled binary will actually not be cold after
the current build. After the change, such a function will be treated as
not cold even when profile-sample-accurate is enabled.

At the same time we lower the hot criteria of the callsiteIsHot check when
profile-sample-accurate is enabled. callsiteIsHot is used to determine
whether a callsite is hot and qualifies for early inlining. When
profile-sample-accurate is enabled, functions without a profile will be
regarded as cold and much less inlining will happen in the CGSCC inlining
pass, so we can worry less about size increase and be more aggressive in
allowing early inlining for warm callsites, which helps performance
overall.

Differential Revision: https://reviews.llvm.org/D67561

llvm-svn: 372232
2019-09-18 16:06:28 +00:00
Krasimir Georgiev 2f1bba7fd0 Revert "[AArch64][DebugInfo] Do not recompute CalleeSavedStackSize"
Summary:
This reverts commit r372204.

This change causes build bot failures under msan:
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/35236/steps/check-llvm%20msan/logs/stdio:

```
FAIL: LLVM :: DebugInfo/AArch64/asan-stack-vars.mir (19531 of 33579)
******************** TEST 'LLVM :: DebugInfo/AArch64/asan-stack-vars.mir' FAILED ********************
Script:
--
: 'RUN: at line 1';   /b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/llc -O0 -start-before=livedebugvalues -filetype=obj -o - /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/test/DebugInfo/AArch64/asan-stack-vars.mir | /b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/llvm-dwarfdump -v - | /b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/FileCheck /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/test/DebugInfo/AArch64/asan-stack-vars.mir
--
Exit Code: 2

Command Output (stderr):
--
==62894==WARNING: MemorySanitizer: use-of-uninitialized-value
    #0 0xdfcafb in llvm::AArch64FrameLowering::resolveFrameOffsetReference(llvm::MachineFunction const&, int, bool, unsigned int&, bool, bool) const /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp:1658:3
    #1 0xdfae8a in resolveFrameIndexReference /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp:1580:10
    #2 0xdfae8a in llvm::AArch64FrameLowering::getFrameIndexReference(llvm::MachineFunction const&, int, unsigned int&) const /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp:1536
    #3 0x46642c1 in (anonymous namespace)::LiveDebugValues::extractSpillBaseRegAndOffset(llvm::MachineInstr const&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/LiveDebugValues.cpp:582:21
    #4 0x4647cb3 in transferSpillOrRestoreInst /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/LiveDebugValues.cpp:883:11
    #5 0x4647cb3 in process /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/LiveDebugValues.cpp:1079
    #6 0x4647cb3 in (anonymous namespace)::LiveDebugValues::ExtendRanges(llvm::MachineFunction&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/LiveDebugValues.cpp:1361
    #7 0x463ac0e in (anonymous namespace)::LiveDebugValues::runOnMachineFunction(llvm::MachineFunction&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/LiveDebugValues.cpp:1415:18
    #8 0x4854ef0 in llvm::MachineFunctionPass::runOnFunction(llvm::Function&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/MachineFunctionPass.cpp:73:13
    #9 0x53b0b01 in llvm::FPPassManager::runOnFunction(llvm::Function&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/IR/LegacyPassManager.cpp:1648:27
    #10 0x53b15f6 in llvm::FPPassManager::runOnModule(llvm::Module&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/IR/LegacyPassManager.cpp:1685:16
    #11 0x53b298d in runOnModule /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/IR/LegacyPassManager.cpp:1750:27
    #12 0x53b298d in llvm::legacy::PassManagerImpl::run(llvm::Module&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/IR/LegacyPassManager.cpp:1863
    #13 0x905f21 in compileModule(char**, llvm::LLVMContext&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/tools/llc/llc.cpp:601:8
    #14 0x8fdc4e in main /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/tools/llc/llc.cpp:355:22
    #15 0x7f67673632e0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202e0)
    #16 0x882369 in _start (/b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/llc+0x882369)

MemorySanitizer: use-of-uninitialized-value /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp:1658:3 in llvm::AArch64FrameLowering::resolveFrameOffsetReference(llvm::MachineFunction const&, int, bool, unsigned int&, bool, bool) const
Exiting
error: -: The file was not recognized as a valid object file
FileCheck error: '-' is empty.
FileCheck command line:  /b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/FileCheck /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/test/DebugInfo/AArch64/asan-stack-vars.mir
```

Reviewers: bkramer

Reviewed By: bkramer

Subscribers: sdardis, aprantl, kristof.beyls, jrtc27, atanasyan, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67710

llvm-svn: 372228
2019-09-18 14:42:09 +00:00
Sanjay Patel d46bf63fbb [SimplifyLibCalls] fix crash with empty function name (PR43347)
...and improve some variable names while here.

https://bugs.llvm.org/show_bug.cgi?id=43347

llvm-svn: 372227
2019-09-18 14:33:40 +00:00
Hans Wennborg 40fdacbf4c Follow-up to r372209: Use single quotes for host_ldflags in the lit config
HOST_LDFLAGS is now using double quotes, and that would break the lit
config file.

llvm-svn: 372226
2019-09-18 14:12:59 +00:00
Jay Foad c92e51d84b [SDA] Don't stop divergence propagation at the IPD.
Summary:
This fixes B42473 and B42706.

This patch makes the SDA propagate branch divergence until the end of the RPO traversal. Before, the SyncDependenceAnalysis propagated divergence only until the IPD in rpo order. RPO is incompatible with post dominance in the presence of loops. This made the SDA crash because blocks were missed in the propagation.

Reviewers: foad, nhaehnle

Reviewed By: foad

Subscribers: jvesely, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65274

llvm-svn: 372223
2019-09-18 13:40:22 +00:00
Simon Atanasyan 9c36de99ca [mips] Pass "xgot" flag as a subtarget feature
We need "xgot" flag in the MipsAsmParser to implement correct expansion
of some pseudo instructions in case of using 32-bit GOT (XGOT).
MipsAsmParser does not have reference to MipsSubtarget but has a
reference to "feature bit set".

llvm-svn: 372220
2019-09-18 12:24:57 +00:00
Simon Atanasyan 1ebdbad475 [mips] Mark tests for lw/sw expansion in PIC by a separate "check prefix". NFC
That simplifies adding XGOT tests later.

llvm-svn: 372219
2019-09-18 12:24:30 +00:00
Tim Renouf 1786117111 [AMDGPU] Allow FP inline constant in v_madak_f16 and v_fmaak_f16
Differential Revision: https://reviews.llvm.org/D67680

Change-Id: Ic38f47cb2079c2c1070a441b5943854844d80a7c
llvm-svn: 372208
2019-09-18 09:32:06 +00:00
Sander de Smalen dc2a7f5b39 [AArch64][DebugInfo] Do not recompute CalleeSavedStackSize
This patch fixes a bug exposed by D65653 where a subsequent invocation
of `determineCalleeSaves` ends up with a different size for the callee
save area, leading to different frame-offsets in debug information.

In the invocation by PEI, `determineCalleeSaves` tries to determine
whether it needs to spill an extra callee-saved register to get an
emergency spill slot. To do this, it calls 'estimateStackSize' and
manually adds the size of the callee-saves to this. PEI then allocates
the spill objects for the callee saves and the remaining frame layout
is calculated accordingly.

A second invocation in LiveDebugValues causes estimateStackSize to return
the size of the stack frame including the callee-saves. Given that the
size of the callee-saves is added to this, these callee-saves are counted
twice, which leads `determineCalleeSaves` to believe the stack has
become big enough to require spilling an extra callee-save as emergency
spillslot. It then updates CalleeSavedStackSize with a larger value.

Since CalleeSavedStackSize is used in the calculation of the frame
offset in getFrameIndexReference, this leads to incorrect offsets for
variables/locals when this information is recalculated after PEI.
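
The double count can be seen with toy numbers (illustrative values only):

```
#include <cassert>

int main() {
  const unsigned CSSize = 32; // callee-save area
  const unsigned Locals = 64;

  // PEI: estimateStackSize() does not yet include the callee-save spill
  // objects, so manually adding their size once is correct.
  unsigned EstimateAtPEI = Locals + CSSize; // 96

  // LiveDebugValues: the frame now already contains the callee-save area,
  // so adding CSSize again counts it twice and inflates the estimate.
  unsigned FrameAfterPEI = Locals + CSSize;        // 96
  unsigned EstimateAtLDV = FrameAfterPEI + CSSize; // 128
  assert(EstimateAtLDV == EstimateAtPEI + CSSize); // off by CSSize
}
```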

Reviewers: omjavaid, eli.friedman, thegameg, efriedma

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D66935

llvm-svn: 372204
2019-09-18 09:02:44 +00:00
Craig Topper 93e1f73b6b [X86] Break non-power of 2 vXi1 vectors into scalars for argument passing with avx512.
This generates worse code, but matches what is done for avx2 and
prevents crashes when more arguments are passed than we have
registers for.

llvm-svn: 372200
2019-09-18 06:06:11 +00:00
Craig Topper 11082d5366 [X86] Add test case for passing a v17i1 vector with avx512
llvm-svn: 372199
2019-09-18 06:06:07 +00:00
Yonghong Song c68ee0ce70 [BPF] Permit all user-instructed offset relocations
Currently, not all user-specified relocations
(with the clang intrinsic __builtin_preserve_access_index())
turn into relocations.

In the current implementation, a __builtin_preserve_access_index()
chain is turned into a relocation only if the result of the clang
intrinsic is used in a function call or a nonzero offset computation
of getelementptr. For all other cases, the relocation request
is ignored and the __builtin_preserve_access_index() is turned
into regular getelementptr instructions.
The main reason is to mimic bpf_probe_read() requirement.

But there are other use cases where relocatable offset is
generated but not used for bpf_probe_read(). This patch
relaxed previous constraints when to generate relocations.
Now, all user __builtin_preserve_access_index() will have
relocations generated.
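
For reference, a typical use of the builtin looks like this (a sketch; the
struct and field names are made up, and the relocations are only meaningful
when compiling with Clang for BPF):

```
// The builtin wraps an address computation so the field offset is
// recorded as a relocation rather than a hard-coded GEP offset. After
// this change, such an access produces a relocation even when it only
// feeds an ordinary load, not just when it feeds bpf_probe_read().
struct task { int pid; int tgid; };

int get_tgid(struct task *t) {
  return *__builtin_preserve_access_index(&t->tgid);
}
```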

Differential Revision: https://reviews.llvm.org/D67688

llvm-svn: 372198
2019-09-18 03:49:07 +00:00
Teresa Johnson fd2044f299 [PGO] Change hardcoded thresholds for cold/inlinehint to use summary
Summary:
The PGO counter reading will add cold and inlinehint (hot) attributes
to functions that are very cold or hot. This was using hardcoded
thresholds, instead of the profile summary cutoffs which are used in
other hot/cold detection and are more dynamic and adaptable. Switch
to using the summary-based cold/hot detection.

The hardcoded limits were causing some code that had a medium level of
hotness (per the summary) to be incorrectly marked with a cold
attribute, blocking inlining.

Reviewers: davidxl

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67673

llvm-svn: 372189
2019-09-17 23:12:13 +00:00
Eli Friedman ddf5e86c22 [ARM] VFPv2 only supports 16 D registers.
r361845 changed the way we handle "D16" vs. "D32" targets; there used to
be a negative "d16" which removed instructions from the instruction set,
and now there's a "d32" feature which adds instructions to the
instruction set.  This is good, but there was an oversight in the
implementation: the behavior of VFPv2 was changed.  In particular, the
"vfp2" feature was changed to imply "d32". This is wrong: VFPv2 only
supports 16 D registers.

In practice, this means if you specify -mfpu=vfpv2, the compiler will
generate illegal instructions.

This patch gets rid of "vfp2d16" and "vfp2d16sp", and fixes "vfp2" and
"vfp2sp" so they don't imply "d32".

Differential Revision: https://reviews.llvm.org/D67375

llvm-svn: 372186
2019-09-17 21:42:38 +00:00
Reid Kleckner 23e872a3d0 [PGO] Don't use comdat groups for counters & data on COFF
For COFF, a comdat group is really a symbol marked
IMAGE_COMDAT_SELECT_ANY and zero or more other symbols marked
IMAGE_COMDAT_SELECT_ASSOCIATIVE. Typically the associative symbols in
the group are not external and are not referenced by other TUs, they are
things like debug info, C++ dynamic initializers, or other section
registration schemes. The Visual C++ linker reports a duplicate symbol
error for symbols marked IMAGE_COMDAT_SELECT_ASSOCIATIVE even if they
would be discarded after handling the leader symbol.

Fixes coverage-inline.cpp in check-profile after r372020.

llvm-svn: 372182
2019-09-17 21:10:49 +00:00
Jessica Paquette 8a4d9f04b5 [AArch64][GlobalISel] Support -tailcallopt
This adds support for `-tailcallopt` tail calls to CallLowering. This
piggy-backs off the changes from D67577, since doing it without a bit of
refactoring gets extremely ugly.

Support is basically ported from AArch64ISelLowering. The main difference here
is that tail calls in `-tailcallopt` change the ABI, so there's some extra
bookkeeping for the stack.

Show that we are correctly lowering these by updating tail-call.ll.

Also show that we don't do anything strange in general by updating
fastcc-reserved.ll, which passes `-tailcallopt`, but doesn't emit any tail
calls.

Differential Revision: https://reviews.llvm.org/D67580

llvm-svn: 372177
2019-09-17 20:24:23 +00:00
Roman Lebedev bed6e08e23 [NFC][InstCombine] More tests for "Dropping pointless masking before left shift" (PR42563)
While we already fold that pattern if the sum of shift amounts is not
smaller than the bitwidth, there's a painfully obvious generalization:
  https://rise4fun.com/Alive/F5R
I.e. the "sub of shift amounts" tells us how many bits will be left
in the output. If it's less than bitwidth, we simply need to
apply a mask, which is constant.
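
The identity is easy to spot-check with plain integers (a standalone
sketch of the fold for the `~(-1 << C1)` masking variant):

```
#include <cassert>
#include <cstdint>

int main() {
  // When C1 + C2 < 32, the pre-shift mask can be dropped in favor of a
  // constant mask applied to the shifted result.
  for (unsigned C1 = 1; C1 < 32; ++C1)
    for (unsigned C2 = 1; C1 + C2 < 32; ++C2)
      for (uint32_t X : {0u, 1u, 0xdeadbeefu, 0xffffffffu}) {
        uint32_t MaskedThenShifted = (X & ~(0xffffffffu << C1)) << C2;
        uint32_t ShiftedThenMasked = (X << C2) & ~(0xffffffffu << (C1 + C2));
        assert(MaskedThenShifted == ShiftedThenMasked);
      }
}
```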

llvm-svn: 372170
2019-09-17 19:32:11 +00:00
Bardia Mahjour 6476d7cf0b Revert "Data Dependence Graph Basics"
This reverts commit c98ec60993, which broke the sphinx-docs build.

llvm-svn: 372168
2019-09-17 19:22:01 +00:00
Bardia Mahjour c98ec60993 Data Dependence Graph Basics
Summary:
This is the first patch in a series of patches that will implement a data dependence graph in LLVM. Many of the ideas used in this implementation are based on the following paper:
D. J. Kuck, R. H. Kuhn, D. A. Padua, B. Leasure, and M. Wolfe (1981). DEPENDENCE GRAPHS AND COMPILER OPTIMIZATIONS.
This patch contains support for basic DDGs containing only atomic nodes (one node for each instruction). The edges are twofold: def-use edges and memory-dependence edges.
The implementation takes a list of basic blocks and only considers dependencies among instructions in those basic blocks. Any dependencies coming into or going out of instructions that do not belong to those basic blocks are ignored.

The algorithm for building the graph involves the following steps in order:

  1. For each instruction in the range of basic blocks to consider, create an atomic node in the resulting graph.
  2. For each node in the graph establish def-use edges to/from other nodes in the graph.
  3. For each pair of nodes containing memory instruction(s) create memory edges between them. This part of the algorithm goes through the instructions in lexicographical order and creates edges in reverse order if the sink of the dependence occurs before the source of it.

Authored By: bmahjour

Reviewer: Meinersbur, fhahn, myhsu, xtian, dmgreen, kbarton, jdoerfert

Reviewed By: Meinersbur, fhahn, myhsu

Subscribers: ychen, arphaman, simoll, a.elovikov, mgorny, hiraditya, jfb, wuzish, llvm-commits, jsji, Whitney, etiotto

Tag: #llvm

Differential Revision: https://reviews.llvm.org/D65350

llvm-svn: 372162
2019-09-17 18:55:44 +00:00
Craig Topper f9a89b6788 [X86] Simplify b2b KSHIFTL+KSHIFTR using demanded elts.
llvm-svn: 372155
2019-09-17 18:02:56 +00:00
Craig Topper f1ba94ade0 [X86] Call SimplifyDemandedVectorElts on KSHIFTL/KSHIFTR nodes during DAG combine.
llvm-svn: 372154
2019-09-17 18:02:52 +00:00
David Bolvansky 0c0de794f1 Reland "[SLC] Preserve attrs for strncpy(x, "", y) -> memset(align 1 x, '\0', y)"
llvm-svn: 372142
2019-09-17 17:12:24 +00:00
Nemanja Ivanovic 1461fb6e78 [PowerPC] Exploit single instruction load-and-splat for word and doubleword
We currently produce a load, followed by (possibly a move for integers and) a
splat as separate instructions. VSX has always had a splatting load for
doublewords, but as of Power9, we have it for words as well. This patch just
exploits these instructions.

Differential revision: https://reviews.llvm.org/D63624

llvm-svn: 372139
2019-09-17 16:45:20 +00:00
Alina Sbirlea 4e9082ef95 [MemorySSA] Fix phi insertion when inserting a def.
Summary:
When inserting a Def, the current algorithm walks edges backward
and inserts new Phis where needed. There may be additional Phis needed
in the IDF of the newly inserted Def and Phis.
Adding Phis in the IDF of the Def was handled in a previous patch, but we
may also need other Phis in the IDF of the newly added Phis.

Reviewers: george.burgess.iv

Subscribers: Prazek, sanjoy.google, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67637

llvm-svn: 372138
2019-09-17 16:33:35 +00:00
Alina Sbirlea 6b2d1346d8 [MemorySSA] Update MSSA for non-conventional AA.
Summary:
Regularly, when moving an instruction that may not read or write memory,
the instruction is not modelled in MSSA, so no action is necessary.
For a non-conventional AA pipeline, MSSA needs to check explicitly when
creating accesses, so as not to model instructions that may not read or
write memory.

Reviewers: george.burgess.iv

Subscribers: Prazek, sanjoy.google, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67562

llvm-svn: 372137
2019-09-17 16:31:37 +00:00
David Green 91724b8530 [ARM] Add a SelectTAddrModeImm7 for MVE narrow loads and stores
We were previously using the SelectT2AddrModeImm7 for both normal and narrowing
MVE loads/stores. As the narrowing instructions do not accept sp as a register,
it makes little sense to optimise a FrameIndex into the load, only to have to
recover that later on. This adds a SelectTAddrModeImm7 which does not do that
folding, and uses it for narrowing load/store patterns.

Differential Revision: https://reviews.llvm.org/D67489

llvm-svn: 372134
2019-09-17 15:32:28 +00:00
David Green c42ca16cfa [ARM] Fixup pipeline test. NFC
llvm-svn: 372133
2019-09-17 15:25:24 +00:00
David Green 22a2209433 [ARM] Reserve an emergency spill slot for fp16 addressing modes that need it
Similar to D67327, but this time for the FP16 VLDR and VSTR instructions that
use the AddrMode5FP16 addressing mode. We need to reserve an emergency spill
slot for instructions that will be out of range to use sp directly.
AddrMode5FP16 is 8 bits with a scale of 2.

Differential Revision: https://reviews.llvm.org/D67483

llvm-svn: 372132
2019-09-17 15:23:09 +00:00
Sam Parker 1d9ba08543 [ARM] Fix for buildbots
Remove setPreservesCFG from ARMConstantIslandPass and add a couple
of -verify-machine-dom-info instances into the existing codegen
tests.

llvm-svn: 372126
2019-09-17 14:21:36 +00:00
Krasimir Georgiev bdff164e0e Revert "[SLC] Preserve attrs for strncpy(x, "", y) -> memset(align 1 x, '\0', y)"
Summary:
This reverts commit r372101.

Causes ASAN build bot failures:

http://lab.llvm.org:8011/builders/sanitizer-ppc64be-linux/builds/14176
From http://lab.llvm.org:8011/builders/sanitizer-ppc64be-linux/builds/14176/steps/64-bit%20check-asan/logs/stdio:

```
[ RUN      ] AddressSanitizer.StrNCatOOBTest
/home/buildbots/ppc64be-sanitizer/sanitizer-ppc64be/build/llvm-project/compiler-rt/lib/asan/tests/asan_str_test.cpp:462: Failure
Death test: strncat(to - 1, from, 0)
    Result: failed to die.
```

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67658

llvm-svn: 372125
2019-09-17 14:15:23 +00:00
George Rimar a3569aced0 [llvm-readobj/llvm-objdump] - Improve how the tools locate the dynamic table and report warnings about it.
Before this patch we gave priority to a dynamic table found
via the section header.

It was discussed (here: https://reviews.llvm.org/D67078?id=218356#inline-602082)
that preferring the table from PT_DYNAMIC is probably better,
because it is what the runtime loader sees.

This patch makes the table from PT_DYNAMIC the first choice if it is available.
But it also adds logic to fall back to SHT_DYNAMIC if the table from the dynamic segment is
broken, or to use no table if both are broken.

It adds a few more diagnostic warnings for the logic above.

Differential revision: https://reviews.llvm.org/D67547

llvm-svn: 372122
2019-09-17 13:58:46 +00:00
Sam Parker f1d069e54d [ARM] Fix for buildbots
Add -verify-machineinstrs and update the remaining low overhead loop
tests.

llvm-svn: 372121
2019-09-17 13:46:26 +00:00
David Green 1ff9553057 [ARM] Fix for MVE load/store stack accesses
MVE loads and stores have a 7 bit immediate range, scaled by the length of the type. This needs to be taught to the stack estimation code to ensure that an emergency spill slot is reserved in case we run out of registers when materialising stack indices.

Also the narrowing loads/stores can be created with frame indices even though they do not accept SP as a register. In those cases we need to make sure we have an emergency register to use as the frame base, as SP can never be used.
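
A rough sketch of the encodability rule described above (assuming a
sign/magnitude style 7-bit immediate; the exact encoding details are
simplified and the helper name is made up):

```
// Offset must be a multiple of the access size and, once scaled, fit the
// 7-bit immediate. Anything else needs a register to form the address,
// hence the emergency spill slot.
bool fitsMVEImm7(int Offset, int AccessBytes) {
  if (Offset % AccessBytes != 0)
    return false;
  int Scaled = Offset / AccessBytes;
  return Scaled >= -127 && Scaled <= 127;
}
```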

Differential Revision: https://reviews.llvm.org/D67327

llvm-svn: 372114
2019-09-17 12:58:51 +00:00
Sam Parker 36c922278e [ARM][LowOverheadLoops] Add LR def safety check
Converting the *LoopStart pseudo instructions into DLS/WLS results in
LR being defined. These instructions were inserted on the assumption
that LR would already contain the loop counter because a mov is
introduced during ISel as the consumers in the loop can only use
LR. That assumption proved wrong!

So perform a safety check, finding an appropriate place to insert the
DLS/WLS instructions or revert if this isn't possible.

Differential Revision: https://reviews.llvm.org/D67539

llvm-svn: 372111
2019-09-17 12:19:32 +00:00
George Rimar 589293800a [llvm-readobj] - Test PPC64 relocations properly.
We had a precompiled binary committed and not all of the relocations
supported were tested. This patch fixes this.

Differential revision: https://reviews.llvm.org/D67617

llvm-svn: 372110
2019-09-17 12:05:39 +00:00
George Rimar 82d83733dd [obj2yaml] - Support PPC64 relocation types.
Currently we do not support them and fail with llvm_unreachable.
This is not the only target we do not support, and it also seems we are
missing tests for those we already have. But I needed this one for another
patch, so I am posting it separately.

Relocation names are taken from llvm\include\llvm\BinaryFormat\ELFRelocs\PowerPC64.def

Differential revision: https://reviews.llvm.org/D67615

llvm-svn: 372109
2019-09-17 12:00:55 +00:00
George Rimar cfc0ba3852 [yaml2obj/obj2yaml] - Allow setting arbitrary values for e_machine.
Currently we only allow using known named constants
for the `Machine` field in YAML documents.

This patch allows using any numbers (valid or "unknown")
and adds test cases for the current and new functionality.

With this it is possible to write test cases for truly unknown
EM_* targets.

Differential revision: https://reviews.llvm.org/D67652

llvm-svn: 372108
2019-09-17 11:51:26 +00:00
Luis Marques 3d0fbafd0b [RISCV] Switch to the Machine Scheduler
Most of the test changes are trivial instruction reorderings and differing
register allocations, without any obvious performance impact.

Differential Revision: https://reviews.llvm.org/D66973

llvm-svn: 372106
2019-09-17 11:15:35 +00:00
Johannes Doerfert 3ab9e8b818 [Attributor][Fix] Initialize the cache prior to using it
Summary:
There were segfaults as we modified and iterated the instruction maps in
the cache at the same time. This was happening because we created new
instructions while we populated the cache. This fix changes the order
in which we perform these actions. First, the caches for the whole
module are created, then we start to create abstract attributes.

I don't have a unit test but the LLVM test suite exposes this problem.

Reviewers: uenoku, sstefan1

Subscribers: hiraditya, bollu, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67232

llvm-svn: 372105
2019-09-17 10:52:41 +00:00
Luis Marques 2d550d19b3 Revert Patch from Phabricator
This reverts r372092 (git commit e38695a025)

llvm-svn: 372104
2019-09-17 10:52:09 +00:00
David Bolvansky ded48e93e6 [SLC] Preserve attrs for strncpy(x, "", y) -> memset(align 1 x, '\0', y)
llvm-svn: 372101
2019-09-17 10:25:38 +00:00
David Bolvansky be2487a2ba [InstCombine] Annotate strdup with deref_or_null
llvm-svn: 372098
2019-09-17 10:12:48 +00:00
David Bolvansky 3d33e97be6 [NFC] Updated test
llvm-svn: 372093
2019-09-17 09:45:52 +00:00
Luis Marques e38695a025 Patch from Phabricator
llvm-svn: 372092
2019-09-17 09:43:08 +00:00
David Bolvansky e80fcf0340 [SimplifyLibCalls] Mark known arguments with nonnull
Reviewers: efriedma, jdoerfert

Reviewed By: jdoerfert

Subscribers: ychen, rsmith, joerg, aaron.ballman, lebedev.ri, uenoku, jdoerfert, hfinkel, javed.absar, spatel, dmgreen, llvm-commits

Differential Revision: https://reviews.llvm.org/D53342

llvm-svn: 372091
2019-09-17 09:32:52 +00:00
George Rimar 48de660bbf [llvm-readobj] - Fix BB after r372087.
Seems I forgot to update the number of bytes checked.

llvm-svn: 372089
2019-09-17 09:26:49 +00:00
Fangrui Song 1ecba6f8ef [llvm-ar] Parse 'h' and '-h': display help and exit
Support `llvm-ar h` and `llvm-ar -h` because they may be what users try
at first. Note, operation 'h' is undocumented in GNU ar.

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D67560

llvm-svn: 372088
2019-09-17 09:25:52 +00:00
George Rimar de1bef0b1b [llvm-readobj] - Fix a TODO in elf-reloc-zero-name-or-value.test.
The "TODO" mentioned was:

"Add test for symbol with no name but with a value once yaml2obj allows
referencing symbols with no name from relocations."

We can do it now.

Differential revision: https://reviews.llvm.org/D67609

llvm-svn: 372087
2019-09-17 09:12:10 +00:00
Alexander Timofeev 6524a7a2b9 [AMDGPU]: PHI Elimination hooks added for custom COPY insertion. Fixed
Differential Revision: https://reviews.llvm.org/D67101

Reviewers: rampitec, vpykhtin
llvm-svn: 372086
2019-09-17 09:08:58 +00:00
Sam Parker 95b28a4c72 [ARM] LE support in ConstantIslands
The low-overhead branch extension provides a loop-end 'LE' instruction
that performs no decrement nor compare, it just jumps backwards. This
patch modifies the constant islands pass to try to insert LE
instructions in place of a Thumb2 conditional branch, instead of
shrinking it. This only happens if a cmp can be converted to a cbn/z
and used to exit the loop.

Differential Revision: https://reviews.llvm.org/D67404

llvm-svn: 372085
2019-09-17 09:08:05 +00:00
Florian Hahn 1bd58870e5 [LoopUnroll] Use LoopSize+1 as threshold, to allow unrolling loops matching LoopSize.
We use `< UP.Threshold` later on, so we should use LoopSize + 1 to
allow unrolling if the result won't exceed the loop size.
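
The off-by-one in miniature (a sketch mirroring the `<` comparison, not the
actual LoopUnroll code):

```
bool allowedToUnroll(unsigned LoopSize, unsigned Threshold) {
  return LoopSize < Threshold; // strict, like `< UP.Threshold`
}
// allowedToUnroll(N, N)     -> false: a loop of exactly N was rejected
// allowedToUnroll(N, N + 1) -> true:  the fix, threshold = LoopSize + 1
```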

Fixes PR43305.

Reviewers: efriedma, dmgreen, paquette

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D67594

llvm-svn: 372084
2019-09-17 09:02:48 +00:00
George Rimar a5dfa70806 [llvm-objcopy] - Remove python invocations from 2 test cases.
It is possible to use yaml2obj to create sections with overlapping sh_offset now.
This patch does that.

Differential revision: https://reviews.llvm.org/D67610

llvm-svn: 372081
2019-09-17 08:38:53 +00:00
Hideto Ueno 30d86f1858 [Attributor] Use Alias Analysis in noalias callsite argument deduction
Summary: This patch adds a check of alias analysis in `noalias` callsite argument deduction.

Reviewers: jdoerfert, sstefan1

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67604

llvm-svn: 372075
2019-09-17 06:53:27 +00:00
Craig Topper 95aea74494 [X86] Split oversized vXi1 vector arguments and return values into scalars on avx512 targets.
Previously we tried to split them into narrower v64i1 or v16i1
pieces that each got promoted to vXi8 and then passed in a zmm
or xmm register. But this crashes when you need to pass more
pieces than there are registers reserved for argument passing.

The scalarizing done here generates much longer and slower code,
but is consistent with the behavior of avx2 and earlier targets
for these types.

Fixes PR43323.

llvm-svn: 372069
2019-09-17 04:41:14 +00:00
Craig Topper 769dd59a27 [X86] Allow masked VBROADCAST instructions to be turned into BLENDM with a broadcast load to avoid a copy.
The BLENDM instructions allow 2 sources and an independent
destination, while masked VBROADCAST has the destination tied
to the source.

llvm-svn: 372068
2019-09-17 04:41:10 +00:00
Craig Topper 2cc57bedd5 [X86] Add support for commuting EVEX VCMP instructons with any immediate value.
Previously we limited to the EQ/NE/TRUE/FALSE/ORD/UNORD immediates.

llvm-svn: 372067
2019-09-17 04:41:05 +00:00
Craig Topper d51576a3f0 [X86] Add test case for missed opportunity to commute a VCMP instruction after unfolding one load in order to fold another load.
llvm-svn: 372066
2019-09-17 04:41:01 +00:00
Craig Topper 359918dadf [X86] Enable commuting of EVEX VCMP for all immediate values during isel.
llvm-svn: 372065
2019-09-17 04:40:58 +00:00
David Blaikie f27367cd32 llvm-reduce: Clean out previous test temp/output dir, since it was a dir and now it's used as just a single file
llvm-svn: 372054
2019-09-16 23:56:26 +00:00
Amara Emerson 9d64721ca5 [GlobalISel] Partially revert r371901.
r371901 was overeager: widenScalarDst() and the like in the legalizer
attempt to increment the given insert point in order to add new instructions
after the instruction currently being legalized. In cases where the insertion
point is not exactly the current instruction, callers need to compensate for
the behaviour by decrementing the insertion iterator before calling them.
It's not a nice state of affairs; for now, just undo the problematic parts
of the change.

llvm-svn: 372050
2019-09-16 23:46:03 +00:00
David Blaikie cb4aee7318 llvm-reduce: Make tests shell-independent by passing the interpreter on the command line rather than using #! in the test file
llvm-svn: 372049
2019-09-16 23:41:19 +00:00
Bardia Mahjour 474c713fc7 [NFC] Test commit access
llvm-svn: 372033
2019-09-16 20:44:15 +00:00
Lei Huang bfb197d7a3 [PowerPC] Cust lower fpext v2f32 to v2f64 from extract_subvector v4f32
This is a follow up patch from https://reviews.llvm.org/D57857 to handle
extract_subvector v4f32.  For cases where we fpext of v2f32 to v2f64 from
extract_subvector we currently generate on P9 the following:

  lxv 0, 0(3)
  xxsldwi 1, 0, 0, 1
  xscvspdpn 2, 0
  xxsldwi 3, 0, 0, 3
  xxswapd 0, 0
  xscvspdpn 1, 1
  xscvspdpn 3, 3
  xscvspdpn 0, 0
  xxmrghd 0, 0, 3
  xxmrghd 1, 2, 1
  stxv 0, 0(4)
  stxv 1, 0(5)

This patch custom lower it to the following sequence:

  lxv 0, 0(3)       # load the v4f32 <w0, w1, w2, w3>
  xxmrghw 2, 0, 0   # Produce the following vector <w0, w0, w1, w1>
  xxmrglw 3, 0, 0   # Produce the following vector <w2, w2, w3, w3>
  xvcvspdp 2, 2     # FP-extend to <d0, d1>
  xvcvspdp 3, 3     # FP-extend to <d2, d3>
  stxv 2, 0(5)      # Store <d0, d1> (%vecinit11)
  stxv 3, 0(4)      # Store <d2, d3> (%vecinit4)

Differential Revision: https://reviews.llvm.org/D61961

llvm-svn: 372029
2019-09-16 20:04:15 +00:00
Reid Kleckner 32837a0c93 [PGO] Use linkonce_odr linkage for __profd_ variables in comdat groups
This fixes relocations against __profd_ symbols in discarded sections,
which is PR41380.

In general, instrumentation happens very early, and optimization and
inlining happens afterwards. The counters for a function are calculated
early, and after inlining, counters for an inlined function may be
widely referenced by other functions.

For C++ inline functions of all kinds (linkonce_odr &
available_externally mainly), instr profiling wants to deduplicate these
__profc_ and __profd_ globals. Otherwise the binary would be quite
large.

I made __profd_ and __profc_ comdat in r355044, but I chose to make
__profd_ internal. At the time, I was only dealing with coverage, and in
that case, none of the instrumentation needs to reference __profd_.
However, if you use PGO, then instrumentation passes add calls to
__llvm_profile_instrument_range which reference __profd_ globals. The
solution is to make these globals externally visible by using
linkonce_odr linkage for data as was done for counters.

This is safe because PGO adds a CFG hash to the names of the data and
counter globals, so if different TUs have different globals, they will
get different data and counter arrays.

Reviewers: xur, hans

Differential Revision: https://reviews.llvm.org/D67579

llvm-svn: 372020
2019-09-16 18:49:09 +00:00
Roman Lebedev 69911b8d01 [ARM][Codegen] Autogenerate arm-cgp-casts.ll test.
Apparently it got broken by r372009 while I thought it was r372012.

llvm-svn: 372019
2019-09-16 18:28:22 +00:00
Simon Pilgrim 3df0daddfd [X86][AVX] matchShuffleWithSHUFPD - add support for zeroable operands
Determine if all of the uses of LHS/RHS operands can be replaced with a zero vector.

llvm-svn: 372013
2019-09-16 17:30:33 +00:00
David Green 8d21460dc5 [ARM] A predicate cast of a predicate cast is a predicate cast
This adds some very basic folding of PREDICATE_CASTS, removing cases when they
are chained together. These would already be removed eventually, as these are
lowered to copies. This just allows it to happen earlier, which can help other
simplifications.

Differential Revision: https://reviews.llvm.org/D67591

llvm-svn: 372012
2019-09-16 17:29:07 +00:00
Roman Lebedev 10151f6618 [SimplifyCFG] FoldTwoEntryPHINode(): consider *total* speculation cost, not per-BB cost
Summary:
Previously, if the threshold was 2, we were willing to speculatively
execute 2 cheap instructions in both basic blocks (thus we were willing
to speculatively execute cost = 4), but weren't willing to speculate
when one BB had 3 instructions and the other one had no instructions,
even though that would have a total cost of 3.

This looks inconsistent to me.
I don't think `cmov`-like instructions will start executing
until both of their inputs are available: https://godbolt.org/z/zgHePf
So i don't see why the existing behavior is the correct one.

Also, let's add its own `cl::opt` for this threshold,
with default=4, so it is not stricter than the previous threshold:
it will allow folding when there are 2 BBs each with cost=2.
And since the logic has changed, it will also allow folding when
one BB has cost=3 and the other cost=1, or when there is only one BB with cost=4.
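
The before/after can be summarized in two predicates (illustrative only,
not the actual SimplifyCFG code):

```
// Old: each block compared against the per-BB threshold independently.
// speculateOld(2, 2, 2) == true, but speculateOld(3, 0, 2) == false.
bool speculateOld(unsigned CostThen, unsigned CostElse, unsigned PerBB) {
  return CostThen <= PerBB && CostElse <= PerBB;
}
// New: one combined budget. speculateNew(3, 0, 4) == true, and the
// previous 2+2 case still fits.
bool speculateNew(unsigned CostThen, unsigned CostElse, unsigned Budget) {
  return CostThen + CostElse <= Budget;
}
```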

This is an alternative solution to D65148:
This fix is mainly motivated by `signbit-like-value-extension.ll` test.
That pattern comes up in JPEG decoding, see e.g.
`Figure F.12 – Extending the sign bit of a decoded value in V`
of `ITU T.81` (JPEG specification).
That branch is not predictable, and it is within the innermost loop,
so the fact that that pattern ends up being stuck with a branch
instead of `select` (i.e. `CMOV` for x86) is unlikely to be beneficial.

This has great results on the final assembly (vanilla test-suite + RawSpeed): (metric pass - D67240)
| metric                                 |     old |     new | delta |      % |
| x86-mi-counting.NumMachineFunctions    |   37720 |   37721 |     1 |  0.00% |
| x86-mi-counting.NumMachineBasicBlocks  |  773545 |  771181 | -2364 | -0.31% |
| x86-mi-counting.NumMachineInstructions | 7488843 | 7486442 | -2401 | -0.03% |
| x86-mi-counting.NumUncondBR            |  135770 |  135543 |  -227 | -0.17% |
| x86-mi-counting.NumCondBR              |  423753 |  422187 | -1566 | -0.37% |
| x86-mi-counting.NumCMOV                |   24815 |   25731 |   916 |  3.69% |
| x86-mi-counting.NumVecBlend            |      17 |      17 |     0 |  0.00% |

We significantly decrease basic block count, notably decrease instruction count,
significantly decrease branch count and very significantly increase `cmov` count.

Performance-wise, unsurprisingly, this has great effect on
target RawSpeed benchmark. I'm seeing 5 **major** improvements:
```
Benchmark                                                                                             Time             CPU      Time Old      Time New       CPU Old       CPU New
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Samsung/NX3000/_3184416.SRW/threads:8/process_time/real_time_pvalue                                 0.0000          0.0000      U Test, Repetitions: 49 vs 49
Samsung/NX3000/_3184416.SRW/threads:8/process_time/real_time_mean                                  -0.3064         -0.3064      226.9913      157.4452      226.9800      157.4384
Samsung/NX3000/_3184416.SRW/threads:8/process_time/real_time_median                                -0.3057         -0.3057      226.8407      157.4926      226.8282      157.4828
Samsung/NX3000/_3184416.SRW/threads:8/process_time/real_time_stddev                                -0.4985         -0.4954        0.3051        0.1530        0.3040        0.1534
Kodak/DCS760C/86L57188.DCR/threads:8/process_time/real_time_pvalue                                  0.0000          0.0000      U Test, Repetitions: 49 vs 49
Kodak/DCS760C/86L57188.DCR/threads:8/process_time/real_time_mean                                   -0.1747         -0.1747       80.4787       66.4227       80.4771       66.4146
Kodak/DCS760C/86L57188.DCR/threads:8/process_time/real_time_median                                 -0.1742         -0.1743       80.4686       66.4542       80.4690       66.4436
Kodak/DCS760C/86L57188.DCR/threads:8/process_time/real_time_stddev                                 +0.6089         +0.5797        0.0670        0.1078        0.0673        0.1062
Sony/DSLR-A230/DSC08026.ARW/threads:8/process_time/real_time_pvalue                                 0.0000          0.0000      U Test, Repetitions: 49 vs 49
Sony/DSLR-A230/DSC08026.ARW/threads:8/process_time/real_time_mean                                  -0.1598         -0.1598      171.6996      144.2575      171.6915      144.2538
Sony/DSLR-A230/DSC08026.ARW/threads:8/process_time/real_time_median                                -0.1598         -0.1597      171.7109      144.2755      171.7018      144.2766
Sony/DSLR-A230/DSC08026.ARW/threads:8/process_time/real_time_stddev                                +0.4024         +0.3850        0.0847        0.1187        0.0848        0.1175
Canon/EOS 77D/IMG_4049.CR2/threads:8/process_time/real_time_pvalue                                  0.0000          0.0000      U Test, Repetitions: 49 vs 49
Canon/EOS 77D/IMG_4049.CR2/threads:8/process_time/real_time_mean                                   -0.0550         -0.0551      280.3046      264.8800      280.3017      264.8559
Canon/EOS 77D/IMG_4049.CR2/threads:8/process_time/real_time_median                                 -0.0554         -0.0554      280.2628      264.7360      280.2574      264.7297
Canon/EOS 77D/IMG_4049.CR2/threads:8/process_time/real_time_stddev                                 +0.7005         +0.7041        0.2779        0.4725        0.2775        0.4729
Canon/EOS 5DS/2K4A9929.CR2/threads:8/process_time/real_time_pvalue                                  0.0000          0.0000      U Test, Repetitions: 49 vs 49
Canon/EOS 5DS/2K4A9929.CR2/threads:8/process_time/real_time_mean                                   -0.0354         -0.0355      316.7396      305.5208      316.7342      305.4890
Canon/EOS 5DS/2K4A9929.CR2/threads:8/process_time/real_time_median                                 -0.0354         -0.0356      316.6969      305.4798      316.6917      305.4324
Canon/EOS 5DS/2K4A9929.CR2/threads:8/process_time/real_time_stddev                                 +0.0493         +0.0330        0.3562        0.3737        0.3563        0.3681
```

That being said, it's always best-effort, so there will likely
be cases where this worsens things.

Reviewers: efriedma, craig.topper, dmgreen, jmolloy, fhahn, Carrot, hfinkel, chandlerc

Reviewed By: jmolloy

Subscribers: xbolva00, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67318

llvm-svn: 372009
2019-09-16 16:18:24 +00:00
Sanjay Patel 3961a143e1 [InstCombine] remove unneeded one-use checks for icmp fold
Related folds were added in:
rL125734
...the code comment about register pressure is discussed in
more detail in:
https://bugs.llvm.org/show_bug.cgi?id=2698

But 10 years later, perf testing bzip2 with this change now
shows a slight (0.2% average) improvement on Haswell although
that's probably within test noise.

Given that this is IR canonicalization, we shouldn't be worried
about register pressure though; the backend should be able to
adjust for that as needed.

This is part of solving PR43310 the theoretically right way:
https://bugs.llvm.org/show_bug.cgi?id=43310
...ie, if we don't cripple basic transforms, then we won't
need to add special-case code to detect larger patterns.

rL371940 and rL371981 are related patches in this series.

llvm-svn: 372007
2019-09-16 16:15:25 +00:00
Sanjay Patel 4d9d0f9cf5 [InstCombine] move tests for icmp+add; NFC
llvm-svn: 372004
2019-09-16 15:33:40 +00:00
Oliver Cruickshank ee6fbebbaf [ARM] Add patterns for BSWAP intrinsic on MVE
BSWAP can use the VREV instruction on MVE to produce better results than
expanding.

llvm-svn: 372002
2019-09-16 15:20:10 +00:00
Oliver Cruickshank e9510a6cad [ARM] Add patterns for bitreverse intrinsic on MVE
BITREVERSE can use the VBRSR which will reverse and right shift.
Shifting right by 0 will just reverse the bits.

llvm-svn: 372001
2019-09-16 15:20:03 +00:00
Oliver Cruickshank 5f799ef162 [ARM] Lower CTTZ on MVE
Lower CTTZ on MVE using VBRSR and VCLS which will reverse the bits and
count the leading zeros, equivalent to a count trailing zeros (CTTZ).
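
The equivalence this relies on is simple to demonstrate in scalar code
(using the GCC/Clang builtins; all test values are nonzero so the builtins
are well-defined):

```
#include <cassert>
#include <cstdint>

int main() {
  // Reversing the bits turns trailing zeros into leading zeros, so
  // cttz(x) == ctlz(bitreverse(x)).
  for (uint32_t X : {1u, 2u, 8u, 0x80000000u, 0xdeadbee0u}) {
    uint32_t Rev = 0;
    for (int I = 0; I < 32; ++I) // naive bit reverse
      Rev |= ((X >> I) & 1u) << (31 - I);
    assert(__builtin_ctz(X) == __builtin_clz(Rev));
  }
}
```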

llvm-svn: 372000
2019-09-16 15:19:56 +00:00
Oliver Cruickshank cd1a0b9271 [ARM] Add patterns for CTLZ on MVE
CTLZ intrinsic can use the VCLS instruction on MVE, which produces
better results than expanding.

llvm-svn: 371999
2019-09-16 15:19:49 +00:00
Sjoerd Meijer c2bafadd7a [LV] Add ARM MVE tail-folding tests
Now that the vectorizer can do tail-folding (rL367592), and the ARM backend
understands MVE masked loads/stores (rL371932), it's time to add the MVE
tail-folding equivalent of the X86 tests that I added.

llvm-svn: 371996
2019-09-16 14:56:26 +00:00
Matt Arsenault 07b8597656 AMDGPU/GlobalISel: Fix some broken run lines
llvm-svn: 371992
2019-09-16 14:14:40 +00:00
Matt Arsenault 1fc07d6648 AMDGPU/GlobalISel: Fix RegBankSelect for G_FRINT and G_FCEIL
llvm-svn: 371991
2019-09-16 14:14:37 +00:00
Matt Arsenault bf7524db35 AMDGPU/GlobalISel: Remove another illegal select test
llvm-svn: 371990
2019-09-16 14:14:31 +00:00
Sanjay Patel f201b1c918 [InstCombine] add/move tests for icmp with add operand; NFC
llvm-svn: 371988
2019-09-16 14:05:19 +00:00
David Green ce7328cb61 [ARM] Fold VCMP into VPT
MVE has VPT instructions, which perform the duties of both a VCMP and a VPST in
a single instruction, performing the compare and starting the VPT block in one.
This teaches the MVEVPTBlockPass to fold them, searching back through the
basic block for a valid VCMP and creating the VPT from its operands.

There are some changes to the VPT instructions to accommodate this, altering
the order of the operands to match the VCMP better, and changing P0 register
defs to be VPR defs, as is used in other places.

Differential Revision: https://reviews.llvm.org/D66577

llvm-svn: 371982
2019-09-16 13:02:41 +00:00
Sanjay Patel c5cd808156 [InstCombine] remove unneeded one-use checks for icmp fold
This fold and several others were added in:
rL125734 <https://reviews.llvm.org/rL125734>
...with no explanation for the one-use checks other than the code
comments about register pressure.

Given that this is IR canonicalization, we shouldn't be worried
about register pressure though; the backend should be able to
adjust for that as needed.

This is part of solving PR43310 the theoretically right way:
https://bugs.llvm.org/show_bug.cgi?id=43310
...ie, if we don't cripple basic transforms, then we won't
need to add special-case code to detect larger patterns.

rL371940 is a related patch in this series.

llvm-svn: 371981
2019-09-16 12:54:34 +00:00
Sanjay Patel 14ce3fde04 [InstCombine] add icmp tests with extra uses; NFC
llvm-svn: 371979
2019-09-16 12:19:18 +00:00
Kerry McLaughlin e55b3bf40e [SVE][Inline-Asm] Add constraints for SVE predicate registers
Summary:
Adds the following inline asm constraints for SVE:
  - Upl: One of the low eight SVE predicate registers, P0 to P7 inclusive
  - Upa: SVE predicate register with full range, P0 to P15

Reviewers: t.p.northover, sdesmalen, rovka, momchil.velikov, cameron.mcinally, greened, rengolin

Reviewed By: rovka

Subscribers: javed.absar, tschuett, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66524

llvm-svn: 371967
2019-09-16 09:45:27 +00:00
Sjoerd Meijer b1e1a26e8e [AArch64] Some more FP16 FMA pattern matching
After our previous machinecombiner exercises (rL371321, rL371818, rL371833), we
were still missing a few FP16 FMA patterns.

Differential Revision: https://reviews.llvm.org/D67576

llvm-svn: 371960
2019-09-16 07:32:13 +00:00
Matt Arsenault 255d157672 AMDGPU/GlobalISel: Remove illegal select tests
These fail in a release build.

llvm-svn: 371955
2019-09-16 04:21:10 +00:00
Matt Arsenault bc8de8a8da AMDGPU/GlobalISel: Select SMRD loads for more types
llvm-svn: 371954
2019-09-16 00:54:07 +00:00
Matt Arsenault 48b158acae AMDGPU/GlobalISel: RegBankSelect for kill
llvm-svn: 371953
2019-09-16 00:48:37 +00:00
Matt Arsenault 01c7f40de3 AMDGPU/GlobalISel: Legalize s1 source G_[SU]ITOFP
llvm-svn: 371952
2019-09-16 00:37:10 +00:00
Matt Arsenault 60169ed613 AMDGPU/GlobalISel: Set type on vgpr live in special arguments
Fixes assertion with workitem ID intrinsics used in non-kernel
functions.

llvm-svn: 371951
2019-09-16 00:33:00 +00:00
Matt Arsenault 9f52c1ea58 AMDGPU/GlobalISel: Select S16->S32 fptoint
llvm-svn: 371950
2019-09-16 00:32:56 +00:00
Matt Arsenault 0a6123595f AMDGPU/GlobalISel: Select s32->s16 G_[US]ITOFP
llvm-svn: 371949
2019-09-16 00:29:12 +00:00
Matt Arsenault f5d5cd205e AMDGPU/GlobalISel: Fix VALU s16 fneg
llvm-svn: 371948
2019-09-16 00:20:54 +00:00
Stefan Stipanovic 431141c5cc [Attributor] Heap-To-Stack Conversion
D53362 gives a prototype heap-to-stack conversion pass. With the addition of new attributes in the Attributor, this can now be revisited and improved. This will place it in the Attributor to make it easier to use new attributes (eg. nofree, nosync, willreturn, etc.) and other attributor features.

Reviewers: jdoerfert, uenoku, hfinkel, efriedma

Subscribers: lebedev.ri, xbolva00, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D65408

llvm-svn: 371942
2019-09-15 21:47:41 +00:00
Sanjay Patel 3daf168fa9 [InstCombine] remove unneeded one-use checks for icmp fold
This fold and several others were added in:
rL125734
...with no explanation for the one-use checks other than the code
comments about register pressure.

Given that this is IR canonicalization, we shouldn't be worried
about register pressure though; the backend should be able to
adjust for that as needed.

There are similar checks as noted with the TODO comments. I'm
hoping to remove those restrictions too, but if any of these
does cause a regression, it should be easier to correct by making
small, individual commits.

This is part of solving PR43310 the theoretically right way:
https://bugs.llvm.org/show_bug.cgi?id=43310
...ie, if we don't cripple basic transforms, then we won't
need to add special-case code to detect larger patterns.

llvm-svn: 371940
2019-09-15 20:56:34 +00:00
Sanjay Patel c77ad16f8e [InstCombine] add icmp tests with extra uses; NFC
llvm-svn: 371939
2019-09-15 20:13:27 +00:00
Jinsong Ji 07d824a7c3 [PowerPC][NFC] Add a testcase for fdiv expansion.
Pre-commit for following patch.

llvm-svn: 371938
2019-09-15 20:02:25 +00:00
David Green b325c05732 [ARM] Masked loads and stores
Masked loads and stores fit naturally with MVE, the instructions being easily
predicated. This adds lowering for the simple cases of masked loads and stores.
It does not yet deal with widening/narrowing or pre/post inc, and so is
currently behind an option.

The llvm masked load intrinsic will accept a "passthru" value, dictating the
values used for the zero masked lanes. In MVE the instructions write 0 to the
zero predicated lanes, so we need to match a passthru that isn't 0 (or undef)
with a select instruction to pull in the correct data after the load.
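
A scalar model of that lowering (a sketch of the semantics, not the MVE
instructions themselves):

```
#include <array>
#include <cassert>

int main() {
  std::array<int, 4> Mem{10, 20, 30, 40}, Passthru{-1, -2, -3, -4};
  std::array<bool, 4> Mask{true, false, true, false};

  std::array<int, 4> Loaded{}, Result{};
  for (int I = 0; I < 4; ++I)
    Loaded[I] = Mask[I] ? Mem[I] : 0; // MVE writes 0 to masked-off lanes
  for (int I = 0; I < 4; ++I)
    Result[I] = Mask[I] ? Loaded[I] : Passthru[I]; // the matched select

  assert((Result == std::array<int, 4>{10, -2, 30, -4}));
}
```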

Differential Revision: https://reviews.llvm.org/D67186

llvm-svn: 371932
2019-09-15 14:14:47 +00:00
Sanjay Patel b6a0faaa0c [SLP] limit vectorization of Constant subclasses (PR33958)
This is a fix for:
https://bugs.llvm.org/show_bug.cgi?id=33958

It seems universally true that we would not want to transform this kind of
sequence on any target, but if that's not correct, then we could view this
as a target-specific cost model problem. We could also white-list ConstantInt,
ConstantFP, etc. rather than blacklist Global and ConstantExpr.

Differential Revision: https://reviews.llvm.org/D67362

llvm-svn: 371931
2019-09-15 13:03:24 +00:00
David Green 06b309d527 [ARM] Simplify and update vmla test. NFC
llvm-svn: 371930
2019-09-15 11:53:05 +00:00
James Molloy a088b95f89 [CodeEmitter] Improve testing for APInt encoding
I missed Artem's comment in D67487 before committing.

Differential Revision: https://reviews.llvm.org/D67487

llvm-svn: 371929
2019-09-15 08:44:40 +00:00
James Molloy 60aadd19cb [CodeEmitter] Support instruction widths > 64 bits
Some VLIW instruction sets are Very Long Indeed. Using uint64_t constricts the Inst encoding to 64 bits (naturally).

This change switches CodeEmitter to a mode that uses APInts when Inst's bitwidth is > 64 bits (NFC for existing targets).

When Inst.BitWidth > 64 the prototype changes to:

  void TargetMCCodeEmitter::getBinaryCodeForInstr(const MCInst &MI,
                                                  SmallVectorImpl<MCFixup> &Fixups,
                                                  APInt &Inst,
                                                  APInt &Scratch,
                                                  const MCSubtargetInfo &STI);

The Inst parameter returns the encoded instruction; the Scratch parameter is used internally for manipulating operands and is exposed so that the underlying storage can be reused between calls to getBinaryCodeForInstr. The goal is to elide any APInt constructions that we can.

Similarly the operand encoding prototype changes to:

  getMachineOpValue(const MCInst &MI, const MCOperand &MO, APInt &op,
                    SmallVectorImpl<MCFixup> &Fixups,
                    const MCSubtargetInfo &STI);

That is, the operand is passed by reference as APInt rather than returned as uint64_t.

To reiterate, this APInt mode is enabled only when Inst.BitWidth > 64, so this change is NFC for existing targets.

llvm-svn: 371928
2019-09-15 08:35:08 +00:00
Simon Pilgrim b743e94cdc [TargetLowering] SimplifyDemandedBits - add EXTRACT_SUBVECTOR support.
Call SimplifyDemandedBits on the source vector.

llvm-svn: 371923
2019-09-14 16:38:26 +00:00
Roman Lebedev 9c5a4a4527 [InstSimplify] simplifyUnsignedRangeCheck(): handle few tautological cases (PR43251)
Summary:
This is split off from D67356; since these cases produce a constant,
there is no real need to keep them in InstCombine.
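
A minimal example of the flavor (not the exact patterns from the patch):
the two unsigned checks cover the whole range, so the result is a constant:

```
define i1 @tautological_check(i8 %x, i8 %y) {
  %c1 = icmp ult i8 %x, %y
  %c2 = icmp uge i8 %x, %y
  %r = or i1 %c1, %c2        ; always true; simplifies to: ret i1 true
  ret i1 %r
}
```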

Alive proofs:
https://rise4fun.com/Alive/u7Fk
https://rise4fun.com/Alive/4lV

https://bugs.llvm.org/show_bug.cgi?id=43251

Reviewers: spatel, nikic, xbolva00

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67498

llvm-svn: 371921
2019-09-14 13:47:27 +00:00
Johannes Doerfert e7c6f97039 [Attributor][Fix] Use right type to replace expressions
Summary: This should be obsolete once the functionality in D66967 is integrated.

Reviewers: uenoku, sstefan1

Subscribers: hiraditya, bollu, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67231

llvm-svn: 371915
2019-09-14 02:57:50 +00:00
Fangrui Song 2f519d7072 [llvm-objcopy] Ignore -B --binary-architecture=
GNU objcopy documents that -B is only useful with architecture-less
input (i.e. "binary" or "ihex"). After D67144, -O defaults to -I, and
-B is essentially a NOP.

* If -O is binary/ihex, GNU objcopy ignores -B.
* If -O is elf*, -B provides the e_machine field in GNU objcopy.

So to convert a blob to an ELF, `-I binary -B i386:x86-64 -O elf64-x86-64` has to be specified.

`-I binary -B i386:x86-64 -O elf64-x86-64` creates an ELF with its
e_machine field set to EM_NONE in GNU objcopy, but a regular x86_64 ELF
in elftoolchain elfcopy. Follow the elftoolchain approach (ignoring -B)
to simplify code. Users who expect their command lines to be portable
should still specify -B.

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D67215

llvm-svn: 371914
2019-09-14 01:36:31 +00:00
Fangrui Song ba53030dd0 [llvm-objcopy] Default --output-target to --input-target when unspecified
Fixes PR42171.

In GNU objcopy, if -O (--output-target) is not specified, the value is
copied from -I (--input-target).

```
objcopy -I binary -B i386:x86-64 a.txt b       # b is copied from a.txt
llvm-objcopy -I binary -B i386:x86-64 a.txt b  # b is an x86-64 object file
```

This patch changes our behavior to match GNU. With this change, we can
delete code related to -B handling (D67215).

Reviewed By: jakehehrlich

Differential Revision: https://reviews.llvm.org/D67144

llvm-svn: 371913
2019-09-14 01:36:16 +00:00
Fangrui Song 8a468031cd [llvm-ar] Uncapitalize error messages and delete full stop
Most GNU binutils don't append full stops in error messages. This
convention has been adopted by a bunch of LLVM binary utilities. Make
llvm-ar follow the convention as well.

Reviewed By: grimar

Differential Revision: https://reviews.llvm.org/D67558

llvm-svn: 371912
2019-09-14 01:18:47 +00:00
Michael Pozulp c45fd0cad4 [llvm-objcopy] Add support for response files in llvm-strip and llvm-objcopy
Summary: Addresses https://bugs.llvm.org/show_bug.cgi?id=42671

Reviewers: jhenderson, espindola, alexshap, rupprecht

Reviewed By: jhenderson

Subscribers: seiya, emaste, arichardson, jakehehrlich, MaskRay, abrachet, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65372

llvm-svn: 371911
2019-09-14 01:14:43 +00:00
Thomas Lively ae530c5c80 [WebAssembly] Narrowing and widening SIMD ops
Summary:
Implements target-specific LLVM intrinsics and clang builtins for
these new SIMD operations, as described at https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md#integer-to-integer-narrowing.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D67425

llvm-svn: 371906
2019-09-13 22:54:41 +00:00
Amara Emerson 02bcc86b08 [GlobalISel] Fix insertion point of new instructions to be after PHIs.
For some reason we sometimes insert new instructions one instruction before
the first non-PHI when legalizing. This can result in having non-PHI
instructions before PHIs, which means that PHI elimination doesn't catch them.

Differential Revision: https://reviews.llvm.org/D67570

llvm-svn: 371901
2019-09-13 21:49:24 +00:00
Jessica Paquette 727328ab63 [AArch64][GlobalISel] Tail call memory intrinsics
Because memory intrinsics are handled differently than other calls, we need to
check them for tail call eligibility in the legalizer. This allows us to still
inline them when it's beneficial to do so, but also tail call when possible.

This adds simple tail calling support for when the intrinsic is followed by a
return.
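
For example (a sketch; the real coverage is in tailcall-mem-intrinsics.ll),
a memset whose lowered libcall can now be tail called:

```
declare void @llvm.memset.p0i8.i64(i8*, i8, i64, i1)

define void @tail_memset(i8* %p, i8 %c, i64 %n) {
  ; the intrinsic is immediately followed by a return, so the memset
  ; libcall emitted by the legalizer can be a tail call
  call void @llvm.memset.p0i8.i64(i8* %p, i8 %c, i64 %n, i1 false)
  ret void
}
```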

It ports the attribute checks from `TargetLowering::isInTailCallPosition` into
a similarly-named function in LegalizerHelper.cpp. The target-specific
`isUsedByReturnOnly` hook is not ported here.

Update tailcall-mem-intrinsics.ll to show that GlobalISel can now tail call
memory intrinsics.

Update legalize-memcpy-et-al.mir to have a case where we don't tail call.

Differential Revision: https://reviews.llvm.org/D67566

llvm-svn: 371893
2019-09-13 20:25:58 +00:00
Sanjay Patel 4ba6717c7e [SLP] add test for vectorization of constant expressions; NFC
Goes with D67362.

llvm-svn: 371879
2019-09-13 18:33:02 +00:00
Roman Lebedev 4cb267f9f5 [NFC][InstSimplify] Add some more tests for D67498/D67502
llvm-svn: 371877
2019-09-13 17:58:24 +00:00
Alexander Timofeev 9ff70132bf Revert for: [AMDGPU]: PHI Elimination hooks added for custom COPY insertion.
llvm-svn: 371873
2019-09-13 17:37:30 +00:00
Jessica Paquette 14bfb56b1a [AArch64][GlobalISel] Add support for sibcalling callees with varargs
This adds support for tail calling callees with varargs, equivalent to how it
is done in AArch64ISelLowering.

This only works for sibling calls, and does not add the necessary support for
musttail with varargs. (See r345641 for equivalent ISelLowering support.) This
should be implemented when we stop falling back on musttail.
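
A sketch of the kind of case this enables (hypothetical, not from the patch):

```
declare void @variadic(i32, ...)

define void @caller(i32 %a) {
  ; a sibling call to a varargs callee can now be tail called
  tail call void (i32, ...) @variadic(i32 %a, i32 1)
  ret void
}
```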

Update call-translator-tail-call.ll to show that we can now tail call varargs.

Differential Revision: https://reviews.llvm.org/D67518

llvm-svn: 371868
2019-09-13 16:10:19 +00:00
George Rimar 8501102727 [yaml2obj/ObjectYAML] - Cleanup the error reporting API, add custom errors handlers.
This is a continuation of the YAML library error reporting
refactoring/improvement; the idea itself was mentioned
in the following thread:
https://reviews.llvm.org/D67182?id=218714#inline-603404

This performs a cleanup of all object emitters in the library.
It allows using a custom error handler provided by the caller.

One of the nice things is that each tool can now print its tool name,
e.g. "yaml2obj: error: <text>".

Also, the code became a bit simpler.

Differential revision: https://reviews.llvm.org/D67445

llvm-svn: 371865
2019-09-13 16:00:16 +00:00
Jinsong Ji 455a0db01a [PowerPC][NFC] Move codegen tests to PowerPC from MIR/PowerPC
All tests with -run-pass != none should not be in MIR/; see MIR/README:

```
Tests for codegen passes should NOT be here but in test/CodeGen/sometarget.
As a rule of thumb this directory should only contain tests using
'llc -run-pass none'.
```

llvm-svn: 371857
2019-09-13 14:18:36 +00:00
Sjoerd Meijer b55456aaa0 [AArch64] More @llvm.fma.f16 tests
Follow-up to rL371321, which added FMA FP16 patterns. This adds more tests
for @llvm.fma.f16. These probably show that we miss one fmsub optimisation
opportunity, which I will look into.

llvm-svn: 371833
2019-09-13 09:44:13 +00:00
Sam Tebbs 1572b68509 [ARM] Add support for MVE vmaxv and vminv
This patch adds vecreduce_smax, vecreduce_umax, vecreduce_smin, vecreduce_umin and selection for vmaxv and vminv.

Differential Revision: https://reviews.llvm.org/D66413

llvm-svn: 371827
2019-09-13 09:11:46 +00:00
George Rimar d706908339 [llvm-objdump] Fix llvm-objdump --all-headers output order
Patch by Justice Adams!

Made llvm-objdump --all-headers output match the order of GNU objdump for compatibility reasons.

Old order of the headers output:
* file header
* section header table
* symbol table
* program header table
* dynamic section

New order of the headers output (GNU compatible):
* file header information
* program header table
* dynamic section
* section header table
* symbol table

(Relevant BugZilla Bug: https://bugs.llvm.org/show_bug.cgi?id=41830)

Differential revision: https://reviews.llvm.org/D67357

llvm-svn: 371826
2019-09-13 08:56:28 +00:00
Dmitri Gribenko 8a4595199a Revert "Fix test failures after r371640"
This reverts commit r371645, because r371640 was reverted.

llvm-svn: 371824
2019-09-13 08:26:59 +00:00
Matt Arsenault a4be3eff5c AMDGPU/GlobalISel: Legalize s32->s16 G_SITOFP/G_UITOFP
llvm-svn: 371811
2019-09-13 04:04:55 +00:00
Shiva Chen a49a16ddd0 [RISCV] Support stack offset exceed 32-bit for RV64
Differential Revision: https://reviews.llvm.org/D61884

llvm-svn: 371810
2019-09-13 04:03:32 +00:00
Shiva Chen ea530ba3ed Revert "[RISCV] Support stack offset exceed 32-bit for RV64"
This reverts commit 1c340c62058d4115d21e5fa1ce3a0d094d28c792.

llvm-svn: 371809
2019-09-13 04:03:24 +00:00
Matt Arsenault 67d9349dad AMDGPU/GlobalISel: Fix RegBankSelect for amdgcn.else
llvm-svn: 371808
2019-09-13 03:55:49 +00:00
Matt Arsenault 638f802381 AMDGPU/GlobalISel: Select 16-bit VALU bit ops
llvm-svn: 371807
2019-09-13 03:55:43 +00:00
Shiva Chen eaa230fe3c [RISCV] Support stack offset exceed 32-bit for RV64
Differential Revision: https://reviews.llvm.org/D61884

llvm-svn: 371806
2019-09-13 02:50:13 +00:00
Matt Arsenault f457dd2bd4 AMDGPU/GlobalISel: Legalize G_FFLOOR
llvm-svn: 371803
2019-09-13 01:48:15 +00:00
Tim Shen a31c521f5e Temporarily revert r371640 "LiveIntervals: Split live intervals on multiple dead defs".
It reveals a miscompile on Hexagon. See PR43302 for details.

llvm-svn: 371802
2019-09-13 01:34:25 +00:00
Matt Arsenault 4d33918034 AMDGPU/GlobalISel: Legalize G_FMAD
Unlike SelectionDAG, treat this as a normally legalizable operation.
In SelectionDAG this is supposed to only ever be formed if it's legal,
but I've found that to be too restricting. For AMDGPU this is contextually
legal depending on whether denormal flushing is allowed in the function
that uses it.

Technically we currently treat the denormal mode as a subtarget
feature, so custom lowering could be avoided. However I consider this
to be a defect, and this should be contextually dependent on the
controllable rounding mode of the parent function.

llvm-svn: 371800
2019-09-13 00:44:35 +00:00
Matt Arsenault 4a73c6eada AMDGPU/GlobalISel: Select G_CTPOP
llvm-svn: 371798
2019-09-13 00:11:20 +00:00
Matt Arsenault b85c8c4bbd LiveIntervals: Remove assertion
This testcase is invalid, and caught by the verifier. For the verifier
to catch it, the live interval computation needs to complete. Remove
the assert so the verifier catches this, which is less confusing.

In this testcase there is an undefined use of a subregister, and lanes
which aren't used or defined. An equivalent testcase with the
super-register shrunk to have no untouched lanes already hit this
verifier error.

llvm-svn: 371792
2019-09-12 23:46:51 +00:00
Matt Arsenault 8382ce5f1b AMDGPU: Inline constant when materalizing FI with add on gfx9
This was relying on the SGPR usable for the carry-out clobber to also
be used for the input. There is no carry out on gfx9, so with no
carry-out clobber to worry about, the literal can just be used directly
with a VOP2 add.

llvm-svn: 371791
2019-09-12 23:46:46 +00:00
Philip Reames 4a8916cf1a [Test] Restructure check lines to show differences between modes more clearly
With the landing of the previous patch (in particular D66318) there are a lot fewer diffs now.  I added an experimental O0 line, and updated all the tests to group experimental and non-experimental O0/O3 together.

Skimming the remaining diffs, there are only a few that are obviously incorrect. There's a large number that are questionable, so more to do.

llvm-svn: 371790
2019-09-12 23:22:37 +00:00
Jessica Paquette 0c283cb504 [AArch64][GlobalISel] Support tail calling with swiftself parameters
Swiftself uses a callee-saved register. We can tail call when the register used
in the caller and callee is the same.

This behaviour is equivalent to that in `TargetLowering::parametersInCSRMatch`.

Update call-translator-tail-call.ll to verify that we can do this. When we
support inline assembly, we can write a check similar to the one in the
general swiftself.ll. For now, we need to verify that we get the correct COPY
instruction after call lowering.

Differential Revision: https://reviews.llvm.org/D67511

llvm-svn: 371788
2019-09-12 23:00:59 +00:00
Philip Reames 079e210463 [SDAG] Update generic code to conservatively check for isAtomic in addition to isVolatile
This is the first sweep of generic code to add isAtomic bailouts where appropriate. The intention here is to have the switch from AtomicSDNode to LoadSDNode/StoreSDNode be close to NFC; that is, I'm not looking to allow additional optimizations at this time. That will come later.  See D66309 for context.

Differential Revision: https://reviews.llvm.org/D66318

llvm-svn: 371786
2019-09-12 22:49:17 +00:00
Jessica Paquette a42070a6aa [AArch64][GlobalISel] Support sibling calls with outgoing arguments
This adds support for lowering sibling calls with outgoing arguments.

e.g.:

```
define void @foo(i32 %a)
```
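
A completed sketch of such a call site (hypothetical, for illustration):

```
declare void @foo(i32)

define void @caller(i32 %a) {
  ; %a is an outgoing argument that must fit in the caller's stack argument area
  tail call void @foo(i32 %a)
  ret void
}
```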

Support is ported from AArch64ISelLowering's `isEligibleForTailCallOptimization`.
The only thing that is missing is a full port of
`TargetLowering::parametersInCSRMatch`. So, if we're using swiftself,
we'll never tail call.

- Rename `analyzeCallResult` to `analyzeArgInfo`, since the function is now used
  for both outgoing and incoming arguments
- Teach `OutgoingArgHandler` about tail calls. Tail calls use frame indices for
  stack arguments.
- Teach `lowerFormalArguments` to set the bytes in the caller's stack argument
  area. This is used later to check if the tail call's parameters will fit on
  the caller's stack.
- Add `areCalleeOutgoingArgsTailCallable` to perform the eligibility check on
  the callee's outgoing arguments.

For testing:

- Update call-translator-tail-call to verify that we can now tail call with
  outgoing arguments, use G_FRAME_INDEX for stack arguments, and respect the
  size of the caller's stack
- Remove GISel-specific check lines from speculation-hardening.ll, since GISel
  now tail calls like the other selectors
- Add a GISel test line to tailcall-string-rvo.ll since we can tail call in that
  test now
- Add a GISel test line to tailcall_misched_graph.ll since we tail call there
  now. Add specific check lines for GISel, since the debug output from the
  machine-scheduler differs with GlobalISel. The dependency still holds, but
  the output comes out in a different order.

Differential Revision: https://reviews.llvm.org/D67471

llvm-svn: 371780
2019-09-12 22:10:36 +00:00
Craig Topper 36e04d14e9 [PowerPC] Remove the SPE4RC register class and instead add f32 to the GPRC register class.
Summary:
Since the SPE4RC register class contains an identical set of registers
and an identical spill size to the GPRC class, it's slightly confusing
the tablegen emitter. It's preventing the GPRC_and_GPRC_NOR0 synthesized
register class from inheriting VTs and AltOrders from GPRC or GPRC_NOR0.
This is because SPE4RC is found first in the super register class list
when inheriting these properties and it doesn't set the VTs or
AltOrders the same way as GPRC or GPRC_NOR0.

This patch replaces all uses of SPE4RC with GPRC and allows GPRC and
GPRC_NOR0 to contain f32.

The test changes here are because the AltOrders are now being inherited
by GPRC_NOR0.

Found while trying to determine if getCommonSubClass needs to take
a VT argument. It was originally added to support fp128 on x86-64,
I've changed some things about that so that it might not be needed
anymore. But a PowerPC test crashed without it, and I think it's
due to this subclass issue.

Reviewers: jhibbits, nemanjai, kbarton, hfinkel

Subscribers: wuzish, nemanjai, mehdi_amini, hiraditya, kbarton, MaskRay, dexonsmith, jsji, shchenz, steven.zhang, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67513

llvm-svn: 371779
2019-09-12 22:07:35 +00:00
Philip Reames 0e8d5085ac Remove a duplicate test
Turns out I'd already added exactly the same test under the name non_unit_stride.

llvm-svn: 371777
2019-09-12 21:40:15 +00:00
Philip Reames bdf608477e [SCEV] Add smin support to getRangeRef
We were failing to compute trip counts (both exact and maximum) for any loop which involved a comparison against either an umin or smin. It looks like this simply got missed when we added smin/umin to SCEV.  (Note: umin was submitted separately earlier today.  Turned out two folks hit this at the same time.)
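
A sketch of a loop whose exit count involves an smin (illustrative, not a
test from the patch; SCEV builds the smin from the icmp/select pair):

```
define void @count_to_smin(i32 %n) {
entry:
  %cmp = icmp slt i32 %n, 100
  %smin = select i1 %cmp, i32 %n, i32 100   ; SCEV sees smin(%n, 100)
  br label %loop

loop:
  %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
  %iv.next = add nsw i32 %iv, 1
  %cond = icmp slt i32 %iv.next, %smin      ; trip count depends on the smin
  br i1 %cond, label %loop, label %exit

exit:
  ret void
}
```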

Differential Revision: https://reviews.llvm.org/D67514

llvm-svn: 371776
2019-09-12 21:32:27 +00:00
Craig Topper efe6724b9f [DAGCombiner][X86] Pass the CmpOpVT to reduceSelectOfFPConstantLoads so X86 can exclude fp128 compares.
The X86 decision assumes the compare will produce a result in an XMM
register, but that can't happen for an fp128 compare since those
go to a libcall the returns an i32. Pass the VT so X86 can check
the type.

llvm-svn: 371775
2019-09-12 21:30:18 +00:00
Evandro Menezes 08df6e64d5 [ConstantFolding] Expand folding of some library functions
Expanding the folding of `nearbyint()`, `rint()` and `trunc()` to library
functions, in addition to the current support for intrinsics.
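
For example (illustrative; assumes TargetLibraryInfo recognizes the call),
a constant argument now folds through the library function:

```
declare double @nearbyint(double)

define double @fold_nearbyint() {
  ; folds to 2.0 under the default round-to-nearest-even mode
  %r = call double @nearbyint(double 2.5)
  ret double %r
}
```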

Differential revision: https://reviews.llvm.org/D67468

llvm-svn: 371774
2019-09-12 21:23:22 +00:00
Tim Shen 396d0e1635 Fix llvm-reduce tests so that they don't assume the source code is
writable.

Instead of copying over the original file permissions, just create
a new file and add the executable bit.

llvm-svn: 371772
2019-09-12 21:03:49 +00:00
Florian Hahn 0741810077 [LV] Update test case after r371768.
llvm-svn: 371769
2019-09-12 20:07:17 +00:00
Florian Hahn a31ee37624 [SCEV] Support SCEVUMinExpr in getRangeRef.
This patch adds support for SCEVUMinExpr to getRangeRef,
similar to the support for SCEVUMaxExpr.

Reviewers: sanjoy.google, efriedma, reames, nikic

Reviewed By: sanjoy.google

Differential Revision: https://reviews.llvm.org/D67177

llvm-svn: 371768
2019-09-12 20:03:32 +00:00
David Blaikie 6be90ac788 llvm-reduce: For now, mark these tests as requiring a shell
(since they execute shell scripts; that's the only entry point at the
moment)

llvm-svn: 371764
2019-09-12 19:50:54 +00:00
Philip Reames a3d2737520 Precommit tests for D67514
llvm-svn: 371762
2019-09-12 19:34:27 +00:00
David Blaikie 890f17c256 llvm-reduce: Remove unused plugin support/requirements
llvm-svn: 371755
2019-09-12 18:52:31 +00:00
Alina Sbirlea 18f5204db4 [LICM/AST] Check if the AliasAny set is removed from the tracker.
Summary:
Resolves PR38513.
Credit to @bjope for debugging this.

Reviewers: hfinkel, uabelho, bjope

Subscribers: sanjoy.google, bjope, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67417

llvm-svn: 371752
2019-09-12 18:09:47 +00:00
Sanjay Patel 458c2759b1 [InstCombine] add tests for fptrunc; NFC
llvm-svn: 371750
2019-09-12 18:00:11 +00:00
Alina Sbirlea 6943472d45 [MemorySSA] Pass (for update) MSSAU when hoisting instructions.
Summary: Pass MSSAU to makeLoopInvariant in order to properly update MSSA.

Reviewers: george.burgess.iv

Subscribers: Prazek, sanjoy.google, uabelho, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67470

llvm-svn: 371748
2019-09-12 17:12:51 +00:00
Philip Reames e0cab70718 Precommit tests for generalization of load dereferenceability in loop
llvm-svn: 371747
2019-09-12 17:09:01 +00:00
Sanjay Patel 62ad62fb98 [InstCombine] reduce test noise and regenerate CHECK lines; NFC
llvm-svn: 371746
2019-09-12 17:07:01 +00:00
Philip Reames b90f94f42e [LV] Support invariant addresses in speculation logic
Implement a TODO from rL371452, and handle loop invariant addresses in predicated blocks. If we can prove that the load is safe to speculate into the header, then we can avoid using a masked.load in favour of a normal load.

This is mostly about vectorization robustness. In the common case, it's generally expected that LICM/LoadStorePromotion would have eliminated such loads entirely.
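
A sketch of the shape this handles (hypothetical, not one of the patch's
tests): the address of @g is loop-invariant and dereferenceable, so the
conditional load can be speculated into the header instead of becoming a
masked.load:

```
@g = external global i32

define void @f(i32* %dst, i32* %c, i64 %n) {
entry:
  br label %loop

loop:
  %iv = phi i64 [ 0, %entry ], [ %iv.next, %latch ]
  %cp = getelementptr inbounds i32, i32* %c, i64 %iv
  %cv = load i32, i32* %cp
  %pred = icmp sgt i32 %cv, 0
  br i1 %pred, label %if, label %latch

if:
  %g = load i32, i32* @g       ; loop-invariant address in a predicated block
  %dp = getelementptr inbounds i32, i32* %dst, i64 %iv
  store i32 %g, i32* %dp
  br label %latch

latch:
  %iv.next = add nuw nsw i64 %iv, 1
  %done = icmp eq i64 %iv.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret void
}
```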

Differential Revision: https://reviews.llvm.org/D67372

llvm-svn: 371745
2019-09-12 16:49:10 +00:00
David Green a6e944b173 [CGP] Ensure sinking multiple instructions does not invalidate dominance checks
In MVE, as of rL371218, we are attempting to sink chains of instructions such as:
  %l1 = insertelement <8 x i8> undef, i8 %l0, i32 0
  %broadcast.splat26 = shufflevector <8 x i8> %l1, <8 x i8> undef, <8 x i32> zeroinitializer
In certain situations though, we can end up breaking the dominance relations of
instructions. This happens when we sink the instruction into a loop, but cannot
remove the originals. The Use is updated, which might in fact be a Use from the
second instruction to the first.

This attempts to fix that by reversing the order of instructions that are sunk,
and ensuring that we update the uses on new instructions if they have already
been sunk, not the old ones.

Differential Revision: https://reviews.llvm.org/D67366

llvm-svn: 371743
2019-09-12 16:00:07 +00:00
Roman Lebedev 80a8a85758 [InstCombine][InstSimplify] Move constant-folding tests in result-of-usub-is-non-zero-and-no-overflow.ll
llvm-svn: 371737
2019-09-12 14:12:31 +00:00
Roman Lebedev b3e0937f0a [NFC][InstCombine][InstSimplify] Add test for "add-of-negative is non-zero and no overflow" (PR43259)
https://rise4fun.com/Alive/ska
https://rise4fun.com/Alive/9iX

https://bugs.llvm.org/show_bug.cgi?id=43259

llvm-svn: 371736
2019-09-12 14:12:20 +00:00
Sanjay Patel 3f5a808365 [ConstProp] allow folding for fma that produces NaN
Folding for fma/fmuladd was added here:
rL202914
...and as seen in existing/unchanged tests, that works to propagate NaN
if it's already an input, but we should fold an fma() that creates NaN too.

From IEEE-754-2008 7.2 "Invalid Operation", there are 2 clauses that apply
to fma, so I added tests for those patterns:

  c) fusedMultiplyAdd: fusedMultiplyAdd(0, ∞, c) or fusedMultiplyAdd(∞, 0, c)
     unless c is a quiet NaN; if c is a quiet NaN then it is implementation
     defined whether the invalid operation exception is signaled
  d) addition or subtraction or fusedMultiplyAdd: magnitude subtraction of
     infinities, such as: addition(+∞, −∞)
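
For example, clause c) means a case like this (illustrative) now folds:

```
declare float @llvm.fma.f32(float, float, float)

define float @fma_zero_times_inf() {
  ; 0 * inf is an invalid operation, so this folds to NaN
  %r = call float @llvm.fma.f32(float 0.0, float 0x7FF0000000000000, float 1.0)
  ret float %r
}
```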

Differential Revision: https://reviews.llvm.org/D67446

llvm-svn: 371735
2019-09-12 14:10:50 +00:00
Petar Avramovic ff6ac1eb5f [MIPS GlobalISel] Select indirect branch
Select G_BRINDIRECT for MIPS32.

Differential Revision: https://reviews.llvm.org/D67441

llvm-svn: 371730
2019-09-12 11:44:36 +00:00
Petar Avramovic 646e1f7b7f [MIPS GlobalISel] Lower G_DYN_STACKALLOC
IRTranslator creates a G_DYN_STACKALLOC instruction during expansion of
an alloca when the argument that gives the number of elements to allocate
on the stack is a virtual register. Use default lowering for MIPS32.
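
For reference, a minimal alloca that takes this path (illustrative):

```
define i8* @dyn_alloca(i32 %n) {
  ; variable element count, so IRTranslator emits G_DYN_STACKALLOC
  %p = alloca i8, i32 %n
  ret i8* %p
}
```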

Differential Revision: https://reviews.llvm.org/D67440

llvm-svn: 371728
2019-09-12 11:39:50 +00:00
Petar Avramovic 75e43a607c [MIPS GlobalISel] Select G_IMPLICIT_DEF
G_IMPLICIT_DEF is used for both integer and floating point implicit-def.
Handle G_IMPLICIT_DEF as ambiguous opcode in MipsRegisterBankInfo.
Select G_IMPLICIT_DEF for MIPS32.

Differential Revision: https://reviews.llvm.org/D67439

llvm-svn: 371727
2019-09-12 11:32:38 +00:00
Tim Northover f1c2892912 AArch64: support arm64_32, an ILP32 slice for watchOS.
This is the main CodeGen patch to support the arm64_32 watchOS ABI in LLVM.
FastISel is mostly disabled for now since it would generate incorrect code for
ILP32.

llvm-svn: 371722
2019-09-12 10:22:23 +00:00
Roman Lebedev f1286621eb [InstSimplify] simplifyUnsignedRangeCheck(): handle more cases (PR43251)
Summary:
I don't have a direct motivational case for this,
but it would be good to have this for completeness/symmetry.

This pattern is basically the motivational pattern from
https://bugs.llvm.org/show_bug.cgi?id=43251
but with different predicate that requires that the offset is non-zero.

The completeness bit comes from the fact that a similar pattern (offset != zero)
will be needed for https://bugs.llvm.org/show_bug.cgi?id=43259,
so it'd seem good not to overlook very similar patterns.

Proofs: https://rise4fun.com/Alive/21b

Also, there is something odd with `isKnownNonZero()`: if the non-zero
knowledge was specified as an assumption, it didn't pick it up (PR43267).

Reviewers: spatel, nikic, xbolva00

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67411

llvm-svn: 371718
2019-09-12 09:26:17 +00:00
Kai Luo cfaf2b6cfa [PowerPC][MCP][NFC] Pre-commit test cases for https://reviews.llvm.org/D65267
llvm-svn: 371717
2019-09-12 09:00:44 +00:00
Qiu Chaofan b7fb5d0f6f [DAGCombiner] Improve division estimation of floating points.
The current implementation of estimating divisions loses precision, since it
estimates the reciprocal first and then multiplies. This patch re-orders the
arithmetic operations in the last iteration in DAGCombiner to improve the
accuracy.
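
A sketch of the reordered last iteration (the standard refinement step;
the exact operand ordering in the patch may differ). Given %x ~= 1/%b from
the reciprocal estimate:

```
declare float @llvm.fma.f32(float, float, float)

define float @refined_quotient(float %a, float %b, float %x) {
  %q = fmul float %a, %x                                        ; initial quotient
  %nq = fneg float %q
  %e = call float @llvm.fma.f32(float %nq, float %b, float %a)  ; e = a - q*b
  %r = call float @llvm.fma.f32(float %e, float %x, float %q)   ; q + e*x
  ret float %r
}
```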

Reviewed By: Sanjay Patel, Jinsong Ji

Differential Revision: https://reviews.llvm.org/D66050

llvm-svn: 371713
2019-09-12 07:51:24 +00:00
David Blaikie 5adb3d2ac0 Reapply "llvm-reduce: Add pass to reduce parameters"
Fixing a couple of asan-identified bugs
* use of an invalid "Use" iterator after the element was removed
* use of StringRef to Function name after the Function was erased

This reapplies r371567, which was reverted in r371580.

llvm-svn: 371700
2019-09-12 01:20:48 +00:00
David Blaikie aaef97a55e PR43278: llvm-reduce: Use temporary file names (and ToolOutputFile) rather than unique ones - to ensure they're cleaned up
This modifies the tool somewhat to only create files when about to run
the "interestingness" test, and delete them immediately after - this
means some more files will be created sometimes (when "double checking"
work - which should probably be fixed/avoided anyway).

This now creates temporary files, rather than only unique ones, and also
uses ToolOutputFile (without ever calling "keep") to ensure the files
are deleted as soon as the interestingness test is run.

llvm-svn: 371696
2019-09-12 00:31:57 +00:00
Craig Topper 635d383fad [X86] Enable -mprefer-vector-width=256 by default for Skylake-avx512 and later Intel CPUs.
AVX512 instructions can cause a frequency drop on these CPUs. This
can negate the performance gains from using wider vectors. Enabling
prefer-vector-width=256 will prevent generation of zmm registers
unless explicit 512 bit operations are used in the original source
code.
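
The per-function equivalent uses the existing "prefer-vector-width"
attribute (the trivial function here is just for illustration):

```
define void @f() #0 {
  ret void
}

attributes #0 = { "prefer-vector-width"="256" }
```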

I believe gcc and icc both do something similar to this by default.

Differential Revision: https://reviews.llvm.org/D67259

llvm-svn: 371694
2019-09-11 23:54:36 +00:00
Amara Emerson 55d86f04c7 [AArch64][GlobalISel] Fall back on attempts to allocate split types on the stack.
First we were asserting that the ValNo of a VA was the wrong value. It doesn't actually
make a difference for us in CallLowering but fix that anyway to silence the assert.

The bigger issue was that after fixing the assert we were generating invalid MIR
because the merging/unmerging of values split across multiple registers wasn't
also implemented for memory locs. This happens when we run out of registers and
have to pass the split types like i128 -> i64 x 2 on the stack. This is do-able, but
for now just fall back.

llvm-svn: 371693
2019-09-11 23:53:23 +00:00
Jessica Paquette e297ad1bd9 [GlobalISel][AArch64] Check caller for swifterror params in tailcall eligibility
Before, we only checked the callee for swifterror. However, we should also be
checking the caller to see if it has a swifterror parameter.

Since we don't currently handle outgoing arguments, this didn't show up in the
swifterror.ll testcase.

Also, remove the swifterror checks from call-translator-tail-call.ll, since
they are covered by the existing swifterror testing. Better to have it all in
one place.

Differential Revision: https://reviews.llvm.org/D67465

llvm-svn: 371692
2019-09-11 23:44:16 +00:00
David Blaikie d79cc14822 PR43278: Temporarily disable llvm-reduce tests due to exhausting temp files
llvm-svn: 371679
2019-09-11 22:15:16 +00:00
Reid Kleckner ff45955fc8 [X86] Fix latent bugs in 32-bit CMPXCHG8B inserter
I found three issues:
1. the loop over E[ABCD]X copies runs over the BB start
2. the direct address of cmpxchg8b could be a frame index
3. the displacement of cmpxchg8b could be a global instead of an
   immediate

These were all introduced together in r287875, and should be fixed with
this change.

Issue reported by Zachary Turner.

llvm-svn: 371678
2019-09-11 21:56:17 +00:00
Cyndy Ishida bc40836a43 Revert [llvm-nm] Add tapi file support
This reverts r371576 (git commit f88f46358d)

llvm-svn: 371676
2019-09-11 21:35:28 +00:00
Cyndy Ishida aeeb9e3895 Revert [Object][TextAPI] NFC, fix tapi lit tests
This reverts r371577 (git commit b2b0ccab2f)

llvm-svn: 371674
2019-09-11 21:32:55 +00:00
Craig Topper 5278b0a04e [X86] Add test case for v16i64->v16i32 truncate on min-legal-vector-width=256.
I think this case would crash before I added back the -x86-experimental-vector-widening command line option. Adding this test case to prevent breaking it again when we remove the option.

llvm-svn: 371673
2019-09-11 21:30:42 +00:00
Craig Topper 08474ca091 [X86] Move x86_64 fp128 conversion to libcalls from type legalization to DAG legalization
fp128 is considered a legal type for a register, but has almost no legal operations so everything needs to be converted to a libcall. Previously this was implemented by tricking type legalization into softening the operations with various checks for "is legal in hardware register" to change the behavior to still use f128 as the resulting type instead of converting to i128.

This patch abandons this approach and instead moves the libcall conversions to LegalizeDAG. This is the approach taken by AArch64 where they also have a legal fp128 type, but no legal operations. I think this is more in spirit with how SelectionDAG's phases are supposed to work.
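
For instance (a sketch), a plain fp128 add is now lowered to a libcall such
as __addtf3 during LegalizeDAG:

```
define fp128 @add_f128(fp128 %a, fp128 %b) {
  ; no x86 hardware support for fp128 arithmetic, so this becomes a libcall
  %r = fadd fp128 %a, %b
  ret fp128 %r
}
```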

I had to make some hacks for STRICT_FP_ROUND because some of the strict FP handling checks if ISD::FP_ROUND is Legal for a given result type, but I had to make ISD::FP_ROUND Custom to allow making a libcall when the input is f128. For all other types the Custom handler just returns the original node. These hacks are incomplete and don't work for a strict truncate from f128, but I don't think it worked before either since LegalizeFloatTypes doesn't know about strict ops yet. I've also raised PR43209 against AArch64 which currently crashes on a strict ftrunc from f64->f32 because of FP_ROUND being marked Custom for the same reason there.

Differential Revision: https://reviews.llvm.org/D67128

llvm-svn: 371672
2019-09-11 21:30:09 +00:00
Austin Kerbow 666af6714c AMDGPU: Move m0 initializations earlier
Summary:
After hoisting and merging m0 initializations, schedule them as early as
possible in the MBB. This helps the scheduler avoid hazards in some
cases.

Reviewers: rampitec, arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, arphaman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67450

llvm-svn: 371671
2019-09-11 21:28:41 +00:00
Vedant Kumar 0b91333d59 [DWARF] Emit call site parameter info when tuning for lldb
Emit debug entry values using standard DWARF5 opcodes when the debugger
tuning is set to lldb.

Differential Revision: https://reviews.llvm.org/D67410

llvm-svn: 371666
2019-09-11 21:23:39 +00:00
Michael Liao 7957d4c015 [AMDGPU] Fix crash in phi-elimination hook.
Summary: - Pre-check in case there's just a single PHI insn.

Reviewers: alex-t, rampitec, arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, dstuttard, tpr, t-tye, hiraditya, llvm-commits, yaxunl

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67451

llvm-svn: 371649
2019-09-11 19:55:20 +00:00
Matt Arsenault f5c3bb60b3 Fix test failures after r371640
r371640 evidently fixed bug 39481

llvm-svn: 371645
2019-09-11 18:55:20 +00:00