Commit Graph

72467 Commits

Author SHA1 Message Date
David Green 8532b2ee89 [ARM] MVE VCVT lowering for f16->f32 extends
This adds code to lower f16 to f32 fp_exts using MVE VCVT
instructions, similar to the recent patch for fp_trunc. Again it
goes through the lowering of a BUILD_VECTOR, but is slightly simpler,
only having to deal with interleaved indices. It adds a VCVTL node to
lower to, similar to VCVTN.
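As a hedged illustration (hypothetical IR, not the committed tests), the shape being lowered takes the interleaved bottom lanes of a v8f16 input and extends them to f32:

```
; Sketch only: pull the even (interleaved) lanes out of the input and
; extend them to f32, the shape a single VCVTL-style operation can cover.
define arm_aapcs_vfpcc <4 x float> @fpext_bottom_lanes(<8 x half> %src) {
entry:
  %lanes = shufflevector <8 x half> %src, <8 x half> undef, <4 x i32> <i32 0, i32 2, i32 4, i32 6>
  %ext = fpext <4 x half> %lanes to <4 x float>
  ret <4 x float> %ext
}
```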

Differential Revision: https://reviews.llvm.org/D81339
2020-06-25 20:54:26 +01:00
Craig Topper 6673d69226 [X86] Don't imply -mprfchw when -m3dnow is specified. Enable prefetchw in the backend with 3dnow feature.
The PREFETCHW instruction was originally part of the 3DNow instruction
set, but it was given its own CPUID bit on later CPUs, just before
3DNow was deprecated.

We were setting the -mprfchw flag if -m3dnow was passed or the CPU
supported 3dnow, unless -mno-prfchw was passed. But -march=native
on a CPU without the PRFCHW CPUID bit set will pass -mno-prfchw,
so -march=k8 will behave differently from -march=native on a K8,
for example.

So remove this implicit setting from the frontend and instead
enable the backend to use PREFETCHW if either 3dnow or prfchw is
enabled.

Also enable the PRFCHW flag on amdfam10/barcelona, which seems to be
where this CPUID bit was introduced; that CPU also supported 3dnow.
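For illustration only (this IR is an assumption, not part of the patch): a write prefetch through the llvm.prefetch intrinsic, which the backend can now select to PREFETCHW when either feature is on:

```
; rw=1 requests a write prefetch (locality 3, data cache). With either
; +3dnow or +prfchw the x86 backend may now emit PREFETCHW for this.
declare void @llvm.prefetch.p0i8(i8* nocapture readonly, i32 immarg, i32 immarg, i32 immarg)

define void @warm_for_write(i8* %p) {
  call void @llvm.prefetch.p0i8(i8* %p, i32 1, i32 3, i32 1)
  ret void
}
```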
2020-06-25 12:46:52 -07:00
Craig Topper 01c18f9199 Revert "[X86] Don't imply -mprfchw when -m3dnow is specified. Enable prefetchw in the backend with 3dnow feature."
This is failing on the bots.

This reverts commit 636d31a5c3.
2020-06-25 11:43:02 -07:00
David Green 0bfb4c2506 [ARM] Add FP_ROUND handling to splitting MVE stores
This splits MVE vector stores of an fp_trunc in the same way that we do
for standard truncs. It extends PerformSplittingToNarrowingStores to
handle fp_round, splitting the store into pieces and adding a VCVTNb to
perform the actual fp_round. The actual store is then converted to an
integer store so that it can truncate the bottom lanes of the result.
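A minimal sketch of the kind of store being split (hypothetical IR, assuming a v8f32-to-v8f16 round feeding a store):

```
; The fptrunc feeding the store is split into halves, each narrowed with
; a bottom-lane VCVTN, and the final store is done as an integer store.
define arm_aapcs_vfpcc void @store_rounded(<8 x float> %v, <8 x half>* %p) {
entry:
  %t = fptrunc <8 x float> %v to <8 x half>
  store <8 x half> %t, <8 x half>* %p, align 2
  ret void
}
```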

Differential Revision: https://reviews.llvm.org/D81141
2020-06-25 19:37:15 +01:00
Craig Topper 636d31a5c3 [X86] Don't imply -mprfchw when -m3dnow is specified. Enable prefetchw in the backend with 3dnow feature.
The PREFETCHW instruction was originally part of the 3DNow instruction
set, but it was given its own CPUID bit on later CPUs, just before
3DNow was deprecated.

We were setting the -mprfchw flag if -m3dnow was passed or the CPU
supported 3dnow, unless -mno-prfchw was passed. But -march=native
on a CPU without the PRFCHW CPUID bit set will pass -mno-prfchw,
so -march=k8 will behave differently from -march=native on a K8,
for example.

So remove this implicit setting from the frontend and instead
enable the backend to use PREFETCHW if either 3dnow or prfchw is
enabled.

Also enable the PRFCHW flag on amdfam10/barcelona, which seems to be
where this CPUID bit was introduced; that CPU also supported 3dnow.
2020-06-25 11:25:35 -07:00
Hiroshi Yamauchi 9878996c70 Revert "[PGO] Extend the value profile buckets for mem op sizes."
This reverts commit 63a89693f0.

Due to a build failure like http://lab.llvm.org:8011/builders/sanitizer-windows/builds/65386/steps/annotate/logs/stdio
2020-06-25 11:13:49 -07:00
Kirill Naumov d48c7859fb [InlineCost] GetElementPtr with constant operands
If the GEP instruction contains only constants as its arguments,
then it should be recognized as a constant. For now, a flag has
also been added to turn off this simplification in case it causes
any regressions ("disable-gep-const-evaluation"); it is off by
default. Once I have gathered the needed data on the effectiveness
of this simplification, the flag will be deleted.
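An invented example (names are hypothetical) of a GEP the analysis can now evaluate:

```
@table = internal constant [4 x i32] [i32 1, i32 2, i32 3, i32 4]

define i32* @third_entry() {
  ; Every operand is constant, so the whole GEP folds to a constant
  ; address and is free for inline-cost purposes after this change.
  %p = getelementptr inbounds [4 x i32], [4 x i32]* @table, i64 0, i64 2
  ret i32* %p
}
```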

Reviewers: apilipenko, davidxl, mtrofin

Reviewed By: mtrofin

Differential Revision: https://reviews.llvm.org/D81026
2020-06-25 18:09:51 +00:00
Hiroshi Yamauchi 63a89693f0 [PGO] Extend the value profile buckets for mem op sizes.
Extend the memop value profile buckets to be more flexible (could accommodate a
mix of individual values and ranges) and to cover more value ranges (from 11 to
22 buckets).

This is disabled behind a flag (to be enabled separately); the existing
code will be removed later.

Differential Revision: https://reviews.llvm.org/D81682
2020-06-25 10:22:56 -07:00
Zequan Wu 79d7e9c7d0 [llvm-readobj][COFF] add .llvm.call-graph-profile section dump
Summary: Dump the contents of the `.llvm.call-graph-profile` section of COFF files in the same format as for ELF.

Reviewers: jhenderson, MaskRay, hans

Reviewed By: jhenderson

Subscribers: grimar, rupprecht, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D81894
2020-06-25 09:52:49 -07:00
Francesco Petrogalli 7200fa38a9 [sve][acle] Add some C intrinsics for brain float types.
Summary:
The following intrinsics have been added:

svuint16_t svcnt[_bf16]_m(svuint16_t inactive, svbool_t pg, svbfloat16_t op)
svuint16_t svcnt[_bf16]_x(svbool_t pg, svbfloat16_t op)
svuint16_t svcnt[_bf16]_z(svbool_t pg, svbfloat16_t op)

svbfloat16_t svtbl[_bf16](svbfloat16_t data, svuint16_t indices)

svbfloat16_t svtbl2[_bf16](svbfloat16x2_t data, svuint16_t indices)

svbfloat16_t svtbx[_bf16](svbfloat16_t fallback, svbfloat16_t data, svuint16_t indices)

Reviewers: c-rhodes, kmclaughlin, efriedma, sdesmalen, ctetreau

Subscribers: tschuett, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D82429
2020-06-25 16:31:01 +00:00
Sanjay Patel c9e8c9e3ea [InstCombine] fold fmul/fdiv with fabs operands
fabs(X) * fabs(Y) --> fabs(X * Y)
fabs(X) / fabs(Y) --> fabs(X / Y)

If both operands of fmul/fdiv are positive, then the result must be positive.

There's a NAN corner-case that prevents removing the more specific fold just
above this one:
fabs(X) * fabs(X) -> X * X
That fold works even with NAN because the sign-bit result of the multiply is
not specified if X is NAN.

We can't remove that and use the more general fold that is proposed here
because once we convert to this:
fabs (X * X)
...it is not legal to simplify the 'fabs' out of that expression when X is NAN.
That's because fabs() guarantees that the sign-bit is always cleared - even
for NAN values.

So this patch has the potential to lose information, but it seems unlikely if
we do the more specific fold ahead of this one.
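A rough before/after sketch (illustrative IR, not the committed tests):

```
declare float @llvm.fabs.f32(float)

define float @mul_of_abs(float %x, float %y) {
  %ax = call float @llvm.fabs.f32(float %x)
  %ay = call float @llvm.fabs.f32(float %y)
  ; After this patch, instcombine can rewrite the fmul below, roughly, as:
  ;   %m = fmul float %x, %y
  ;   %r = call float @llvm.fabs.f32(float %m)
  %r = fmul float %ax, %ay
  ret float %r
}
```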

Differential Revision: https://reviews.llvm.org/D82277
2020-06-25 11:35:38 -04:00
David Green 3cb2190b0b [ARM] MVE VCVT lowering for f32->f16 truncs
This adds code to lower f32 to f16 fp_truncs using a pair of MVE VCVT
instructions. Because v4f16 is not legal, fp_rounds are often split up
fairly early, so this reconstructs the VCVTs from a buildvector of
fp_rounds taken from two vector inputs. Something like:

BUILDVECTOR(FP_ROUND(EXTRACT_ELT(X, 0)),
            FP_ROUND(EXTRACT_ELT(Y, 0)),
            FP_ROUND(EXTRACT_ELT(X, 1)),
            FP_ROUND(EXTRACT_ELT(Y, 1)), ...)

It adds a VCVTN node to handle this, which like VMOVN or VQMOVN lowers
into the top/bottom lanes of an MVE instruction.
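In plain IR the pattern is roughly equivalent to this hypothetical function, which interleaves two v4f32 inputs and truncates the result:

```
; Sketch only: x0,y0,x1,y1,... rounded to f16, matching the BUILDVECTOR
; pattern above; the two VCVTs write the bottom and top lanes.
define arm_aapcs_vfpcc <8 x half> @trunc_interleaved(<4 x float> %x, <4 x float> %y) {
entry:
  %i = shufflevector <4 x float> %x, <4 x float> %y, <8 x i32> <i32 0, i32 4, i32 1, i32 5, i32 2, i32 6, i32 3, i32 7>
  %t = fptrunc <8 x float> %i to <8 x half>
  ret <8 x half> %t
}
```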

Differential Revision: https://reviews.llvm.org/D81139
2020-06-25 15:59:36 +01:00
Victor Campos da852b03b0 [AArch64] Emit warning when disassembling unpredictable LDRAA and LDRAB
Summary:
LDRAA and LDRAB in their writeback variants should softfail when the
same register is used as the result and the base.

This patch adds a custom decoder that catches this case and emits a
warning when it occurs.

Differential Revision: https://reviews.llvm.org/D82541
2020-06-25 15:56:36 +01:00
Thomas Preud'homme 6c67ee0f58 [MC] Fix PR45805: infinite recursion in assembler
Give up folding an expression if the fragment of one of the operands
would require laying out a fragment already being laid out. This
prevents infinite recursion when a fill size expression refers to a
later fragment, since computing the offset of that fragment would
require laying out the fill fragment and thus computing its size
expression.

Reviewed By: echristo

Differential Revision: https://reviews.llvm.org/D79570
2020-06-25 15:42:36 +01:00
Sanjay Patel c336f21af5 [PhaseOrdering] delete test for vectorization; NFC
As requested in D81416, I'm deleting the file that I added with:
rGdf79443
2020-06-25 09:34:11 -04:00
Florian Hahn 4837daf883 [DSE,MSSA] Check if Def is removable only when we try to remove it.
Non-removable MemoryDefs can still eliminate other defs. Update the
isRemovable checks to apply only to candidates for removal.
2020-06-25 14:01:10 +01:00
David Green f14457f5d8 [ARM] Split cast cost tests, and add masked load/store tests. NFC
This file has grown quite large and could do with being split up. This
splits away the load/store + cast tests into a separate file. Some
masked load/store + cast tests have been added too, along with some
extra load/store + fpcast tests.
2020-06-25 13:24:17 +01:00
Georgii Rymar 03b902752e [llvm-readelf] - Report a warning instead of an error when dumping a broken section header.
There is no reason to report an error in `printSectionHeaders()`; we can
report a warning and continue dumping, which is what this patch does.

Differential revision: https://reviews.llvm.org/D82462
2020-06-25 14:38:06 +03:00
Tyker c95ffadb24 [AssumeBundles] Use operand bundles to encode alignment assumptions
Summary:
NOTE: There is a mailing list discussion on this: http://lists.llvm.org/pipermail/llvm-dev/2019-December/137632.html

Complementary to the assumption outliner prototype in D71692, this patch
shows how we could simplify the code emitted for an alignment
assumption. The generated code is smaller, less fragile, and it makes it
easier to recognize the additional use as an "assumption use".

As mentioned in D71692 and on the mailing list, we could adopt this
scheme, and similar schemes for other patterns, without adopting the
assumption outlining.
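A minimal sketch of the new encoding, using the operand-bundle syntax this patch introduces:

```
declare void @llvm.assume(i1)

define void @use_aligned(i8* %p) {
  ; "%p is 32-byte aligned" carried as an operand bundle: a single use
  ; that is easy to recognize and trivially droppable, instead of a
  ; ptrtoint/and/icmp chain feeding the assume.
  call void @llvm.assume(i1 true) [ "align"(i8* %p, i64 32) ]
  ret void
}
```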

Reviewers: hfinkel, xbolva00, lebedev.ri, nikic, rjmccall, spatel, jdoerfert, sstefan1

Reviewed By: jdoerfert

Subscribers: yamauchi, kuter, fhahn, merge_guards_bot, hiraditya, bollu, rkruppe, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71739
2020-06-25 12:59:44 +02:00
Tyker 8938a6c9ed [NFC] update test to make the diff of the following commit clear 2020-06-25 12:59:44 +02:00
Sam Tebbs 187f627a50 [ARM] Allow tail predication on sadd_sat and uadd_sat intrinsics
This patch stops the sadd_sat and uadd_sat intrinsics from blocking tail predication.

Differential revision: https://reviews.llvm.org/D82377
2020-06-25 11:54:29 +01:00
Shawn Landden de9f842c55 [PowerPC] add popcount CodeGen test; NFC 2020-06-25 12:41:33 +04:00
Piotr Sobczak 0045786f14 [AMDGPU] Select s_cselect
Summary:
Add patterns to select s_cselect in the isel.

Handle more cases of implicit SCC accesses in si-fix-sgpr-copies
to allow the new patterns to work.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, asbirlea, kerbowa, llvm-commits

Tags: #llvm

Re-commit D81925 with a bugfix D82370.

Differential Revision: https://reviews.llvm.org/D81925
Differential Revision: https://reviews.llvm.org/D82370
2020-06-25 10:38:23 +02:00
David Sherwood ee26a31e7b [SVE] Make ConstantFoldGetElementPtr work for scalable vectors of indices
This patch fixes a compiler crash that was hit when trying to simplify
the following code:

getelementptr [2 x i64], [2 x i64]* null, i64 0, <vscale x 2 x i64> zeroinitializer

For the case where we have a null pointer value like above, we just
need to ensure we don't assume the indices are always fixed width.
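Wrapped in a function, a reduced reproducer might look like this (a hypothetical reduction; the committed test may differ):

```
; Constant folding used to assume fixed-width indices here and crashed
; on the scalable-vector index.
define <vscale x 2 x i64*> @fold_null_gep() {
  ret <vscale x 2 x i64*> getelementptr ([2 x i64], [2 x i64]* null, i64 0, <vscale x 2 x i64> zeroinitializer)
}
```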

Differential Revision: https://reviews.llvm.org/D82183
2020-06-25 07:28:19 +01:00
Max Kazantsev 4c6548222b [Test] Add more tests for selects & phis 2020-06-25 10:54:07 +07:00
Max Kazantsev 1eeb714787 [InstCombine] Combine select & Phi by same condition
This patch transforms
```
p = phi [x, y]
s = select cond, z, p
```
into
```
s = phi [x, z]
```
if we can prove that the phi node takes its values based on the select's
condition.
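A concrete (hypothetical) instance in full IR:

```
define i32 @fold_select_into_phi(i1 %cond, i32 %x, i32 %z) {
entry:
  br i1 %cond, label %then, label %merge

then:
  br label %merge

merge:
  ; On the entry->merge edge %cond is false, so the select picks the
  ; phi's %x; on the then->merge edge %cond is true, so it picks %z.
  %p = phi i32 [ %x, %entry ], [ 0, %then ]
  %s = select i1 %cond, i32 %z, i32 %p
  ; After the combine: %s = phi i32 [ %x, %entry ], [ %z, %then ]
  ret i32 %s
}
```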

Differential Revision: https://reviews.llvm.org/D82072
Reviewed By: nikic
2020-06-25 10:44:10 +07:00
Craig Topper a5041987ed [X86] Emit a reg-reg copy for fast isel of vector bitcasts.
Previously we just updated a map and moved on. But it is possible that
we cached known-bits information with the vreg that can be used by
another basic block. If the other basic block has a different view
of the VT, these known bits won't make sense.

By emitting a copy we ensure we have different vregs before and
after the bitcast. This prevents the known bits from being used
with the wrong type.

Differential Revision: https://reviews.llvm.org/D82517
2020-06-24 20:15:21 -07:00
Wang, Pengfei b2eb1c5793 [X86] Fix a typo.
Summary: The typo caused the opcode MULX32Hrm to be emitted as MULX32Hrr.

Reviewed by: craig.topper

Differential Revision: https://reviews.llvm.org/D82472
2020-06-25 10:06:27 +08:00
Pengfei Wang bcb75344a5 [X86][NFC] Pre-commit test case for the following patch. 2020-06-24 18:37:01 -07:00
Sid Manning e5911de377 [Hexagon][llvm-objcopy] Add missing check for SHN_HEXAGON_SCOMMON_1
Differential Revision: https://reviews.llvm.org/D82484
2020-06-24 19:56:01 -05:00
Michele Scandale 413a187856 [Inliner] Handle 'no-signed-zeros-fp-math' function attribute.
All other attributes related to floating-point math optimizations are
merged in a conservative way during function inlining. This commit adds
the merge rule for the 'no-signed-zeros-fp-math' attribute.

Differential Revision: https://reviews.llvm.org/D81714
2020-06-24 17:53:59 -07:00
Amara Emerson 090c108d04 Don't inline dynamic allocas that simplify to huge static allocas.
Some sequences of optimizations can generate call sites which may never be
executed during runtime, and through constant propagation result in dynamic
allocas being converted to static allocas with very large allocation amounts.

The inliner tries to move these to the caller's entry block, resulting in the
stack limits being reached or bypassed. Avoid inlining functions when this
would be the result.
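A hedged sketch of the failure mode (function names and the constant are invented):

```
declare void @use(i8*)

; A dynamically sized alloca...
define void @callee(i64 %n) {
  %buf = alloca i8, i64 %n
  call void @use(i8* %buf)
  ret void
}

; ...which constant propagation would turn into a 100 MiB static alloca
; in @caller's entry block if this call site were inlined.
define void @caller() {
  call void @callee(i64 104857600)
  ret void
}
```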

The threshold of 64k currently doesn't get triggered on the test suite with an
-Os LTO build on arm64; care should be taken when changing this in the future
to avoid needlessly pessimising inlining behaviour.

Differential Revision: https://reviews.llvm.org/D81765
2020-06-24 17:39:03 -07:00
Kirill Naumov 7f094f7f9d [InlineCost] PrinterPass prints constants to which instructions are simplified
This patch enables printing of constants, to see which instructions were
constant-folded. Needed for tests and better visual analysis of the
inliner's work.

Reviewers: apilipenko, mtrofin, davidxl, fedor.sergeev

Reviewed By: mtrofin

Differential Revision: https://reviews.llvm.org/D81024
2020-06-24 22:52:31 +00:00
Fangrui Song 546be08837 [llvm-profdata] --hot-func-list: fix some style issues in D81800
Reviewed By: wenlei, hoyFB

Differential Revision: https://reviews.llvm.org/D82500
2020-06-24 15:17:03 -07:00
Scott Linder 4d81aec40c [MIR] Fix CFI_INSTRUCTION escape printing
Summary:
The printer seems to intend not to print the trailing comma, but has a
copy-paste error for the last value in the escape. The parser enforces
having no trailing comma, yet somehow a test was never included to
actually confirm this.

Reviewers: thegameg, arsenm

Reviewed By: thegameg, arsenm

Subscribers: wdng, arsenm, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82478
2020-06-24 18:15:28 -04:00
Roman Lebedev 0c22147027 [NFCI][InstSimplify] Add CHECK-LABEL to new icmp.ll test 2020-06-25 01:10:35 +03:00
Roman Lebedev 8911a35180 [SROA] convertValue(): we can have <N x iK*> to <M x iQ> cast
The provided test case crashes otherwise,
much like the opposite case.
2020-06-25 00:58:54 +03:00
Roman Lebedev 07a23c06dd [SROA] convertValue(): we can have <N x iK> to <M x iQ*> cast
The provided test case crashes otherwise.

If NewTy is already DL.getIntPtrType(NewTy),
CreateBitCast() won't actually create any bitcast,
so we are better off just doing the general thing.
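A rough reduction of this direction of the cast (assuming 64-bit pointers; the committed test may differ):

```
; Promoting %slot forces convertValue() to cast the stored <2 x i64>
; to the loaded <2 x i8*>.
define <2 x i8*> @load_as_pointers(<2 x i64> %v) {
entry:
  %slot = alloca <2 x i64>
  store <2 x i64> %v, <2 x i64>* %slot
  %cast = bitcast <2 x i64>* %slot to <2 x i8*>*
  %r = load <2 x i8*>, <2 x i8*>* %cast
  ret <2 x i8*> %r
}
```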
2020-06-25 00:58:53 +03:00
Roman Lebedev 2b8d706b19 [IR] GetUnderlyingObject(), stripPointerCastsAndOffsets(): don't crash on `bitcast <1 x i8*> to i8*`
I'm not sure how to write standalone tests for each of the two changes here.
If either one of these two fixes is missing, the test will crash.
2020-06-25 00:58:53 +03:00
Roman Lebedev 381054a989 [InstCombine] visitBitCast(): do not crash on weird `bitcast <1 x i8*> to i8*`
Even if we know that the RHS of a bitcast is a pointer, we can't assume
the LHS is, because it might be a single-element vector of pointers.
2020-06-25 00:58:53 +03:00
Yuanfang Chen ebc88811b5 Remove Passes dependency on CodeGen
The dependency was introduced in
5134020ea6. The only functional change
from this removal would be the loss of the new PM interface for the two
codegen passes, which is not necessary since we don't have a codegen
pipeline using the new PM yet. This removal is to break the potential
circular dependency between Passes and CodeGen once codegen begins to
gain new PM support.
2020-06-24 14:52:46 -07:00
Mitch Phillips 10045cbe01 Revert "[BitcodeReader] Fix DelayedShuffle handling for ConstantExpr shuffles."
The patch has a memory leak bug that broke the ASan buildbots. More info
is available at: https://reviews.llvm.org/D80330

This reverts commit b5740105d2.
2020-06-24 14:40:45 -07:00
Stefan Agner b7d41a11cd [ARM] Make cp10 and cp11 usage a warning
The ARM ARM considers p10/p11 valid arguments for MCR/MRC instructions.
MRC instructions with p10 arguments are also used in kernel code that
is shared across different architectures. Turn usage of p10/p11 into
warnings for ARMv7/ARMv8-M.

Reviewers: rengolin, olista01, t.p.northover, efriedma, psmith, simon_tatham

Reviewed By: simon_tatham

Subscribers: hiraditya, danielkiss, jcai19, tpimh, nickdesaulniers, peter.smith, javed.absar, kristof.beyls, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59733
2020-06-24 23:37:54 +02:00
Kirill Naumov 6a5d7d498c [InlineCost] InlineCostAnnotationWriterPass introduced
This class makes it possible to see the inliner's decisions, for better
verification of optimizations and for tests. To use it, pass the flag
-passes='print<inline-cost>'.

This is the second attempt to integrate the patch.
The problem from the first try has been discussed and
fixed in D82205.

Reviewers: apilipenko, mtrofin, davidxl, fedor.sergeev

Reviewed By: mtrofin

Differential revision: https://reviews.llvm.org/D81743
2020-06-24 21:27:07 +00:00
Amy Kwan d82f26cc4b [PowerPC][Power10] Implement Count Leading/Trailing Zeroes Builtins under bit Mask in LLVM/Clang
This patch implements builtins for the following prototypes:

unsigned long long __builtin_cntlzdm (unsigned long long, unsigned long long)
unsigned long long __builtin_cnttzdm (unsigned long long, unsigned long long)
vector unsigned long long vec_cntlzm (vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_cnttzm (vector unsigned long long, vector unsigned long long)

Differential Revision: https://reviews.llvm.org/D80941
2020-06-24 16:03:45 -05:00
Sanjay Patel 26fd3ffa78 [x86][AArch64] add tests for fmul-fma combine; NFC
As discussed in D80801, there's a possible overstep in
what is allowed by the 'contract' fast-math-flag.
2020-06-24 15:56:32 -04:00
weihe 53cf53023c Add --hot-func-list to llvm-profdata show for sample profiles
Summary:
Add the --hot-func-list feature to llvm-profdata show for sample profiles. This feature prints a list of hot functions whose max sample count is above the 99% threshold, along with their total samples, total-samples percentage, max samples, entry samples, and function names.

Test Plan:

Reviewers: wenlei, hoyFB

Reviewed By: wenlei, hoyFB

Subscribers: hoyFB, wenlei, weihe, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82355
2020-06-24 12:49:46 -07:00
Alexander Shaposhnikov 395920a614 [llvm-objcopy] Update help message tests
This diff merges help message tests for llvm-objcopy, llvm-strip and
llvm-install-name-tool.

Patch by Sameer Arora!

Test plan: make check-all

Differential revision: https://reviews.llvm.org/D82012
2020-06-24 12:40:31 -07:00
Florian Hahn 35bb9bfbb0 [SLP] Limit GEP lists based on width of index computation.
D68667 introduced a tighter limit on the number of GEPs to simplify
together. The limit was based on the vector element size of the pointer,
but the pointers themselves are not actually put in vectors.

IIUC we try to vectorize the index computations here, so we should base
the limit on the vector element size of the computation of the index.

This fixes the test regression on AArch64 and also restores the
vectorization for an important pattern in SPEC2006/464.h264ref on
AArch64 (@test_i16_extend). We get a large benefit from doing a single
load up front and then processing the index computations in vectors.

Note that we could probably improve the AArch64 codegen even further if
we did zexts to i32 instead of i64 for the sub operands and then did a
single vector sext on the result of the subtractions. AArch64 provides
dedicated vector instructions to do so. Sketch of proof in Alive:
https://alive2.llvm.org/ce/z/A4xYAB

Reviewers: craig.topper, RKSimon, xbolva00, ABataev, spatel

Reviewed By: ABataev, spatel

Differential Revision: https://reviews.llvm.org/D82418
2020-06-24 19:56:53 +01:00
Joel E. Denny ecb098c6de [FileCheck][NFC] Fix typo in test comment 2020-06-24 14:49:23 -04:00