Commit Graph

357372 Commits

Tyker 3bab88b7ba Prevent IR-gen from emitting consteval declarations
Summary: with this patch, instead of emitting calls to consteval functions, IR-gen will emit a store of the already-computed result.

Reviewers: rsmith

Reviewed By: rsmith

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76420
2020-06-15 10:47:14 +02:00
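As an editorial illustration of the change described above (example names are hypothetical, not from the patch): for a call to a consteval function, IR-gen can now emit the pre-computed constant instead of a call.

```cpp
// Sketch only (C++20): illustrative code, not a test from the patch.
consteval int square(int n) { return n * n; }

int use() {
  // square(6) is already evaluated at compile time; with this patch IR-gen
  // emits the constant 36 directly rather than a call to square().
  return square(6);
}
```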
Kirill Bobyrev 7808bf8431
NFC: Make sure function arguments have the same name in declaration and definition
This code generates Clang-Tidy warnings otherwise.
2020-06-15 10:45:08 +02:00
Alexander Belyaev cd320446f4 [mlir][shape] Lower Shape `ConstSizeOp` to Standard `ConstantOp`.
Differential Revision: https://reviews.llvm.org/D81735
2020-06-15 10:42:05 +02:00
Sam Parker 2596da3174 [CostModel] getCFInstrCost in getUserCost.
Have BasicTTI call the base implementation so that both agree on the
default behaviour, with the default being a cost of '1'. This has
required an X86-specific implementation as it seems to be very
reliant on those instructions being free. Changes are also made to
AMDGPU so that their implementations distinguish between cost kinds,
so that the unrolling isn't affected. PowerPC also has its own
implementation to prevent changes to the reg-usage vectorizer test.

The cost model test changes now reflect that ret instructions are not
generally free.

Differential Revision: https://reviews.llvm.org/D79164
2020-06-15 09:28:46 +01:00
Kirill Bobyrev 2d8f8c4de3
[lldb] Handle all Clang::Type::Builtin enums
Cleanup after https://reviews.llvm.org/D81459
2020-06-15 10:18:59 +02:00
Kristina Bessonova 5a39bf2dc5 [CMake][runtimes] Skip adding 2nd set of the same variables for a generic target
No need to parse and add the same variables twice if runtimes is being
built for a generic target (i.e. w/o multilib).

Reviewed By: phosek

Differential Revision: https://reviews.llvm.org/D81574
2020-06-15 09:59:27 +02:00
Sam Parker 321ebfd175 [NFCI][CostModel] Unify FNeg cost
Enable TTIImpl::getUserCost to handle FNeg so that
getInstructionThroughput can call that instead. This means we can
remove the code in the AMDGPU backend too.

Differential Revision: https://reviews.llvm.org/D81635
2020-06-15 08:33:04 +01:00
Nikita Popov 7cac7e0cfc [IR] Prefer hasFnAttribute() where possible (NFC)
When checking for an enum function attribute, use hasFnAttribute()
rather than hasAttribute() at FunctionIndex, because it is
significantly faster (and more concise to boot).
2020-06-15 09:30:35 +02:00
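A minimal sketch of the preferred form (the attribute kind here is chosen arbitrarily for illustration):

```cpp
#include "llvm/IR/Function.h"

using namespace llvm;

bool isNoUnwind(const Function &F) {
  // Concise and fast: checks the enum attribute on the function directly.
  return F.hasFnAttribute(Attribute::NoUnwind);
  // Roughly the slower form being replaced (going through the attribute list):
  //   F.getAttributes().hasAttribute(AttributeList::FunctionIndex,
  //                                  Attribute::NoUnwind);
}
```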
Sam Parker 51541c068a [CostModel] Unify ExtractElement cost.
Move the cost modelling, with the reduction pattern matching, from
getInstructionThroughput into generic TTIImpl::getUserCost. The
modelling in the AMDGPU backend can now be removed.

Differential Revision: https://reviews.llvm.org/D81643
2020-06-15 08:27:14 +01:00
Max Kazantsev 60da4369a1 [NFC] Bail early simplifying unconditional branches 2020-06-15 13:59:53 +07:00
Fangrui Song 6c7aebbc01 [msan] Fix comment of __msan::Origin::isHeapOrigin 2020-06-14 23:58:49 -07:00
Sam Parker 3e39760f8e Revert "Return "[InstCombine] Simplify compare of Phi with constant inputs against a constant""
This reverts commit 23291b9863.

This caused performance regressions.
2020-06-15 07:46:28 +01:00
Sander de Smalen 98100353d7 [SVE] Ensure proper mangling of ACLE tuple types
The AAPCS specifies that the tuple types such as `svint32x2_t`
should use their `arm_sve.h` names when mangled instead of their
builtin names.

This patch also renames the internal types for the tuples to
be prefixed with `__clang_`, so they are not misinterpreted as
specified internal types like the non-tuple types which *are* defined
in the AAPCS. Using a builtin type for the tuples is purely a
choice of the Clang implementation.

Reviewers: rsandifo-arm, c-rhodes, efriedma, rengolin

Reviewed By: efriedma

Tags: #clang

Differential Revision: https://reviews.llvm.org/D81721
2020-06-15 07:36:12 +01:00
Sander de Smalen 91a4a592ed [SveEmitter] Add SVE tuple types and builtins for svundef.
This patch adds new SVE types to Clang that describe tuples of SVE
vectors. For example, `svint32x2_t` maps to the twice-as-wide
vector `<vscale x 8 x i32>`. Similarly, `svint32x3_t` will map to
`<vscale x 12 x i32>`.

It also adds builtins to return an `undef` vector for a given
SVE type.

Reviewers: c-rhodes, david-arm, ctetreau, efriedma, rengolin

Reviewed By: c-rhodes

Tags: #clang

Differential Revision: https://reviews.llvm.org/D81459
2020-06-15 07:36:01 +01:00
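A small usage sketch, assuming the builtin is exposed from arm_sve.h as `svundef2_s32()` (name inferred from the patch description, so treat it as an assumption); the tuple type `svint32x2_t` lowers to the wide vector `<vscale x 8 x i32>`.

```cpp
// Sketch only; requires an SVE-enabled target (e.g. -march=armv8-a+sve).
#include <arm_sve.h>

svint32x2_t make_undef_pair() {
  // Assumed builtin name from this patch: returns an undef two-vector tuple,
  // modelled in IR as <vscale x 8 x i32>.
  return svundef2_s32();
}
```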
Vitaly Buka ca2dcbd030 [SafeStack,NFC] Make StackColoring read-only
Move the code which removes markers out of StackColoring.
2020-06-14 23:05:43 -07:00
Vitaly Buka c6426e2657 [SafeStack,NFC] Remove unneeded branch 2020-06-14 23:05:43 -07:00
Vitaly Buka 7282da1ea8 [SafeStack,NFC] Fix naming style 2020-06-14 23:05:42 -07:00
Vitaly Buka 2f5e535a84 [SafeStack,NFC] Cleanup LiveRange interface 2020-06-14 23:05:42 -07:00
Vitaly Buka adefa9ca2e [SafeStack,NFC] "const" cleanup 2020-06-14 23:05:42 -07:00
Vitaly Buka fb1e0f324f [SafeStack,NFC] Add BlockLifetimeInfo constructor 2020-06-14 23:05:42 -07:00
Vitaly Buka 645058036a [SafeStack,NFC] Use IntrinsicInst instead of Instruction 2020-06-14 23:05:41 -07:00
Vitaly Buka f8e411656e [SafeStack,NFC] Move ClColoring into SafeStack.cpp
This allows the code to be reused in other components.
2020-06-14 23:05:41 -07:00
Vitaly Buka 05590a9cb8 [SafeStack,NFC] Move unconditional code into constructor
Prepare to move ClColoring from SafeStackCode to SafeStackLayout.
This will allow the code to be reused in other components.
2020-06-14 23:05:41 -07:00
Max Kazantsev 344eaf7827 [Test] Update test with check script, add two more motivating cases 2020-06-15 12:41:46 +07:00
Chen Zheng bd7096b977 [PowerPC] fma chain break to expose more ILP
This patch tries to reassociate two patterns related to FMA to expose
more ILP on PowerPC.

// Pattern 1:
//   A =  FADD X,  Y          (Leaf)
//   B =  FMA  A,  M21,  M22  (Prev)
//   C =  FMA  B,  M31,  M32  (Root)
// -->
//   A =  FMA  X,  M21,  M22
//   B =  FMA  Y,  M31,  M32
//   C =  FADD A,  B

// Pattern 2:
//   A =  FMA  X,  M11,  M12  (Leaf)
//   B =  FMA  A,  M21,  M22  (Prev)
//   C =  FMA  B,  M31,  M32  (Root)
// -->
//   A =  FMUL M11,  M12
//   B =  FMA  X,  M21,  M22
//   D =  FMA  A,  M31,  M32
//   C =  FADD B,  D

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D80175
2020-06-15 00:00:04 -04:00
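Reading `FMA A, M, N` above as `M * N + A` (addend first), Pattern 1 corresponds to breaking one serial chain into two independent FMAs plus a final add. A scalar sketch (my own example, valid only under reassociation flags):

```cpp
#include <cmath>

// Before: a serial dependency chain - each fma waits for the previous result.
double chained(double x, double y, double m21, double m22,
               double m31, double m32) {
  return std::fma(m31, m32, std::fma(m21, m22, x + y));
}

// After: the two fmas have no dependency on each other and can overlap.
double broken(double x, double y, double m21, double m22,
              double m31, double m32) {
  return std::fma(m21, m22, x) + std::fma(m31, m32, y);
}
```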
Wenlei He b559535a3a [NewPM] Avoid redundant CGSCC run for updated SCC
Summary:
When an SCC gets split due to inlining, we have two mechanisms for reprocessing the updated SCC: the first is UR.UpdatedC,
which repeatedly reruns the new, current SCC; the second is a worklist for all newly split SCCs. We can avoid rerunning
the same SCC when it is set to be processed by both mechanisms *back to back*. In pathological cases, such redundant
reruns could cause exponential size growth due to inlining along cycles, even when there's no SCC mutation and hence
convergence is not a problem.

Note that it's OK to have an SCC updated and rerun immediately, and also be in the worklist, if we have actually moved an SCC
to be topologically "below" the current one due to merging. In that case, we will need to revisit the current SCC after
those moved SCCs. For that reason, the redundancy avoidance here only targets back-to-back reruns of the same SCC - the
case described by the now-removed FIXME comment.

Reviewers: chandlerc, wmi

Subscribers: llvm-commits, hoy

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D80589
2020-06-14 19:54:52 -07:00
Kang Zhang 74abe50071 [PowerPC] Add some InstAlias for mtspr/mfspr instructions
Summary:

We have defined MTSPR/MFSPR and MTSPR8/MFSPR8, but we only defined
mtspr/mfspr InstAlias for some MTSPR/MFSPR.
This patch adds the InstAlias definitions for MTSPR8/MFSPR8,
and adds some new mtspr/mfspr InstAlias we may use.

Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D77531
2020-06-15 02:43:13 +00:00
Jez Ng 337fb8c767 [lld-macho] Set REQUIRES: x86 on more tests
Summary: Fixes the build break caused by D81802.
2020-06-14 19:05:12 -07:00
Chen Zheng 163162a0a4 [PowerPC] fix a bug in rlwinm folding with a full mask.
Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D81006
2020-06-14 21:27:01 -04:00
Jez Ng 53c796b948 [lld-macho] Properly handle & validate relocation r_length
Summary:
We should be reading / writing our addends / relocated addresses based on
r_length, and not just based on the type of the relocation. But since only
some r_length values are valid for a given reloc type, I've also added some
validation.

ld64 has code to allow for r_length = 0 in X86_64_RELOC_BRANCH relocs, but I'm
not sure how to create such a relocation...

Reviewed By: smeenai

Differential Revision: https://reviews.llvm.org/D80854
2020-06-14 16:35:23 -07:00
Jez Ng 51c5baacf3 [lld-macho] No need to explicitly specify -arch in tests
Summary: After {D81326} landed, some tests started failing if they did
not have `-arch` specified. I think one of the reasons was that we were
taking a reference to a temporary value that was freed too early. Fixing
that made the error go away on my local Linux machine.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D81802
2020-06-14 16:35:21 -07:00
Simon Pilgrim 3d8149c2a1 [X86][SSE] Fold BITOP(MOVMSK(X),MOVMSK(Y)) -> MOVMSK(BITOP(X,Y))
Reduce XMM->GPR traffic by performing bitops on the vectors, and using a single MOVMSK call.

This requires us to use vectors of the same size and element width, but we can mix fp/int type equivalents with suitable bitcasting.
2020-06-14 21:37:58 +01:00
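In intrinsics terms the fold corresponds to identities like the following (my illustration, not a test from the patch): MOVMSK collects sign bits, so a scalar bitop on two masks equals one MOVMSK of the vector bitop.

```cpp
#include <immintrin.h>

int mask_and_before(__m128 a, __m128 b) {
  // Two XMM->GPR transfers, then a scalar AND.
  return _mm_movemask_ps(a) & _mm_movemask_ps(b);
}

int mask_and_after(__m128 a, __m128 b) {
  // Equivalent: AND the vectors, then a single MOVMSK.
  return _mm_movemask_ps(_mm_and_ps(a, b));
}
```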
Nikita Popov 5184857c62 [IR] Remove unused IndexAttrPair typedef (NFC)
This was part of an older attributes implementation.
2020-06-14 22:27:17 +02:00
Nikita Popov 5f565c0419 [IR] Support efficient AssertingVH/PoisoningVH lookup
Currently, there doesn't seem to be any way to look up a Value*
in a map/set indexed by AssertingVH/PoisoningVH, without creating
a value handle -- which is fairly expensive, because it involves
adding the value handle to the use list and immediately removing
it again. Using find_as(Value *) does not work (and is in fact
worse than just using find(Value *)), because it will end up
creating multiple value handles during the lookup itself.

For AssertingVH, address this by simply using DenseMapInfo<T *>
instead of manually implementing something. The AssertingVH<T>
will now get coerced to T*, rather than the other way around.

For PoisoningVH, add extra overloads of getHashValue() and
isEqual() that accept a T* argument.

This allows using find_as(Value *) to perform efficient lookups
in assertion-enabled builds.

Differential Revision: https://reviews.llvm.org/D81793
2020-06-14 22:03:03 +02:00
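A rough sketch of the lookup pattern this enables (the map contents and function name are invented for illustration):

```cpp
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueHandle.h"

using namespace llvm;

unsigned lookupIndex(const DenseMap<AssertingVH<Value>, unsigned> &Map,
                     Value *V) {
  // find_as(Value *) avoids constructing a temporary value handle (and the
  // use-list traffic that comes with it) just to perform the lookup.
  auto It = Map.find_as(V);
  return It == Map.end() ? 0 : It->second;
}
```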
Florian Hahn 6176f04436 [LAA] Do not set CanDoRT to false for AS that do not need RT checks.
Alternative approach to D80570.

canCheckPtrAtRT already contains checks that figure out for which alias
sets runtime checks are needed. But it currently sets CanDoRT to false
for alias sets for which we cannot do RT checks but also do not need
any.

If we know that we do not need RT checks based on the number of
reads/writes in the alias set, we can skip processing the AS.

This patch also adds an assertion to ensure that DepCands does not
contain more than one write from the alias set.

Reviewers: Ayal, anemet, hfinkel, dmgreen

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D80622
2020-06-14 20:55:59 +01:00
Whitney Tsang 5225cd43e8 [LoopUnroll] Allow loops with multiple exiting blocks where the loop latch
is not necessarily one of them.

Summary: Currently LoopUnrollPass already allows loops with multiple
exiting blocks, but only when the loop latch is one of the
exiting blocks.
When the loop latch is not an exiting block, only a single exiting
block is supported.
When possible, the single loop latch or the single exiting block
terminator is optimized to an unconditional branch in the unrolled loop.

This patch allows loops with multiple exiting blocks even if the loop
latch is not one of them. However, the optimization of the exiting block
terminator to an unconditional branch is not done when there is more
than one exiting block.
Reviewer: dmgreen, Meinersbur, etiotto, fhahn, efriedma, bmahjour
Reviewed By: efriedma
Subscribers: hiraditya, zzheng, llvm-commits
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D81053
2020-06-14 18:44:18 +00:00
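For illustration (my own example, not from the patch), source like the following produces a loop with two exiting blocks while the latch - the unconditional back edge after the accumulation - is not one of them:

```cpp
int sum_until_negative(const int *a, int n) {
  int s = 0;
  for (int i = 0;; ++i) {
    if (i == n)
      break;      // exiting block #1
    if (a[i] < 0)
      break;      // exiting block #2
    s += a[i];    // falls through to the latch, which just branches back
  }
  return s;
}
```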
Matt Arsenault df0c4bfc95 AMDGPU: Add some baseline immediate encoding test changes
Add some encoding checks and add a few new cases.
2020-06-14 13:29:35 -04:00
Matt Arsenault 804397dde6 AMDGPU: Do not bundle inline asm
Fixes bug 46285
2020-06-14 13:24:50 -04:00
Matt Arsenault 82c313ca8f GlobalISel: Add some basic getters to GISelKnownBits 2020-06-14 13:14:18 -04:00
Matt Arsenault fb51d508ee AMDGPU/GlobalISel: Select general case for G_PTRMASK 2020-06-14 13:12:29 -04:00
Matt Arsenault 46579471fd AMDGPU: Fix spill/restore of 192-bit registers
I tried to use an IR inline asm test, but that doesn't work since the
inline asm handling asserts without an MVT to use.
2020-06-14 13:12:01 -04:00
Simon Pilgrim 1c3d7709de [X86][SSE] Add tests for missing BITOP(MOVMSK(X),MOVMSK(Y)) -> MOVMSK(BITOP(X,Y)) fold
This would help reduce XMM->GPR traffic for some reduction cases.
2020-06-14 17:10:03 +01:00
Qiu Chaofan 13edcd696e [PowerPC] Support constrained rounding operations
This patch adds handling of constrained FP intrinsics about round,
truncate and extend for PowerPC target, with necessary tests.

Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D64193
2020-06-14 23:43:31 +08:00
Qiu Chaofan 7315d221a2 [PowerPC] Exploit vnmsubfp instruction
On PowerPC, we have the vnmsubfp Altivec instruction for the fnmsub operation on
the v4f32 type. The default pattern for this instruction never works since we
don't have a legal fneg for v4f32 when VSX is disabled.

Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D80617
2020-06-14 23:19:17 +08:00
Qiu Chaofan f8ef7c99a0 [DAGCombiner] Require ninf for division estimation
The current implementation of division estimation isn't correct for some
cases like 1.0/0.0 (the result is NaN rather than the expected inf).

And this change exposes a potential infinite loop: we use
isConstOrConstSplatFP in combineRepeatedFPDivisors to look up whether the
divisor is some constant. But that doesn't work after legalization on some
platforms. This patch restricts the method to act before LegalDAG.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D80542
2020-06-14 22:58:22 +08:00
Sanjay Patel 098e48a6a1 [PassManager] restore early-cse to vector cleanup
As noted in D80236 - the early-cse pass was included here before:
D75145 / rG71a316883d50
But it got moved outside of the "extra" option there, then it
got dropped while adjusting -vector-combine:
rG6438ea45e053
rG57bb4787d72f

So this is restoring the behavior and adding a test to prevent
accidental changes again. I don't see an equivalent option for
the new pass manager.
2020-06-14 10:04:53 -04:00
Joachim Protze d056d7592a [OpenMP][Tool] Extend reuse of OMPT testing
This patch allows specifying a prefix (default: empty) to be included in the print-out
written by callback.h.
It also adds a CMake target so that other tests can find the header file.

Reviewed by: jdoerfert

Differential Revision: https://reviews.llvm.org/D76008
2020-06-14 15:55:32 +02:00
Joachim Protze add8d90cb3 [OpenMP] support alloc of serialized tasks
Reviewed by: AndreyChurbanov

Differential Revision: https://reviews.llvm.org/D81497
2020-06-14 15:55:32 +02:00
Nikita Popov 862db369f8 [LVI] Fix class indentation (NFC)
This class uses a mix of different indentation levels; normalize it.
2020-06-14 15:42:27 +02:00
Nikita Popov 83e7230e5a [LVI] Cache lookup of experimental.guard intrinsic (NFC)
When LVI is performing assume intersections, it also checks for
llvm.experimental.guard intrinsics. To avoid unnecessary block
scans, it first checks whether this intrinsic is declared in the
module at all. I've noticed that we end up spending quite a lot
of time looking up that function again and again...

Avoid this by only looking it up once when LazyValueInfo is
constructed. This of course assumes that we don't introduce new
guard intrinsics (which is the case for all existing uses of LVI --
and even if it weren't, it would not introduce miscompiles, just
potentially lose optimization power.)

Differential Revision: https://reviews.llvm.org/D81796
2020-06-14 15:32:30 +02:00
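A hedged sketch of the caching idea (the wrapper and member names are invented; the real change lives inside LazyValueInfo): resolve the guard declaration once and reuse it.

```cpp
#include "llvm/IR/Function.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"

using namespace llvm;

struct GuardDeclCache {
  // Looked up once; nullptr when the module never declares llvm.experimental.guard.
  Function *GuardDecl;

  explicit GuardDeclCache(Module &M)
      : GuardDecl(M.getFunction(
            Intrinsic::getName(Intrinsic::experimental_guard))) {}

  bool mayHaveGuards() const { return GuardDecl && !GuardDecl->use_empty(); }
};
```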