Summary:
- Consider only viable candidates when checking SameSide candidates.
- Replace erasing with clearing the viable flag, to reduce data
movement/copying.
- Add one diagnostic and revise another, as the new diagnostic messages are
more relevant than the previous one.
Reviewers: tra
Subscribers: cfe-commits, yaxunl
Tags: #clang
Differential Revision: https://reviews.llvm.org/D67730
llvm-svn: 372318
Summary:
After revision 370919, this check incorrectly flags certain cases of implicit
constructors. Specifically, if an argument is annotated with an
argument-comment and the argument expression triggers an implicit constructor,
then the argument comment is associated with argument of the implicit
constructor.
However, this only happens when the constructor has more than one argument.
This revision fixes the check for implicit constructors and adds a regression
test for this case.
Note: r370919 didn't cause this bug, it simply uncovered it by fixing another
bug that was masking the behavior.
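A hypothetical illustration of the scenario (all names are invented): the
comment /*limit=*/ annotates the argument to g(), but the literal also
triggers the implicit multi-parameter Options constructor, and the check must
not associate the comment with that constructor's parameter.

  struct Options {
    Options(int limit, bool verbose = false) : limit(limit), verbose(verbose) {}
    int limit;
    bool verbose;
  };

  void g(Options opts) { (void)opts; }

  void caller() {
    g(/*limit=*/42); // implicit conversion from int to Options
  }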
Reviewers: gribozavr
Subscribers: xazax.hun, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D67744
llvm-svn: 372317
As commented on D67557, we have a lot of recursion depth checks, all using magic numbers.
This patch adds the SelectionDAG::MaxRecursionDepth constant and moves over some general cases to use this explicitly.
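As a minimal sketch of the pattern this enables (the helper below is invented
for illustration, not code from this patch), a depth-limited DAG walk compares
against the shared constant instead of a hard-coded limit:

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Hypothetical helper: stop at the shared recursion limit instead of
  // comparing Depth against a magic number like 6.
  static bool walkOperands(SDValue Op, unsigned Depth = 0) {
    if (Depth >= SelectionDAG::MaxRecursionDepth)
      return false; // limit reached, bail out conservatively
    for (SDValue Operand : Op->op_values())
      if (!walkOperands(Operand, Depth + 1))
        return false;
    return true;
  }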
Differential Revision: https://reviews.llvm.org/D67711
llvm-svn: 372315
This broke the Chromium build, causing it to fail with e.g.
fatal error: error in backend: Cannot select: t362: v4i32 = X86ISD::VSHLI t392, Constant:i8<15>
See llvm-commits thread of r372285 for details.
This also reverts r372286, r372287, r372288, r372289, r372290, r372291,
r372292, r372293, r372296, and r372297, which seemed to depend on the
main commit.
> Encode them directly as an imm argument to G_INTRINSIC*.
>
> Since intrinsics can now define which parameters are required to be
> immediates, avoid using registers for them. Intrinsics could
> potentially want a constant that isn't a legal register type. Also,
> since G_CONSTANT is subject to CSE and legalization, transforms could
> potentially obscure the value (and create extra work for the
> selector). The register bank of a G_CONSTANT is also meaningful, so
> this could throw off future folding and legalization logic for AMDGPU.
>
> This will be much more convenient to work with than needing to call
> getConstantVRegVal and checking if it may have failed for every
> constant intrinsic parameter. AMDGPU has quite a lot of intrinsics with
> immarg operands, many of which need inspection during lowering. Having
> to find the value in a register is going to add a lot of boilerplate
> and waste compile time.
>
> SelectionDAG has always provided TargetConstant for constants which
> should not be legalized or materialized in a register. The distinction
> between Constant and TargetConstant was somewhat fuzzy, and there was
> no automatic way to force usage of TargetConstant for certain
> intrinsic parameters. They were both ultimately ConstantSDNode, and it
> was inconsistently used. It was quite easy to mis-select an
> instruction requiring an immediate. For SelectionDAG, start emitting
> TargetConstant for these arguments, and using timm to match them.
>
> Most of the work here is to clean up target handling of constants. Some
> targets process intrinsics through intermediate custom nodes, which
> need to preserve TargetConstant usage to match the intrinsic
> expectation. Pattern inputs now need to distinguish whether a constant
> is merely compatible with an operand or whether it is mandatory.
>
> The GlobalISelEmitter needs to treat timm as a special case of a leaf
> node, similar to MachineBasicBlock operands. This should also enable
> handling of patterns for some G_* instructions with immediates, like
> G_FENCE or G_EXTRACT.
>
> This does include a workaround for a crash in GlobalISelEmitter when
> ARM tries to use "imm" in an output with a "timm" pattern source.
llvm-svn: 372314
We needn't BFI each lane individually into a predicate register when each lane
is the same. A simple sign extend and a vmsr will do.
Differential Revision: https://reviews.llvm.org/D67653
llvm-svn: 372313
After r372209, the compile command can end up including an argument with
quotes in it, e.g.
-fprofile-instr-use="/foo/bar.profdata"
When invoking the compiler with execute_process, the compiler ends up
getting that argument with quotes and all, and fails to open the file.
This all seems horribly broken, but one way of working around it is to
simply strip the quotes from the string here. If they were there to
protect a path that's got spaces in it, that wasn't going to work
anyway because the string is later split by spaces.
llvm-svn: 372312
Errors that occur when reading an MRI script now include a corresponding
line number.
Differential Revision: https://reviews.llvm.org/D67449
llvm-svn: 372309
Add the ability to specify the max full unroll count for LoopUnrollPass
in the pass options.
Reviewers: fhahn, fedor.sergeev
Reviewed By: fedor.sergeev
Subscribers: hiraditya, zzheng, dmgreen, llvm-commits
Differential Revision: https://reviews.llvm.org/D67701
llvm-svn: 372305
The clang intrinsic __builtin_preserve_access_index() currently
has the signature:
const void * __builtin_preserve_access_index(const void * ptr)
This may cause compiler warnings when:
- the parameter type is "volatile void *" or "const volatile void *", or
- the assign-to type of the intrinsic does not have the "const" qualifier.
Further, this signature does not allow dereferencing the
builtin's result pointer, since it is of type "const void *", which
forces the user to insert an extra type cast.
Let us change the signature to:
PointerT __builtin_preserve_access_index(PointerT ptr)
such that the result and argument types are the same.
With this, directly dereferencing the builtin return value
becomes possible.
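As a small illustration (the struct and field names below are invented, and
this assumes compilation with Clang), the new signature lets the result be
dereferenced directly, with no cast:

  // Hypothetical example: the builtin now preserves the pointee type, so its
  // result can be dereferenced without first casting away "const void *".
  struct pkt {
    int len;
    int flags;
  };

  int get_flags(struct pkt *p) {
    return *__builtin_preserve_access_index(&p->flags);
  }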
Differential Revision: https://reviews.llvm.org/D67734
llvm-svn: 372294
This needs special handling due to some subtargets that have a
nonstandard register layout for f16 vectors.
Also reject some illegal types on other targets.
llvm-svn: 372293
Encode them directly as an imm argument to G_INTRINSIC*.
Since intrinsics can now define which parameters are required to be
immediates, avoid using registers for them. Intrinsics could
potentially want a constant that isn't a legal register type. Also,
since G_CONSTANT is subject to CSE and legalization, transforms could
potentially obscure the value (and create extra work for the
selector). The register bank of a G_CONSTANT is also meaningful, so
this could throw off future folding and legalization logic for AMDGPU.
This will be much more convenient to work with than needing to call
getConstantVRegVal and checking if it may have failed for every
constant intrinsic parameter. AMDGPU has quite a lot of intrinsics with
immarg operands, many of which need inspection during lowering. Having
to find the value in a register is going to add a lot of boilerplate
and waste compile time.
SelectionDAG has always provided TargetConstant for constants which
should not be legalized or materialized in a register. The distinction
between Constant and TargetConstant was somewhat fuzzy, and there was
no automatic way to force usage of TargetConstant for certain
intrinsic parameters. They were both ultimately ConstantSDNode, and it
was inconsistently used. It was quite easy to mis-select an
instruction requiring an immediate. For SelectionDAG, start emitting
TargetConstant for these arguments, and using timm to match them.
Most of the work here is to clean up target handling of constants. Some
targets process intrinsics through intermediate custom nodes, which
need to preserve TargetConstant usage to match the intrinsic
expectation. Pattern inputs now need to distinguish whether a constant
is merely compatible with an operand or whether it is mandatory.
The GlobalISelEmitter needs to treat timm as a special case of a leaf
node, similar to MachineBasicBlock operands. This should also enable
handling of patterns for some G_* instructions with immediates, like
G_FENCE or G_EXTRACT.
This does include a workaround for a crash in GlobalISelEmitter when
ARM tries to use "imm" in an output with a "timm" pattern source.
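As a rough sketch of what this buys a GlobalISel selector (helper names
invented, logic simplified): an immarg operand can now be read straight off
the instruction, whereas previously the value had to be chased through a
G_CONSTANT and the lookup could fail:

  #include "llvm/ADT/Optional.h"
  #include "llvm/CodeGen/GlobalISel/Utils.h"
  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineRegisterInfo.h"
  using namespace llvm;

  // New style: the immarg intrinsic operand is encoded as an immediate.
  static int64_t readImmArg(const MachineInstr &MI, unsigned OpIdx) {
    return MI.getOperand(OpIdx).getImm();
  }

  // Old style: the operand was a vreg defined by G_CONSTANT, so the value
  // had to be looked up and the lookup could fail.
  static Optional<int64_t> readImmArgViaVReg(const MachineInstr &MI,
                                             unsigned OpIdx,
                                             const MachineRegisterInfo &MRI) {
    return getConstantVRegVal(MI.getOperand(OpIdx).getReg(), MRI);
  }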
llvm-svn: 372285
Summary:
This was always the intended behavior, but had not been
implemented. This ordering is important for Emscripten when generating
.mem files while compiling to JS, since only zeros at the end of
initialized memory can be dropped.
Fixes https://github.com/emscripten-core/emscripten/issues/8999
Reviewers: sbc100
Subscribers: dschuff, jgravelle-google, aheejin, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67736
llvm-svn: 372284
Due to usage of uninitialized fields, we end up with a
"Conditional jump or move depends on uninitialised value" warning.
Fixes https://bugs.llvm.org/show_bug.cgi?id=40547
Committed on behalf of Martin Liska <mliska@suse.cz>
llvm-svn: 372281
Make the method MachOUniversalBinary::getObjectForArch return MachOUniversalBinary::ObjectForArch
and add helper methods MachOUniversalBinary::getMachOObjectForArch, MachOUniversalBinary::getArchiveForArch
for those who explicitly expect to get a MachOObjectFile or an Archive.
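A sketch of the intended usage (simplified; the exact signatures are not
spelled out in this message, so treat the return type below as an assumption):

  #include "llvm/Object/MachO.h"
  #include "llvm/Object/MachOUniversal.h"
  #include <memory>
  using namespace llvm;
  using namespace llvm::object;

  // Callers that know they want a Mach-O slice (not an archive) can use the
  // dedicated helper instead of unpacking ObjectForArch themselves.
  static Expected<std::unique_ptr<MachOObjectFile>>
  getMachOSlice(const MachOUniversalBinary &UB, StringRef ArchName) {
    return UB.getMachOObjectForArch(ArchName);
  }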
Differential revision: https://reviews.llvm.org/D67700
Test plan: make check-all
llvm-svn: 372278
update_{llc,mir}_test_checks.py applicability is determined by the
output (assembly or MIR), not the input, which makes
update_llc_test_checks.py the right tool to generate tests that start at
MIR and stop at the final assembly.
This commit adds minimal support for this path. The main limitation that
remains:
- the MIR has to have an LLVM IR section, and the CHECK lines will be inserted
into the LLVM IR functions that correspond to the MIR functions.
Running
../utils/update_llc_test_checks.py --llc-binary ./bin/llc
on a slightly modified ../test/CodeGen/X86/bad-tls-fold.mir
produces the following diff:
+# NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+# RUN: llc %s -o - | FileCheck %s
--- |
target triple = "x86_64-unknown-linux-gnu"
@@ -6,17 +7,31 @@
@i = external thread_local global i32
define i32 @or() {
+ ; CHECK-LABEL: or:
+ ; CHECK: # %bb.0: # %entry
+ ; CHECK-NEXT: movq {{.*}}(%rip), %rax
+ ; CHECK-NEXT: orq $7, %rax
+ ; CHECK-NEXT: movq i@{{.*}}(%rip), %rcx
+ ; CHECK-NEXT: orq %rax, %rcx
+ ; CHECK-NEXT: movl %fs:(%rcx), %eax
+ ; CHECK-NEXT: retq
entry:
ret i32 undef
}
-
define i32 @and() {
+ ; CHECK-LABEL: and:
+ ; CHECK: # %bb.0: # %entry
+ ; CHECK-NEXT: movq {{.*}}(%rip), %rax
+ ; CHECK-NEXT: orq $7, %rax
+ ; CHECK-NEXT: movq i@{{.*}}(%rip), %rcx
+ ; CHECK-NEXT: andq %rax, %rcx
+ ; CHECK-NEXT: movl %fs:(%rcx), %eax
+ ; CHECK-NEXT: retq
entry:
ret i32 undef
}
...
(not applied)
llvm-svn: 372277
Very minor change aiming to make it easier to extend the script
downstream to support non-llc but llc-like tools. The main objective is
to decrease the probability of merge conflicts.
llvm-svn: 372276
Summary:
Large slowdowns were observed in Rust due to many small, constant-sized
copies in conjunction with poorly optimized memory.copy
implementations. Since memory.copy cannot be expected to be inlined
efficiently by engines at this time, stop using it for the smallest
copies. We continue to lower all memcpy intrinsics to memory.copy,
though.
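The mechanism is not spelled out here; purely to illustrate the general idea
(this is not the actual WebAssembly change, and the cutoff is an assumption),
a target can decline bulk-copy lowering for tiny constant sizes and let the
generic inline load/store expansion handle them:

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Illustrative only: keep memory.copy for unknown or larger sizes, but
  // leave tiny constant-size copies to the generic inline expansion.
  static bool shouldEmitBulkMemoryCopy(SDValue Size) {
    const uint64_t AssumedCutoff = 8; // assumption, not from the commit
    auto *CSize = dyn_cast<ConstantSDNode>(Size);
    return !CSize || CSize->getZExtValue() > AssumedCutoff;
  }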
Reviewers: aheejin, alexcrichton
Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, JDevlieghere, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67639
llvm-svn: 372275
Since we now lower most tail calls, it makes sense to support musttail.
Instead of always falling back to SelectionDAG, only fall back when a musttail
call could not be emitted as a tail call. Once we can handle most
incoming and outgoing arguments, we can change this to a `report_fatal_error`
like in ISelLowering.
Remove the assert that we don't have varargs and a musttail, and replace it
with a return false. Implementing this requires that we implement
`saveVarArgRegisters` from AArch64ISelLowering, which is an entirely different
patch.
Add GlobalISel lines to vararg-tallcall.ll to make sure that we produce correct
code. Right now we only fall back, but eventually this will be relevant.
Differential Revision: https://reviews.llvm.org/D67681
llvm-svn: 372273
DIFlagBlockByRefStruct is an unused DIFlag that originally was used by
clang to express (Objective-)C block captures in debug info. For the
last year Clang has been emitting complex DIExpressions to describe
block captures instead, which makes all the code supporting this flag
redundant.
This patch removes the flag and all supporting "dead" code, so we can
reuse the bit for something else in the future.
Since this only affects debug info generated by Clang with the block
extension, this mostly affects Apple platforms, and I don't have any
bitcode compatibility concerns about removing this. The Verifier will
reject debug info that uses the bit and thus degrade gracefully when
LTO'ing older bitcode with a newer compiler.
rdar://problem/44304813
Differential Revision: https://reviews.llvm.org/D67453
llvm-svn: 372272
Summary:
https://bugs.llvm.org/show_bug.cgi?id=43102
In today's edition of "Is this any better now that it isn't crashing?", I'd like to show you a very interesting test case with loop widening.
Looking at the included test case, it's immediately obvious that this is not only a false positive, but also a very bad bug report in general. We can see how the analyzer mistakenly invalidated `b`, instead of its pointee, resulting in it reporting a null pointer dereference error. Not only that, the point at which this change of value is noted is at the loop, rather than at the method call.
It turns out that `FindLastStoreVisitor` works correctly; rather, the supplied ExplodedGraph is faulty, because `BlockEdge` really is the `ProgramPoint` where this happens.
{F9855739}
So it's fair to say that this needs improving on multiple fronts. In any case, at least the crash is gone.
Full ExplodedGraph: {F9855743}
Reviewers: NoQ, xazax.hun, baloghadamsoftware, Charusso, dcoughlin, rnkovacs, TWeaver
Subscribers: JesperAntonsson, uabelho, Ka-Ka, bjope, whisperity, szepet, a.sidorin, mikhail.ramalho, donat.nagy, dkrupp, gamesh411, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D66716
llvm-svn: 372269
Summary:
Add a function to AutoUpgrade that updates old X86 datalayout strings.
This adds "-p270:32:32-p271:32:32-p272:64:64" to X86 datalayouts that are otherwise valid
and don't already contain it.
This also removes the compatibility changes in https://reviews.llvm.org/D66843.
Datalayout change in https://reviews.llvm.org/D64931.
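A minimal sketch of how a consumer might invoke the upgrade (assuming the
hook added here is the UpgradeDataLayoutString declared in
llvm/IR/AutoUpgrade.h; the triple and old string below are made up):

  #include "llvm/IR/AutoUpgrade.h"
  #include <string>
  using namespace llvm;

  std::string upgradeExample() {
    // Old-style x86 datalayout string without the p270/p271/p272 specs.
    std::string OldDL = "e-m:e-i64:64-f80:128-n8:16:32:64-S128";
    // The helper is expected to return the string with
    // "-p270:32:32-p271:32:32-p272:64:64" added for x86 triples.
    return UpgradeDataLayoutString(OldDL, "x86_64-unknown-linux-gnu");
  }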
Reviewers: rnk, echristo
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67631
llvm-svn: 372267