The code paths in the absence of TargetMachine, TargetLowering or
TargetRegisterInfo are poorly tested. As rL285987 said, requiring
TargetPassConfig allows us to delete many (untested) checks littered
everywhere.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D73754
Summary:
This is part of the implementation of http://lists.llvm.org/pipermail/llvm-dev/2019-December/137632.html
This patch provides the basis for building an assume that preserves all information from an instruction, and adds support for building an assume that preserves the information from a call.
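As a rough illustration of the direction (a minimal sketch, not the API added by this patch; the bundle tags and helper name are assumptions based on the linked RFC), knowledge about a value can be re-expressed as operand bundles on an llvm.assume call:
```
#include <vector>
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"
using namespace llvm;

// Hypothetical helper: attach "align" and "nonnull" knowledge about Ptr to an
// assume so the facts survive even if the instruction that implied them is
// later removed.
static CallInst *buildAssumeSketch(Module &M, IRBuilder<> &B, Value *Ptr) {
  Function *AssumeFn = Intrinsic::getDeclaration(&M, Intrinsic::assume);
  OperandBundleDef AlignBundle("align",
                               std::vector<Value *>{Ptr, B.getInt64(8)});
  OperandBundleDef NonNullBundle("nonnull", std::vector<Value *>{Ptr});
  return B.CreateCall(AssumeFn, {B.getTrue()}, {AlignBundle, NonNullBundle});
}
```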
Reviewers: jdoerfert
Reviewed By: jdoerfert
Subscribers: mgrang, fhahn, mgorny, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72475
We want to allow splat value transforms to improve PR44588 and related bugs:
https://bugs.llvm.org/show_bug.cgi?id=44588
...but to do that, we need to know if values are splatted from the same,
specific index (lane) rather than splatted from an arbitrary index.
We can improve the undef handling with one-liner follow-ups because the
Constant API optionally allows undefs now.
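For illustration (a minimal sketch with a hypothetical helper name, not the code changed by this patch), "splatted from a specific lane" means every defined mask element selects the same source element:
```
#include <optional>
#include <vector>

// Hypothetical helper: return the single source lane a shuffle mask splats
// from, or nullopt if it is not a single-lane splat. For example, <2,2,2,2>
// splats lane 2, while <0,1,0,1> is not a splat at all.
static std::optional<int> getSplatLane(const std::vector<int> &Mask) {
  int Lane = -1;
  for (int Elt : Mask) {
    if (Elt < 0)            // -1 encodes an undef mask element; it matches any lane.
      continue;
    if (Lane == -1)
      Lane = Elt;
    else if (Elt != Lane)   // Two different concrete lanes: not a single-lane splat.
      return std::nullopt;
  }
  if (Lane == -1)
    return std::nullopt;    // All-undef mask: no specific lane to report.
  return Lane;
}
```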
Differential Revision: https://reviews.llvm.org/D73549
Summary:
This patch makes tablegen generate LLVM attributes in a more generic and simpler way (at least to me).
Changes: make tablegen generate
...
ATTRIBUTE_ENUM(Alignment,align)
ATTRIBUTE_ENUM(AllocSize,allocsize)
...
which can be used to generate most of what was previously used, and more.
Tablegen was also generating attributes from two identical files, leading to identical output, so I removed one of them and made users use the other.
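For context, a hedged sketch of the usual X-macro consumption pattern for entries like the ones above (the enum name and include path are illustrative placeholders, not the actual generated file):
```
// Illustrative consumer of the generated ATTRIBUTE_ENUM(...) entries;
// "GeneratedAttributes.inc" is a placeholder name for the tablegen output.
enum class ExampleAttrKind {
#define ATTRIBUTE_ENUM(ENUM_NAME, DISPLAY_NAME) ENUM_NAME,
#include "GeneratedAttributes.inc"
#undef ATTRIBUTE_ENUM
};

// The same list can drive other definitions too, e.g. a display-name table.
static const char *ExampleAttrNames[] = {
#define ATTRIBUTE_ENUM(ENUM_NAME, DISPLAY_NAME) #DISPLAY_NAME,
#include "GeneratedAttributes.inc"
#undef ATTRIBUTE_ENUM
};
```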
Reviewers: jdoerfert, thakis, aaron.ballman
Reviewed By: aaron.ballman
Subscribers: mgorny, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72455
This reverts commit e34801c8e6 and the follow-up due to multiple
problems.
I've tried to keep the tests and RDA parts where possible, as those
still seem useful.
Summary:
Start filling in some requirements for the shape function descriptions
that will be used to derive shape computations. This requirements part may
later be reworked to be part of the "context" section of the shape dialect.
Without examples this may be a bit too abstract, but I hope not (given
mappings to existing shape functions).
Differential Revision: https://reviews.llvm.org/D73572
The current FirstMI.getDebugLoc() is actually null in almost all cases.
If it isn't, the generated .loc will be considered initial. The .loc
will have the prologue_end flag and terminate the prologue prematurely.
Also use an overload of BuildMI that will not prepend
PATCHABLE_FUNCTION_ENTRY to a MachineInstr bundle.
Summary:
The return type of 'PointerUnion::is' has been 'int' since it was first
added in March 2009, in SVN r67987, or
https://github.com/llvm/llvm-project/commit/a9c6de15fb3.
The only other change to this member function was a clang-format applied
in December 2015, in SVN r256513, or
https://github.com/llvm/llvm-project/commit/548a49aacc0.
However, since the return value is the result of a `==` comparison, an
implicit cast must be made converting the boolean result to an `int`.
Change the return type to `bool` to remove the need for such a cast.
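A minimal illustration of the issue (not the actual PointerUnion code):
```
// A comparison already yields bool; returning it as int forces an implicit
// bool-to-int conversion for no benefit.
struct TagHolder {
  int Tag = 0;
  int isFirstBefore() const { return Tag == 0; }  // bool implicitly widened to int
  bool isFirstAfter() const { return Tag == 0; }  // natural bool return type
};
```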
Test Plan:
I ran llvm-project `check-all` under ASAN; no failures were reported
(other than obviously unrelated tests that were already failing in
ASAN buildbots).
Reviewers: gribozavr, gribozavr2, rsmith, bkramer, dblaikie
Subscribers: dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D73836
Summary:
Virtual registers that are undef have an empty LiveInterval at this
point, which means beginIndex() and endIndex() cannot be used. We
only need those indices to determine the range in which to scan for
other affected NSA instructions, and undef operands cannot contribute
to that range.
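A rough sketch of the guard this implies (illustrative names, not the exact AMDGPU pass code):
```
#include <algorithm>
#include "llvm/CodeGen/LiveIntervals.h"
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Illustrative only: widen a slot-index range by an instruction's register
// operands, skipping undef operands whose LiveInterval is empty here
// (beginIndex()/endIndex() assert on an empty interval).
static void widenScanRange(const MachineInstr &MI, const LiveIntervals &LIS,
                           SlotIndex &MinIdx, SlotIndex &MaxIdx) {
  for (const MachineOperand &MO : MI.operands()) {
    if (!MO.isReg() || MO.isUndef())
      continue;
    const LiveInterval &LI = LIS.getInterval(MO.getReg());
    MinIdx = std::min(MinIdx, LI.beginIndex());
    MaxIdx = std::max(MaxIdx, LI.endIndex());
  }
}
```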
Reviewers: arsenm, rampitec, mareko
Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D73831
We were checking that the original Value * pointers for the compare operands
were null, but that can never happen.
I believe we intended to check for zero registers here instead.
Fixes PR44749.
This is an alternate fix for the issue D73606 was trying to
solve.
The main issue here is that we bailed out of
foldOffsetIntoAddress if Offset is 0. But if we just found a
symbolic displacement and AM.Disp became non-zero
earlier, we still need to validate that AM.Disp together with the
symbolic displacement.
This is my second attempt at committing this after failing
build bots previously. One thing I realized about the previous
attempt is that it's possible that AM.Disp is already non-zero
and the new Offset changes it back to zero. In that case my
previous attempt failed to update AM.Disp to zero. So this patch
removes the early out for 0 and appropriately handles the 0 case
in each check so we still update AM.Disp at the end.
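A sketch of the shape of the fix (simplified, with a hypothetical validation helper; not the exact X86ISelDAGToDAG.cpp code):
```
#include <cstdint>
#include <limits>

// Hypothetical stand-in for the real displacement-validity check
// (e.g. "fits in a signed 32-bit immediate").
static bool isLegalSymbolicDisplacement(int64_t Disp) {
  return Disp >= std::numeric_limits<int32_t>::min() &&
         Disp <= std::numeric_limits<int32_t>::max();
}

// Do not early-out on Offset == 0: validate the combined displacement when a
// symbolic displacement is present, and always write AM.Disp back, even when
// the new value is 0.
static bool foldOffsetSketch(int64_t Offset, int64_t &AMDisp,
                             bool HasSymbolicDisp) {
  int64_t NewDisp = AMDisp + Offset;
  if (HasSymbolicDisp && !isLegalSymbolicDisplacement(NewDisp))
    return true;      // bail out: cannot fold this offset
  AMDisp = NewDisp;   // always update, including when NewDisp == 0
  return false;
}
```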
Summary:
Prototype of a JIT compiler that utilizes ThinLTO summaries to compile modules ahead of time. This is an implementation of the concept I presented in my "ThinLTO Summaries in JIT Compilation" talk at the 2018 Developers' Meeting: http://llvm.org/devmtg/2018-10/talk-abstracts.html#lt8
Upfront the JIT first populates the *combined ThinLTO module index*, which provides fast access to the global call-graph and module paths by function. Next, it loads the main function's module and compiles it. All functions in the module will be emitted with prolog instructions that *fire a discovery flag* once execution reaches them. In parallel, the *discovery thread* is busy-watching the existing flags. Once it detects one has fired, it uses the module index to find all functions that are reachable from it within a given number of calls and submits their defining modules to the compilation pipeline.
While execution continues, more flags are fired and further modules added. Ideally the JIT can be tuned so that, in the majority of cases, the code on the execution path can be compiled ahead of time. In cases where that doesn't work, the JIT has a *definition generator* in place that loads modules if missing functions are reached.
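A conceptual sketch of the discovery-flag mechanism described above (purely illustrative; the real prototype emits the flag update as prolog instructions in JITed code, and the index lookup is elided as a hypothetical call):
```
#include <atomic>
#include <chrono>
#include <cstddef>
#include <thread>

// One flag per emitted function (fixed size here for illustration); the JITed
// prolog sets its function's flag on first entry.
constexpr std::size_t MaxFunctions = 1024;
static std::atomic<bool> DiscoveryFlags[MaxFunctions];

// Discovery thread: poll the flags and, when one fires, consult the combined
// ThinLTO module index for functions reachable from it and submit their
// defining modules for ahead-of-time compilation.
static void discoveryLoop(const std::atomic<bool> &ShouldStop) {
  while (!ShouldStop.load(std::memory_order_relaxed)) {
    for (std::size_t Id = 0; Id < MaxFunctions; ++Id)
      if (DiscoveryFlags[Id].exchange(false, std::memory_order_relaxed)) {
        // submitReachableModules(Id);  // hypothetical index lookup + submit
      }
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
  }
}
```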
Reviewers: lhames, dblaikie, jfb, tejohnson, pree-jackie, AlexDenisov, kavon
Subscribers: mgorny, mehdi_amini, inglorion, hiraditya, steven_wu, dexonsmith, arphaman, jfb, merge_guards_bot, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72486
This is based on this llvm-dev thread http://lists.llvm.org/pipermail/llvm-dev/2019-December/137521.html
The current strategy for f16 is to promote the type to float everywhere except where the specific width is required, like loads, stores, and bitcasts. This results in rounding occurring in odd places instead of immediately after arithmetic operations. This interacts in weird ways with the __fp16 type in clang, which is a storage-only type where arithmetic is always promoted to float. InstCombine can remove some fpext/fptruncs around such arithmetic and turn it into arithmetic on half. This wouldn't be so bad if SelectionDAG were able to put those fpext/fpround back in when it promotes.
It is also not obvious how to make the existing strategy work with strict FP. We need to use STRICT versions of the conversions, which require chain operands. But if the conversions are created for a bitcast, there is no place to get an appropriate chain from.
This patch implements a different strategy where conversions are emitted directly around arithmetic operations, and the value is otherwise passed around as an i16, including in arguments and return values. This can result in more conversions between arithmetic operations, but is closer to matching the IR the frontend generates for __fp16. And it will allow us to use the chain from constrained arithmetic nodes to link the STRICT_FP_TO_FP16/STRICT_FP16_TO_FP that will need to be added. I've set it up so that each target can opt into the new behavior. Converting all the targets myself was more than I was able to handle.
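For example, illustrating the existing clang semantics for __fp16 mentioned above (not code changed by this patch):
```
// With the storage-only __fp16 type, the frontend promotes the arithmetic to
// float and truncates the result back to half, so the IR is roughly
// fptrunc(fadd(fpext a, fpext b)). Under the new strategy the value is
// otherwise carried as i16 and converted only around the arithmetic itself.
__fp16 addHalf(__fp16 a, __fp16 b) {
  return a + b;
}
```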
Differential Revision: https://reviews.llvm.org/D73749
During the review of D73007, Aaron Puchert mentioned that
`warn_for_range_variable_always_copy` shouldn't be part of -Wall, since
some coding styles require `for(const auto &bar : bars)`. This warning
would cause false positives for these users. Based on Aaron's proposal,
the warnings were refactored as follows (examples of both groups follow
this list):
* -Wrange-loop-construct warns about possibly unintended constructor
  calls. This is part of -Wall. It contains
  * warn_for_range_copy: loop variable A of type B creates a copy from
    type C
  * warn_for_range_const_reference_copy: loop variable A is initialized
    with a value of a different type resulting in a copy
* -Wrange-loop-bind-reference warns about misleading use of reference
  types. This is not part of -Wall. It contains
  * warn_for_range_variable_always_copy: loop variable A is always a copy
    because the range of type B does not return a reference
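Hedged examples of code intended to trigger each group (illustrative; the exact diagnostic text may differ between clang versions):
```
#include <string>
#include <vector>

void demo(const std::vector<std::string> &Strings,
          const std::vector<bool> &Flags) {
  // -Wrange-loop-construct: the loop variable copies each std::string element;
  // `const std::string &S` would avoid the copy.
  for (const std::string S : Strings)
    (void)S;

  // -Wrange-loop-bind-reference: std::vector<bool>'s iterator returns a proxy
  // by value, so even `const auto &F` binds to a temporary copy.
  for (const auto &F : Flags)
    (void)F;
}
```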
Differential Revision: https://reviews.llvm.org/D73434
Summary:
From `clang-format` version 3.7.0 and up, there is no way to keep the following format of an Objective-C block:
```
- (void)_aMethod
{
[self.test1 t:self w:self callback:^(typeof(self) self, NSNumber *u, NSNumber *v) {
u = c;
}]
}
```
Regardless of the changes in the `.clang-format` configuration file, all parameters will be lined up so that the colons are on the same column, like the following:
```
- (void)_aMethod
{
[self.test1 t:self
w:self
callback:^(typeof(self) self, NSNumber *u, NSNumber *v) {
u = c;
}]
}
```
Considering that, with Objective-C, the first code style is cleaner and more readable for some people, I've added a config option, `ObjCDontBreakBeforeNestedBlockParam` (boolean), so that when it is enabled, the first code style will be favored.
Reviewed By: MyDeveloperDay
Patch By: ghvg1313
Tags: #clang, #clang-format
Differential Revision: https://reviews.llvm.org/D70926
Seen on GCC 8, in release mode with assertions off: warnings about logger.
Moved all statements referencing logger inside LLVM_DEBUG blocks and
ifdef'ed a few variables that are only used in debug builds.
This is a mechanical fix to get CI green.
This fixes legalization of global stores > 128 bits. It seems work is
needed on how this split actually occurs. For example, we get the
right code for s160, with an s128 and s32 load, but get 5 s32 loads
for <5 x s32>.
This patch adds initial support for a DemandedElts mask to the internal computeKnownBits/ComputeNumSignBits methods, matching the SelectionDAG and GlobalISel equivalents.
So far only a couple of instructions have been set up to handle the DemandedElts mask; the remainder still use the existing 'all elements' default. The plan is to extend support as we gain test coverage.
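For illustration, a hedged sketch of the call shape (not the exact internal signatures):
```
#include "llvm/ADT/APInt.h"
using namespace llvm;

// Describe "only vector lane 0 matters" for a 4-element vector; such a mask is
// what gets threaded down to the computeKnownBits / ComputeNumSignBits
// internals so known bits from the other lanes can be ignored.
APInt demandOnlyLaneZero() {
  return APInt::getOneBitSet(/*numBits=*/4, /*BitNo=*/0);
}
```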
Differential Revision: https://reviews.llvm.org/D73435
The driver errors if -fomit-frame-pointer is used together with -pg:
useFramePointerForTargetByDefault() returns true if -pg is specified,
so (!OmitFP && useFramePointerForTargetByDefault(Args, Triple)) is true,
and we can never get FramePointerKind::None.
LanguageRuntime::GetOverrideExprOptions is specific to clang and was
only overridden in RenderScriptRuntime. LanguageRuntime shouldn't
have any knowledge of clang, so remove it from LanguageRuntime and leave
it only in RenderScriptRuntime.