two fixes: one for an error reported by verify-regalloc, and another for the
live range update of a PHI after rematerialization.
r265547:
Replace analyzeSiblingValues with new algorithm to fix its compile
time issue. The patch is to solve PR17409 and its duplicates.
analyzeSiblingValues is an N x N complexity algorithm, where N is
the number of siblings generated by register splitting. Although it
causes a significant compile time issue when N is large, it is also
important for performance since it removes redundant spills and
enables rematerialization.
To solve the compile time issue, the patch removes analyzeSiblingValues
and replaces it with lower cost alternatives containing two parts. The
first part creates a new spill hoisting method in postOptimization of
register allocation. It does spill hoisting all at once after all the spills
are generated, instead of inside every instance of selectOrSplit. The
second part queries the defining expression of the original register for
rematerialization and keeps it always available during register allocation,
even if it is already dead. It deletes those dead instructions only in
postOptimization. With these two parts, the patch can remove
analyzeSiblingValues without sacrificing performance.
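As a toy sketch of the complexity argument only (the types and functions
below are made up, not the register allocator's real interfaces): the old
analysis did pairwise work over all N siblings inside every selectOrSplit
call, while the replacement visits the collected spills once in
postOptimization.

#include <vector>

// Toy stand-in for a sibling virtual register created by live range splitting.
struct SiblingValue { int Id; };

// Old shape: O(N^2) pairwise comparisons, rerun inside every selectOrSplit.
int compareAllSiblingPairs(const std::vector<SiblingValue> &Siblings) {
  int Comparisons = 0;
  for (const SiblingValue &A : Siblings)
    for (const SiblingValue &B : Siblings)
      if (A.Id != B.Id)
        ++Comparisons;
  return Comparisons;
}

// New shape: one pass over the collected spills in postOptimization, where
// hoisting and dead-def deletion happen a single time.
int hoistSpillsOnce(const std::vector<SiblingValue> &Spills) {
  int Visits = 0;
  for (const SiblingValue &S : Spills) {
    (void)S; // hoisting / cleanup decision for this spill goes here
    ++Visits;
  }
  return Visits;
}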
Patches on top of r265547:
r265610 "Fix the compare-clang diff error introduced by r265547."
r265639 "Fix the sanitizer bootstrap error in r265547."
r265657 "InlineSpiller.cpp: Escap \@ in r265547. [-Wdocumentation]"
Differential Revision: http://reviews.llvm.org/D15302
Differential Revision: http://reviews.llvm.org/D18934
Differential Revision: http://reviews.llvm.org/D18935
Differential Revision: http://reviews.llvm.org/D18936
llvm-svn: 266162
We need to ensure that the address of an undefined weak symbol evaluates to
zero. We were getting this right for non-PIC executables (where the symbol
can be evaluated directly) and for DSOs (where we emit a symbolic relocation
for these symbols, as they are preemptible). But we weren't getting it right
for PIEs. Probably the simplest way to ensure that these symbols evaluate
to zero is by not creating a relocation in .got for them.
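A minimal illustration of the requirement (the symbol name is hypothetical):
code like this must observe a null address for the undefined weak symbol
whether it is linked into a non-PIC executable, a PIE, or a DSO.

// hypothetical weak declaration; nothing defines optional_hook at link time
extern "C" void optional_hook() __attribute__((weak));

bool haveHook() {
  // Must be false when the symbol stays undefined; the fix avoids
  // creating a .got relocation for it in a PIE so the address is 0.
  return &optional_hook != nullptr;
}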
Differential Revision: http://reviews.llvm.org/D19044
llvm-svn: 266161
With this patch we use the first scan over the relocations to remember
the information we find about them: whether they will be relaxed, whether
a PLT will be used, etc.
With that the actual relocation application becomes much simpler. That
is particularly true for the interfaces in Target.h.
This unfortunately means that we now do two passes over relocations for
non-SHF_ALLOC sections. I think this can be solved by factoring out the
code that scans a single relocation. It could then be used both as a scan
that records info and to drive a dedicated, direct application of
relocations in non-SHF_ALLOC sections, roughly as sketched below.
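A rough sketch of that factoring, with hypothetical names that are not
taken from the lld source:

#include <cstdint>
#include <vector>

// Hypothetical stand-ins for lld's internal types.
struct Relocation { uint32_t Type; uint64_t Offset; int64_t Addend; };
struct RelocInfo { bool Relaxed = false; bool NeedsPlt = false; };

// The factored-out piece: classify one relocation.
RelocInfo scanOne(const Relocation &R) {
  RelocInfo Info;
  (void)R; // ... decide whether R can be relaxed, needs a PLT/GOT entry, etc.
  return Info;
}

// Used once to record info for SHF_ALLOC sections ...
std::vector<RelocInfo> scanAll(const std::vector<Relocation> &Rels) {
  std::vector<RelocInfo> Infos;
  for (const Relocation &R : Rels)
    Infos.push_back(scanOne(R));
  return Infos;
}
// ... and reusable to drive a single, direct application pass over the
// relocations of non-SHF_ALLOC sections.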
I also think it is possible to reduce the number of enum values by
representing a target with just an OutputSection and an offset (which
can be from the start or end).
This should unblock adding features like relocation optimizations.
llvm-svn: 266158
Treat a _Nonnull ivar that is nil as an invariant violation, in a similar
fashion to how a nil _Nonnull parameter is treated as a precondition violation.
This avoids warning on defensive returns of nil from internal
checks, such as the following common idiom:
@class InternalImplementation;
@interface PublicClass {
  InternalImplementation * _Nonnull _internal;
}
-(id _Nonnull)foo;
@end

@implementation PublicClass
-(id _Nonnull)foo {
  if (!_internal)
    return nil; // no-warning
  return [_internal foo];
}
@end
rdar://problem/24485171
llvm-svn: 266157
Initialization of m0 is emitted for each LDS operation, so
every block with LDS usage ends up with one. MachineLICM
used to fail to hoist this out of the loop, so every loop
iteration with LDS usage in it would re-initialize it.
This seems to be fixed now, so add a test to make sure that
it stays this way.
llvm-svn: 266156
Summary:
It seems like this was broken in r252327. I thought we had test cases
for this, but it's really hard to trigger spills of this exact register
size since they aren't used very much.
Reviewers: arsenm, nhaehnle
Subscribers: nhaehnle, arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D19021
llvm-svn: 266152
This state is no longer useful and not guaranteed to be valid in later
codegen passes. For example, see the added test, which would print a
savepoint of %bb.-1 without this change, and would crash with a
use-after-free error under ASan if you apply the recycling allocator
patch from llvm.org/PR26808.
llvm-svn: 266150
This bug was introduced with:
http://reviews.llvm.org/rL262269
AVX masked loads are specified to set vector lanes to zero when the high bit of the mask
element for that lane is zero:
"If the mask is 0, the corresponding data element is set to zero in the load form of these
instructions, and unmodified in the store form." --Intel manual
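A small example of the documented semantics (the function name is made up;
compile with AVX enabled): lanes whose mask element has the high bit clear
must come back as zero from the load form.

#include <immintrin.h>

__m256 load_two_lanes(const float *p) {
  // High bit set (-1) selects lanes 0 and 2; all other lanes are masked off
  // and, per the quoted rule, returned as 0.0f rather than left undefined.
  __m256i mask = _mm256_setr_epi32(-1, 0, -1, 0, 0, 0, 0, 0);
  return _mm256_maskload_ps(p, mask);
}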
Differential Revision: http://reviews.llvm.org/D19017
llvm-svn: 266148
exiting the for-in loop.
This commit fixes a bug where EmitObjCForCollectionStmt didn't pop
cleanups for captures.
For example, in the following for-in loop, a block which captures self
is passed to foo1:
for (id x in [self foo1:^{ use(self); }]) {
  use(x);
  break;
}
Previously, the code in EmitObjCForCollectionStmt wouldn't pop the
cleanup for the captured self before exiting the loop, which caused
code-gen to generate IR in which objc_release was called twice on
the captured self.
This commit fixes the bug by entering a RunCleanupsScope before the
loop condition is evaluated and forcing its cleanup before exiting the
loop.
rdar://problem/16865751
Differential Revision: http://reviews.llvm.org/D18618
llvm-svn: 266147
When run with the multiprocess test runner, the getchar() trick doesn't work, so ninja check-lldb would fail on this test, but running the test directly worked fine.
Differential Revision: http://reviews.llvm.org/D19035
llvm-svn: 266145
(lldb) b ~Foo
(lldb) b Foo::~Foo
(lldb) b Bar::Foo::~Foo
Improved our C++ breakpoint location tests as well to cover this issue.
<rdar://problem/25577252>
llvm-svn: 266139
Summary:
For correct handling of aliases to nameless functions, we need to be able
to refer to them through a GUID in the summary. Here we name them using a
hash of the non-private global names in the module.
Reviewers: tejohnson
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D18883
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266132
Summary:
Let's keep llvm-as "dumb": it converts textual IR to bitcode. This
commit removes llvm-as's dependency on libLLVMAnalysis.
We'll add summary support back to llvm-as if we get a textual
representation for it at some point. In the meantime, opt seems
like a better place for that.
Reviewers: tejohnson
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19032
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266131
This fixes two use-after-frees in selectLEA64_32Addr. If matchAddress
matches an ADD with an AND as an operand, and that AND hits one of the
"heroic transforms" that folds masks and shifts, we end up with N
pointing to an SDNode that was deleted. Make sure we're done accessing
it before that.
Found by ASan with the recycling allocator changes in llvm.org/PR26808.
llvm-svn: 266130
Summary:
They correspond to BUFFER_LOAD/STORE_DWORD[_X2,X3,X4] and mostly behave like
llvm.amdgcn.buffer.load/store.format. They will be used by Mesa for SSBO and
atomic counters at least when robust buffer access behavior is desired.
(These instructions perform no format conversion and do buffer range checking
per component.)
As a side effect of sharing patterns with llvm.amdgcn.buffer.store.format,
it has become trivial to add support for the f32 and v2f32 variants of that
intrinsic, so the patch does so.
Also DAG-ify (and fix) some tests in which I noticed intermittent failures
while developing this patch.
Some tests were (temporarily) adjusted for the required mayLoad/hasSideEffects
changes to the BUFFER_STORE_DWORD* instructions. See also
http://reviews.llvm.org/D18291.
Reviewers: arsenm, tstellarAMD, mareko
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D18292
llvm-svn: 266126
Summary:
The function import pass was computing all the imports for all the
modules in the index, and only using the imports for the current module.
Change this to instead compute imports only for the given module. This
means that the export lists can't be populated, but they weren't being
used anyway.
Longer term, the linker can collect all the imports and export lists
and serialize them out for consumption by the distributed backend
processes which use this pass.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D18945
llvm-svn: 266125
FreeBSD uses LLVM's libunwind on FreeBSD/arm64 today (and is expected to
use it more widely in the future), and it requires the EH frame segment
in static binaries.
This is the same as r203742 for NetBSD.
Differential Revision: http://reviews.llvm.org/D19029
llvm-svn: 266123
Clang should pass -backend-option to LLVM even when there is no target machine, since LLVM passes are still used when emitting LLVM IR.
Differential Revision: http://reviews.llvm.org/D17552
llvm-svn: 266117
(Recommit of r266002, with r266011, r266016, and not accidentally
including an extra unused/uninitialized element in LibcallRoutineNames)
AtomicExpandPass can now lower atomic load, atomic store, atomicrmw, and
cmpxchg instructions to __atomic_* library calls, when the target
doesn't support atomics of a given size.
This is the first step towards moving all atomic lowering from clang
into llvm. When all is done, the behavior of __sync_* builtins,
__atomic_* builtins, and C11 atomics will be unified.
Previously LLVM would pass everything through to the ISelLowering
code. There, unsupported atomic instructions would turn into __sync_*
library calls. Because of that behavior, Clang currently avoids emitting
llvm IR atomic instructions when this would happen, and emits __atomic_*
library functions itself, in the frontend.
This change makes LLVM able to emit __atomic_* libcalls, and thus will
eventually allow clang to depend on LLVM to do the right thing.
It is advantageous to do the new lowering to atomic libcalls in
AtomicExpandPass, before ISel time, because it's important that all
atomic operations for a given size either lower to __atomic_*
libcalls (which may use locks), or native instructions which won't. No
mixing and matching.
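As a sketch of the behavior being unified (the struct and function names
are illustrative): an atomic access wider than the target supports ends up
as an __atomic_* library call, never a __sync_* one.

#include <atomic>

// Illustrative struct wider than any native atomic width on most targets.
struct Big { long A, B, C, D; };

long firstField(std::atomic<Big> &X) {
  // Not lock-free, so the load is carried out by an __atomic_* library
  // routine (e.g. from libatomic) instead of a late __sync_* fallback.
  return X.load().A;
}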
At the moment, this code is enabled only for SPARC, as a
demonstration. The next commit will expand support to all of the other
targets.
Differential Revision: http://reviews.llvm.org/D18200
llvm-svn: 266115
and we fall back to textual inclusion, don't require the module as a whole to
be marked available; it's OK if some other file in the same module is missing,
just as it would be if the header were explicitly marked textual.
llvm-svn: 266113