Eliminates some more cases where part of the addressing
computation remains flat. Some cases with addrspacecasts
in nested constant expressions are still left behind, however.
llvm-svn: 301704
When a PHI operand has a subregister, create a COPY instead of simply
replacing the PHI output with the input.
Differential Revision: https://reviews.llvm.org/D32650
llvm-svn: 301699
I used Null rather than Zero to match the getNullValue method name.
There are some other places outside APInt where isNullValue would be more readable than isMinValue even though they do the same thing. I'll update those in future patches.
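For illustration, the new predicate reads naturally next to the existing
factory name (a minimal sketch, not code from this patch):
#include "llvm/ADT/APInt.h"
using namespace llvm;
bool isZero(const APInt &V) {
  // isNullValue() matches the getNullValue() factory name and returns the
  // same answer as the older isMinValue() for this check.
  return V.isNullValue();
}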
llvm-svn: 301695
This change was motivated by output from lld-link.exe and link.exe
getting intermixed. There's already a flush() call in message(), so
there's precedent.
llvm-svn: 301693
While debugging a miscompile I realized loopunswitch doesn't
print newlines when printing the instructions being replaced.
Ending up with a single line containing many replaced instructions
isn't the best for readability and/or mental sanity.
llvm-svn: 301692
Also, add test for data relocations and fix addend to
be signed.
Subscribers: jfb, dschuff
Differential Revision: https://reviews.llvm.org/D32513
llvm-svn: 301690
This makes it easier to read and possibly even modify the test cases, as there
is no need to keep the variable increment in steps of one. More importantly, by
using explicit variable names we do not need to rely on the implicit numbering
of statements when dumping the scop information. In a future commit, this
implicit numbering will likely not be used any more to refer to LLVM-IR values,
as it is very expensive to construct.
llvm-svn: 301689
The IntrNoMem, IntrReadMem, IntrWriteMem, and IntrArgMemOnly intrinsic
properties differ from their corresponding LLVM IR attributes by specifying
that the intrinsic, in addition to its memory properties, has no other side
effects.
The IntrHasSideEffects flag, used in combination with one of the memory flags
listed above, makes it possible to define an intrinsic such that its
properties at the CodeGen layer match its properties at the IR layer.
Patch by Tom Stellard
llvm-svn: 301685
Since -gsplit-dwarf is specified on a backend compile (in ThinLTO
parlance), it isn't passed during the frontend compile (because no ELF
object/dwo file is produced then), yet the -fno-split-dwarf-inlining
value needs to be encoded in the LLVM DebugInfo metadata to have any
effect.
So let it be specified anyway: it'll be silently ignored if -gsplit-dwarf
isn't used in the end; otherwise it'll take effect on a per-CU basis,
depending on where it's specified in the frontend compile actions.
llvm-svn: 301684
Previously, we printed out input sections and input files in
separate columns as shown below.
Address          Size             Align Out     In      File    Symbol
0000000000201000 0000000000000015     4 .text
0000000000201000 000000000000000e     4         .text
0000000000201000 000000000000000e     4                 foo.o
0000000000201000 0000000000000000     0                         _start
0000000000201005 0000000000000000     0                         f(int)
000000000020100e 0000000000000000     0                         local
0000000000201010 0000000000000002     4                 bar.o
0000000000201010 0000000000000000     0                         foo
0000000000201011 0000000000000000     0                         bar
This format doesn't make much sense because for each input section,
there's always exactly one input file. This patch changes the format
to this.
Address          Size             Align Out     In      Symbol
0000000000201000 0000000000000015     4 .text
0000000000201000 000000000000000e     4         foo.o:(.text)
0000000000201000 0000000000000000     0                 _start
0000000000201005 0000000000000000     0                 f(int)
000000000020100e 0000000000000000     0                 local
0000000000201010 0000000000000002     4         bar.o:(.text)
0000000000201010 0000000000000000     0                 foo
0000000000201011 0000000000000000     0                 bar
Differential Revision: https://reviews.llvm.org/D32657
llvm-svn: 301683
The method is called "get *Param* Alignment", and is only used for
return values exactly once, so it should take argument indices, not
attribute indices.
Avoids confusing code like:
IsSwiftError = CS->paramHasAttr(ArgIdx, Attribute::SwiftError);
Alignment = CS->getParamAlignment(ArgIdx + 1);
Add getRetAlignment to handle the one case in Value.cpp that wants the
return value alignment.
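With this change both lookups take the same zero-based argument index, and
the return value gets its own accessor (a sketch of the intended call-site
pattern, not a verbatim excerpt; RetAlign is an illustrative variable):
IsSwiftError = CS->paramHasAttr(ArgIdx, Attribute::SwiftError);
Alignment = CS->getParamAlignment(ArgIdx);  // no more "+ 1"
RetAlign = CS->getRetAlignment();           // return value alignment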
This is a potentially breaking change for out-of-tree backends that do
their own call lowering.
llvm-svn: 301682
Adds a new method finalizeLowering to TargetLoweringBase. This is in
preparation for an upcoming commit.
This function is meant for target specific adjustments to
MachineFrameInfo or register reservations.
Move the freezeRegisters() and the hasCopyImplyingStackAdjustment()
handling into the new function to prove the concept. As an added bonus,
GlobalISel no longer misses the hasCopyImplyingStackAdjustment()
handling with this change.
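An out-of-tree target would override the hook roughly like this (a
hypothetical sketch; MyTargetLowering, MyTargetFunctionInfo and
setNeedsBasePointer are illustrative names, not in-tree code):
void MyTargetLowering::finalizeLowering(MachineFunction &MF) const {
  // Target-specific adjustment, e.g. remember that a base pointer is
  // needed now that the final frame layout is known.
  if (MF.getFrameInfo().hasVarSizedObjects())
    MF.getInfo<MyTargetFunctionInfo>()->setNeedsBasePointer(true);
  // Keep the common work done by the default implementation
  // (such as freezing the reserved registers).
  TargetLoweringBase::finalizeLowering(MF);
}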
Differential Revision: https://reviews.llvm.org/D32621
llvm-svn: 301679
This patch replaces flush with a last-ditch attempt at synchronizing
the section list with the linker script "AST".
The synchronization is a bit of a hack and should in time be avoided
by creating the AST earlier so that modifications can be made directly
to it instead of modifying the section list and synchronizing it back.
This is the main step for fixing
https://bugs.llvm.org/show_bug.cgi?id=32816. With this in place I
think the only missing thing would be to have processCommands assign
section indexes as dummy offsets so that the sort in
OutputSection::finalize works.
With this LinkerScript::assignAddresses becomes much simpler, which
should help with the thunk work.
llvm-svn: 301678
Summary:
When using ThinLTO, the linker performs its own parallelism. This
change limits the number of parallel link jobs that Ninja will issue
to keep the total number of threads reasonable when linking with
ThinLTO.
Reviewers: hans, ruiu
Subscribers: mgorny, mehdi_amini, Prazek
Differential Revision: https://reviews.llvm.org/D31990
llvm-svn: 301676
As has been reported in the previous commit, codegen verification can result in
quadratic compile time increases for large functions with many scops. This is
certainly not something we would like to have in the Polly default
configuration. Hence, we disable codegen verification by default -- also to see
if this resolves some of the compilation timeouts we currently see on the AOSP
buildbots. We still leave this feature in Polly, as it has proven _very_ useful
for debugging. In fact, we may want to have a discussion if we can bring this
feature back in a way that does not impact compilation time so much.
Thanks to Eli Friedman <efriedma@codeaurora.org> for reporting this issue and
for providing the test case in the previous commit (where I forgot to
acknowledge him).
llvm-svn: 301670
Before this change, we always tried to verify the function and printed
verification errors, but just did not abort in case -polly-codegen-verify=false
was set and verification failed. As verification can become very costly -- for
large functions with many scops we may verify the very same function very often
-- this can affect compile time very negatively. Hence, we respect the
-polly-codegen-verify flag with this check, ensuring that no verification is run
if -polly-codegen-verify=false.
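A sketch of the resulting guard (illustrative names, not the exact Polly
code):
#include "llvm/IR/Function.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;
static void verifyGeneratedFunction(Function &F, bool PollyCodegenVerify) {
  // Skip the potentially very costly verifier run entirely when
  // -polly-codegen-verify=false is set.
  if (!PollyCodegenVerify)
    return;
  // verifyFunction() prints diagnostics and returns true if F is broken.
  if (verifyFunction(F, &errs()))
    report_fatal_error("Polly generated an invalid function");
}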
This reduces code generation time from 26 seconds to 4 seconds on the test
case below with -polly-codegen-verify=false:
struct X { int x; };
void a();
#define SIG (int x, X **y, X **z)
typedef void (*fn)SIG;
#define FN { for (int i = 0; i < x; ++i) { (*y)[i].x += (*z)[i].x; } a(); }
#define FN5 FN FN FN FN FN
#define FN25 FN5 FN5 FN5 FN5
#define FN125 FN25 FN25 FN25 FN25 FN25
#define FN250 FN125 FN125
#define FN1250 FN250 FN250 FN250 FN250 FN250
void x SIG { FN1250 }
llvm-svn: 301669
Don't retain Objective-C object pointers captured at block
creation that are const-qualified.
When a block captures an ObjC object pointer, clang retains the pointer
to prevent prematurely destroying the object the pointer points to
before the block is called or copied.
When the captured object pointer is const-qualified, we can avoid
emitting the retain/release pair since the pointer variable cannot be
modified in the scope in which the block literal is introduced.
For example:
void test(const id x) {
  callee(^{ (void)x; });
}
This patch implements that optimization.
rdar://problem/28894510
Differential Revision: https://reviews.llvm.org/D32601
llvm-svn: 301667
This eliminates many extra 'Idx' induction variables in loops over
arguments in CodeGen/ and Target/. It also reduces the number of places
where we assume that ReturnIndex is 0 and that we should add one to
argument numbers to get the corresponding attribute list index.
NFC
llvm-svn: 301666
This became unnecessary after D19462 landed, and will be incompatible
with an upcoming change to the summary data structures that changes how we
represent references.
llvm-svn: 301660
We found that part of the code for the -Map option takes O(m*n)
time, where m is the number of input sections in a file and n is
the number of symbols in the same file. If you do LTO, there are
usually only a few (but very large) object files as input, so this
performance characteristic was worse than I expected.
This patch rewrites the -Map option feature to speed it up.
I eliminated the O(m*n) bottleneck and also used multi-threading.
As a result, clang link time with the -Map option improved from
18.7 seconds to 11.2 seconds. Without -Map, it takes 7.7 seconds,
so the -Map option is now about 3x faster than before for this
test case (from 11.0 seconds to 3.5 seconds). The generated output
file size was 223 MiB, and the file contains 1.2M lines.
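One generic way to eliminate such an O(m*n) scan is to group the symbols by
their parent section in a single pass (an illustrative sketch, not the
actual lld code):
#include <map>
#include <string>
#include <vector>
struct InputSection { std::string Name; };
struct Symbol { std::string Name; InputSection *Section; };
// One O(n) pass buckets the symbols, so printing each of the m sections no
// longer rescans all n symbols.
std::map<InputSection *, std::vector<Symbol *>>
groupBySection(const std::vector<Symbol *> &Symbols) {
  std::map<InputSection *, std::vector<Symbol *>> Map;
  for (Symbol *S : Symbols)
    Map[S->Section].push_back(S);
  return Map;
}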
Differential Revision: https://reviews.llvm.org/D32631
llvm-svn: 301659
CONSTANT imports expect that both the `_imp_`-prefixed and non-prefixed
symbols be added to the symbol table. This allows for linking
symbols like _NSConcreteGlobalBlock in WinObjC. The previous change
would generate the import library properly by handling the option but
would not consume the generated entry properly.
llvm-svn: 301657