destination module, but one of them isn't used in the destination module. If
another module comes along and then uses the unused type, there could be type
conflicts when the modules are finally linked together. (This happened when
building LLVM.)
The test that was reduced is:
Module A:
%Z = type { %A }
%A = type { %B.1, [7 x x86_fp80] }
%B.1 = type { %C }
%C = type { i8* }
declare void @func_x(%C*, i64, i64)
declare void @func_z(%Z* nocapture)
Module B:
%B = type { %C.1 }
%C.1 = type { i8* }
%A.2 = type { %B.3, [5 x x86_fp80] }
%B.3 = type { %C.1 }
define void @func_z() {
%x = alloca %A.2, align 16
%y = getelementptr inbounds %A.2* %x, i64 0, i32 0, i32 0
call void @func_x(%C.1* %y, i64 37, i64 927) nounwind
ret void
}
declare void @func_x(%C.1*, i64, i64)
declare void @func_y(%B* nocapture)
(Unfortunately, this test doesn't fail under llvm-link, only during an LTO
link.) The '%C' and '%C.1' clash. The destination module gets the '%C'
declaration. When merging Module B, it looks at the '%C.1' subtype of the '%B'
structure. It adds that in, because that's cool. And when '%B.3' is processed,
it uses the '%C.1'. But the '%B' has used '%C' and we prefer to use '%C'. So the
'@func_x' type is changed to 'void (%C*, i64, i64)', but the type of '%x' in
'@func_z' remains '%A.2'. The GEP resolves to a '%C.1', which conflicts with the
'@func_x' signature.
We can resolve this situation by making sure that the type is used in the
destination before saying that it should be used in the module being merged in.
With this fix, LLVM and Clang both compile under LTO.
<rdar://problem/10913281>
llvm-svn: 153351
same basic block, and it's not safe to insert code in the successor
blocks if the edges are critical edges. Splitting those edges is
possible, but undesirable, especially on the unwind side. Instead,
make the bottom-up code motion consider invokes to be part of
their successor blocks, rather than part of their parent blocks, so
that it doesn't push code past them and onto the edges. This fixes
PR12307.
llvm-svn: 153343
This is necessary if the client wants to be able to mutate TargetOptions (for example, fast FP math mode) after the initial creation of the ExecutionEngine.
llvm-svn: 153342
dominated by Root, check that B is available throughout the scope. This
is obviously true (famous last words?) given the current logic, but the
check may be helpful if more complicated reasoning is added one day.
llvm-svn: 153323
(and hopefully on Windows). The bots have been down most of the day
because of this, and it's not clear to me what all will be required to
fix it.
The commits started with r153205, then r153207, r153208, and r153221.
The first commit seems to be the real culprit, but I couldn't revert
a smaller number of patches.
When resubmitting, r153207 and r153208 should be folded into r153205;
they were simple build fixes.
llvm-svn: 153241
execution-time regression for nsieve-bits on the ARMv7 -O0 -g nightly tester.
This may also improve compile-time on architectures that would otherwise
generate a libcall for urem (e.g., ARM) or fall back to the DAG selector.
rdar://10810716
llvm-svn: 153230
This is in braces so that it doesn't conflict with the existing %p.
It uses braces instead of parens because parens would have to be
regex-escaped.
llvm-svn: 153213
1. Declare a virtual function getPointerToNamedFunction() in JITMemoryManager
2. Move the implementation of getPointerToNamedFunction() from JIT/MCJIT to DefaultJITMemoryManager.
llvm-svn: 153205
Type legalization can zero-extend the elements of the build_vector node, so,
for example, we may have an <8 x i8> with i32 elements of value 255. That
should return 'true' for the vector being all ones.
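As a hedged illustration (not the actual DAG combiner code), the check has
to mask each element back to its source width before comparing:

#include <cstdint>
#include <vector>

// Each element was zero-extended, e.g. i8 255 is stored as i32 0x000000FF,
// so compare only the low SrcBits of every element against all-ones.
static bool isAllOnesAfterZext(const std::vector<uint64_t> &Elts,
                               unsigned SrcBits) {
  uint64_t Mask = (SrcBits >= 64) ? ~0ULL : ((1ULL << SrcBits) - 1);
  for (uint64_t V : Elts)
    if ((V & Mask) != Mask)
      return false;
  return true;
}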
llvm-svn: 153203
not attached to a basic block or function. There are conservatively
correct answers in these cases, and this makes the analysis more useful
in contexts where we have a partially formed bit of IR.
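A minimal sketch of the conservative pattern, assuming only the standard
Instruction API (the helper name is hypothetical, include paths are modern):

#include "llvm/IR/Instruction.h"
#include "llvm/IR/Value.h"

// An instruction with no parent block is partially formed IR; answer
// "no simplification found" instead of dereferencing a null parent.
static llvm::Value *simplifyGuarded(llvm::Instruction *I) {
  if (!I->getParent())
    return nullptr; // conservatively correct
  // ... the usual analysis, which may walk the enclosing block, goes here.
  return nullptr;
}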
I don't have any way to test this directly... suggestions welcome here,
but I'm not seeing anything sadly. I only found this using a subsequent
patch to the inliner which runs instsimplify on partially inlined
instructions, and even then only on a quite large program. I never got
a reasonable testcase out of it, and anything I do get is likely to be
quite fragile due to requiring an interaction of two different passes,
and the only result being a segfault if it goes wrong.
llvm-svn: 153176
Adds /usr/lib/debug early to the list, as some systems (Debian) have unstripped libs in there
Adds /lib/i386-linux-gnu for systems that do multiarch (Debian)
llvm-svn: 153174
We can simply confirm that the handle has been released by opening the file
with EXCLUSIVE access. Attempting to rename it was a bad approach.
Disable win32file on ImportError. Thanks to Francois for letting me know.
FIXME: Could we report a warning or notification if win32file is not found?
llvm-svn: 153172
Remaining "uncategorized" functions have been organized into their
proper place in the hierarchy. Some functions were moved around so
groups are defined together.
No code changes were made.
llvm-svn: 153169
This gives a lot of love to the docs for the C API. Like Clang's
documentation, the C API is now organized into a Doxygen "module"
(LLVMC). Each C header file is a child of the main module. Some modules
(like Core) have a hierarchy of their own. The produced documentation is
thus better organized (before everything was in one monolithic list).
This patch also includes a lot of new documentation for APIs in Core.h.
It doesn't document them all, but it is better than none. Function docs are
missing @param and @return annotations, but the documentation body now
commonly provides help details (like the expected llvm::Value sub-type
to expect).
llvm-svn: 153157
These changes allow us to compile big endian from the command line for 32 bit
Mips targets. This patch will result in code and data actually being produced
in the correct endianness.
llvm-svn: 153153
ImmutAVLTree uses random unsigned values as keys into a DenseMap,
which could possibly happen to be the same value as the Tombstone or
Empty keys in the DenseMap.
Test case is hard to come up with. We randomly get failures on the
internal static analyzer bot, which most likely hits this issue
(hard to be 100% sure without the full stack).
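For illustration only, a hedged sketch of one way to generate keys that can
never collide with the map's reserved values (modern include path assumed):

#include "llvm/ADT/DenseMapInfo.h"
#include <cstdlib>

// DenseMapInfo<unsigned> reserves two values for its own bookkeeping; a
// random key that happens to equal either one corrupts the map, so skip them.
static unsigned nextSafeKey() {
  for (;;) {
    unsigned K = static_cast<unsigned>(std::rand());
    if (K != llvm::DenseMapInfo<unsigned>::getEmptyKey() &&
        K != llvm::DenseMapInfo<unsigned>::getTombstoneKey())
      return K;
  }
}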
llvm-svn: 153148
relocations (i.e., pieces of data whose addresses
are referred to elsewhere in the binary image) and
update the references when the section containing
the relocations moves. The way this works is that
there is a map from section IDs to lists of
relocations.
Because the relocations are associated with the
section containing the data being referred to, they
are updated only when the target moves. However,
many data references are relative and also depend
on the location of the referrer.
To solve this problem, I introduced a new data
structure, Referrer, which simply contains the
section being referred to and the index of the
relocation in that section. These referrers are
associated with the source containing the
reference that needs to be updated. Now,
regardless of which end of the relocation moves,
the relocation will be updated correctly.
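A hedged sketch of the bookkeeping described above (the field and map names
are assumptions, not the actual RuntimeDyld code):

#include <map>
#include <vector>

struct Relocation { /* offset, addend, kind, ... */ };

// A Referrer names a relocation stored in another section's list: the
// section being referred to, plus the relocation's index in that list.
struct Referrer {
  unsigned TargetSectionID;
  unsigned Index;
};

// Relocations are keyed by the section containing the referred-to data;
// Referrers are keyed by the section containing the reference itself, so
// a move of either section finds the entries that need fixing up.
static std::map<unsigned, std::vector<Relocation> > Relocations;
static std::map<unsigned, std::vector<Referrer> > Referrers;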
llvm-svn: 153147
Do not call SplitBlockPredecessors on a loop preheader when one of the
predecessors is an indirectbr. Otherwise, you will hit this assert:
!isa<IndirectBrInst>(Preds[i]->getTerminator()) && "Cannot split an edge from an IndirectBrInst"
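A caller-side guard mirroring that assert might look like this (a sketch,
using modern include paths):

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"
#include "llvm/Support/Casting.h"
#include <vector>

// Returns false if any predecessor ends in an indirectbr, in which case
// SplitBlockPredecessors must not be called on this preheader.
static bool canSplitPreds(const std::vector<llvm::BasicBlock *> &Preds) {
  for (llvm::BasicBlock *P : Preds)
    if (llvm::isa<llvm::IndirectBrInst>(P->getTerminator()))
      return false;
  return true;
}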
llvm-svn: 153134
instead of skipping the current loop.
My prior fix was incomplete because of an overzealous compile-time optimization:
Better fix for: <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce
llvm-svn: 153131
ARMBaseRegisterInfo::canRealignStack was checking for variable-sized objects
but not for stack adjustments around calls. Use hasReservedCallFrame() to
check for both. The hasBasePointer function was already correctly checking
both conditions, so the effect of this was that a base pointer would be used
without checking whether the base pointer register could be reserved. I don't
have a small testcase for this.
<rdar://problem/11075906>
llvm-svn: 153110
This results in things such as
vmovups 16(%rdi), %xmm0
vinsertf128 $1, %xmm0, %ymm0, %ymm0
being combined into
vinsertf128 $1, 16(%rdi), %ymm0, %ymm0
rdar://11076953
llvm-svn: 153092
i128). In that case, we may not be able to print out the MCExpr as an
expression. For instance, we could have an MCExpr like this:
0xBEEF0000BEEF0000 | (0xBEEF0000BEEF0000 << 64)
The MCExpr printer handles sizes up to 64 bits, but this expression would
require 128 bits. In this situation, try to evaluate the constant expression
and emit the result in 64-bit chunks.
<rdar://problem/11070338>
llvm-svn: 153081
a variable. The previous code would break an invariant of the debug info
updating code. This will regress debug info for arguments where
we elide the alloca created.
Fixes rdar://11066468
llvm-svn: 153074
1) opt is not usually in the same directory as the target program. Even for
bugpoint as a standalone app, it is more portable to search
in PATH.
2) The bugpoint driver accounts for opt plugins, but does not list them in
the final output command.
Patch by Dmitry Mikushin!
llvm-svn: 153066
X86InstrCompiler.td.
It also adds -mcpu=generic to the legalize-shift-64.ll test so the test
will pass when run on an Intel Atom CPU, which would otherwise
produce an instruction schedule that differs from the one the test expects.
llvm-svn: 153033
violations I introduced. Also sort some of the instructions to get
a more consistent ordering.
Suggestions on still better / more consistent formatting would be
welcome. I'm actually tempted to use a macro to define all of the
delegate methods...
llvm-svn: 153030
fast-isel before emitting code. If the program bails after code was emitted,
then it could lead to the stack being adjusted more than once (two
CALLSEQ_BEGINs emitted) but being adjusted back only once after the call. This
leads to general badness and gnashing of teeth.
<rdar://problem/11050630>
llvm-svn: 152959
It's not a good style idea, as the registers will be laid down in memory in
numerical order, not the order they're in the list, but it's legal. vldm/vstm
are stricter.
rdar://11064740
llvm-svn: 152943
In the previous case,
RUN: foo -o %t
RUN: FileCheck < %t
RUN: bar -o %t
the read handle still open from the 2nd line might prevent the 3rd line's bar
from manipulating %t (removing or renaming it).
llvm-svn: 152916
alignment. If that's the case, then we want to make sure that we don't increase
the alignment of the store instruction, because if we increase it to be "more
aligned" than the pointer, code-gen may use instructions which require a greater
alignment than the pointer guarantees.
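In other words (a minimal sketch of the rule, not the actual transform):

#include <algorithm>

// A store may only claim the alignment its pointer operand is proven to
// have; never raise the "preferred" alignment past that bound.
static unsigned clampStoreAlign(unsigned PreferredAlign,
                                unsigned KnownPtrAlign) {
  return std::min(PreferredAlign, KnownPtrAlign);
}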
<rdar://problem/11043589>
llvm-svn: 152907
It was added in 2007 as the first cut at supporting no-inline
attributes, but we didn't have function attributes of any form at the
time. However, it was added without any mention in the LangRef or other
documentation.
Later on, in 2008, Devang added function notes for 'inline=never' and
then turned them into proper function attributes. From that point
onward, as far as I can tell, the world moved on, and no one has touched
'llvm.noinline' in any meaningful way since.
Its time has now come. We have had better mechanisms for doing this for
a long time, all the frontends I'm aware of use them, and this is just
holding back progress. Given that it was never a documented feature of
the IR, I've provided no auto-upgrade support. If people know of real,
in-the-wild bitcode that relies on this, yell at me and I'll add it, but
I *seriously* doubt anyone cares.
llvm-svn: 152904
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.
Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.
The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.
The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern, we were just needlessly paying the uniquing cost on every
insert.
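The sort+unique pattern in question is the standard one; a tiny sketch:

#include <algorithm>
#include <vector>

// Two-phase usage: bulk-insert without uniquing, then sort and strip
// duplicates once, paying the deduplication cost a single time.
static void sortAndUnique(std::vector<int> &V) {
  std::sort(V.begin(), V.end());
  V.erase(std::unique(V.begin(), V.end()), V.end());
}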
This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.
I believe there is no functional change here, but yell if you spot one.
None are intended.
Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.
llvm-svn: 152903
analysis implementation. The header was already separated. Also cleanup
all the comments in the header to follow a nice modern doxygen form.
There is still plenty of cruft here, but some of that will fall out in
subsequent refactorings and this was an easy step in the right
direction. No functionality changed here.
llvm-svn: 152898
These edges are not really necessary, but it is consistent with the
way we currently create physreg edges. Scheduler heuristics that
expect a DAG edge to the block terminator could benefit from this
change. Although in the future I hope we have a better mechanism for
modeling latency across scheduling regions.
llvm-svn: 152895
Only record IVUsers that are dominated by simplified loop
headers. Otherwise SCEVExpander will crash while looking for a
preheader.
I previously tried to work around this in LSR itself, but that was
insufficient. This way, LSR can continue to run if some uses are not
in simple loops, as long as we don't attempt to analyze those users.
Fixes <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce
llvm-svn: 152892
on our internal nightly testers. So, basically revert r152486 again.
Abbreviated original commit message:
Implement a more intelligent way of spilling uses across an invoke boundary.
It looks as if Chandler's inlining work, r152737, exposed an issue.
llvm-svn: 152887
It caused MSP430DAGToDAGISel::SelectIndexedBinOp() to be miscompiled.
When the two ReplaceUses() calls are expanded inline, the vtable of the base
class is stored into the latter (ISelUpdater)ISU.
llvm-svn: 152877
theoretical fix since it only matters for types with >= 2^63 bits (!) and also
only matters if pointers have more than 64 bits, which is not supported anyway.
llvm-svn: 152831
We cannot limit the concatenated instruction names to 64K. ARM is
already at 32K, and it is easy to imagine a target with more
instructions.
llvm-svn: 152817
This patch limited the concatenated register names to 64K which meant
that the total number of registers was many times less than 64K.
If any compilers actually enforce the 64K limit on string literals, and
it turns out to be a problem, we should fix that problem by not using
long string literals.
llvm-svn: 152816
This needs a test, but it will take some time to figure
out the best way to get an input that will produce > 2^16 relocs.
Patch by Graydon Hoare!
llvm-svn: 152787
out the DW_AT_name. Older gdbs unfortunately still use it to
disambiguate member functions in templated classes (gdb.cp/templates.exp).
rdar://11043421 (which is now deferred for a bit)
llvm-svn: 152782
This results in things such as
vmovaps -96(%rbx), %xmm1
vinsertf128 $1, %xmm1, %ymm0, %ymm0
being combined into
vinsertf128 $1, -96(%rbx), %ymm0, %ymm0
rdar://10643481
llvm-svn: 152762
correlated pairs of pointer arguments at the callsite. This is designed
to recognize the common C++ idiom of begin/end pointer pairs when the
end pointer is a constant offset from the begin pointer. With the
C-based idiom of a pointer and size, the inline cost saw the constant
size calculation, and this provides the same level of information for
begin/end pairs.
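For concreteness, the kind of call this recognizes (illustrative C++, not
LLVM source):

// 'end' is a constant offset from 'begin', so once inlined the trip count
// is as visible as it would be for a (pointer, size) interface.
static int sum(const int *begin, const int *end) {
  int S = 0;
  for (const int *I = begin; I != end; ++I)
    S += *I;
  return S;
}

int caller() {
  int Buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};
  return sum(Buf, Buf + 8); // constant-offset begin/end pair
}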
In order to propagate this information we have to search for candidate
operations on a pair of pointer function arguments (or derived from
them) which would be simplified if the pointers had a known constant
offset. Then the callsite analysis looks for such pointer pairs in the
argument list, and applies the appropriate bonus.
This helps LLVM detect that half of bounds-checked STL algorithms
(such as hash_combine_range, and some hybrid sort implementations)
disappear when inlined with a constant size input. However, it's not
a complete fix due to the inaccuracy of our cost metric for constants in
general. I'm looking into that next.
Benchmarks showed no significant code size change, and very minor
performance changes. However, specific code such as hashing is showing
significantly cleaner inlining decisions.
llvm-svn: 152752
Commit r152704 exposed a latent MSVC limitation (aka bug).
Both ilist and iplist contain the same function:
template<class InIt> void insert(iterator where, InIt first, InIt last) {
for (; first != last; ++first) insert(where, *first);
}
Also ilist inherits from iplist and ilist contains a "using iplist<NodeTy>::insert".
MSVC doesn't know which one to pick and complains with an error.
I think it is safe to delete ilist::insert since it is redundant anyway.
llvm-svn: 152746
which are small enough to themselves be inlined. Delaying in this manner
can be harmful if the function is ineligible for inlining in some (or
many) contexts as it pessimizes the code of the function itself in the
event that inlining does not eventually happen.
Previously the check was written to only do this delaying of inlining
for static functions in the hope that they could be entirely deleted and
in the knowledge that all callers of static functions will have the
opportunity to inline if it is in fact profitable. However, with C++ we
get two other important sources of functions where the definition is
always available for inlining: inline functions and templated functions.
This patch generalizes the inliner to allow linkonce-ODR (the linkage
such C++ routines receive) to also qualify for this delay-based
inlining.
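For example, both of the following receive linkonce_odr linkage when
compiled as C++:

// An inline function and a function template: every TU that uses them
// emits a mergeable definition, so the body is always available to the
// inliner, much like a static function's.
inline int twice(int x) { return 2 * x; }

template <typename T>
T square(T x) { return x * x; }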
Benchmarking across a range of large real-world applications shows
roughly 2% size increase across the board, but an average speedup of
about 0.5%. Some benchmarks improved over 2%, and the 'clang' binary
itself (when bootstrapped with this feature) shows a 1% -O0 performance
improvement when run over all Sema, Lex, and Parse source code smashed
into a single file. A clean re-build of Clang+LLVM with a bootstrapped
Clang shows approximately 2% improvement, but that measurement is often
noisy.
llvm-svn: 152737
There were cases where a value could be used such that its uses both cross
an invoke and do NOT cross an invoke. This could happen in the landing pads.
In that case, we will demote the value to the stack as we did before.
<rdar://problem/10609139>
llvm-svn: 152705
New flags: -misched-topdown, -misched-bottomup. They can be used with
the default scheduler or with -misched=shuffle. Without either
topdown/bottomup flag -misched=shuffle now alternates scheduling
direction.
LiveIntervals update is unimplemented with bottom-up scheduling, so
only -misched-topdown currently works.
Capped the ScheduleDAG hierarchy with a concrete ScheduleDAGMI class.
ScheduleDAGMI is aware of the top and bottom of the unscheduled zone
within the current region. Scheduling policy can be plugged into
the ScheduleDAGMI driver by implementing MachineSchedStrategy.
ConvergingScheduler is now the default scheduling algorithm.
It exercises the new driver but still does no reordering.
llvm-svn: 152700
output (we're emitting a specification already and the information
isn't changing).
Saves 1% on the debug information for a build of llvm.
Fixes rdar://11043421
llvm-svn: 152697
(i16 load $addr+c*sizeof(i16)) and replace uses of (i32 vextract) with the
i16 load. It should issue an extload instead: (i32 extload $addr+c*sizeof(i16)).
rdar://11035895
llvm-svn: 152675
take a TargetLibraryInfo parameter. Internally, rather than passing TD, TLI
and DT parameters around all over the place, introduce a struct for holding
them.
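A hedged sketch of such a struct (the actual name and members in the patch
are an assumption here; TargetData was the pre-DataLayout name):

class TargetData;
class TargetLibraryInfo;
class DominatorTree;

// Bundles the analyses that simplification may consult; each pointer may
// be null when the corresponding analysis is unavailable.
struct Query {
  const TargetData *TD;
  const TargetLibraryInfo *TLI;
  const DominatorTree *DT;
};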
llvm-svn: 152623
Also refactor the existing OProfile profiling code to reuse the same interfaces as the VTune profiling code.
In addition, unit tests for the profiling interfaces were added.
This patch was prepared by Andrew Kaylor and Daniel Malea, and reviewed in the llvm-commits list by Jim Grosbach
llvm-svn: 152620
offset accumulation to use a boring APInt instead of ConstantExprs.
I didn't go all the way to an 'int64_t' because I wanted APInt to handle
any magic required to properly wrap the arithmetic when the pointer
width is <64 bits. If there is a significant penalty from using APInt
here, first off WTF, and secondly let me know and I'll do the math by
hand.
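A small sketch of why APInt fits here (illustrative, assuming the usual
APInt API):

#include "llvm/ADT/APInt.h"

// Accumulate byte offsets at the pointer's width; on a 32-bit target
// PtrBits is 32, and the arithmetic wraps there instead of at 64 bits.
static llvm::APInt addOffset(llvm::APInt Offset, int64_t Scaled) {
  unsigned PtrBits = Offset.getBitWidth();
  return Offset + llvm::APInt(PtrBits, static_cast<uint64_t>(Scaled),
                              /*isSigned=*/true);
}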
I've left one layer still operating w/ ConstantExpr because it makes the
interface quite a bit simpler, and that one isn't iterative so has much
lower cost.
I suppose this may potentially speed up some strange compilation
situations, but I don't really expect much. It should have no functional
impact either way.
llvm-svn: 152590