When comparing two COMDAT sections, we need to take section values
and associative sections into account. This patch fixes that bug,
which caused llvm-tblgen to crash when linked with /opt:lldicf.
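A minimal sketch of the comparison being described (the struct and
field names are illustrative, not LLD's actual classes):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Stand-in for a COMDAT section: its raw contents plus any
    // IMAGE_COMDAT_SELECT_ASSOCIATIVE children that live or die with it.
    struct Section {
      std::string Data;
      std::vector<Section *> Assoc;
    };

    // Two sections are ICF-foldable only if both the section values and
    // the associative sections compare equal, recursively.
    bool equals(const Section &A, const Section &B) {
      if (A.Data != B.Data || A.Assoc.size() != B.Assoc.size())
        return false;
      for (size_t I = 0, E = A.Assoc.size(); I != E; ++I)
        if (!equals(*A.Assoc[I], *B.Assoc[I]))
          return false;
      return true;
    }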
One thing I don't understand yet is that this logic seems to be
too strict. The MSVC linker is able to create more compact executables
(which of course work correctly). With this ICF algorithm, LLD is
able to make executables smaller, but the outputs are larger than
MSVC's. There must be something I'm missing here.
llvm-svn: 240897
Format @autoreleasepool properly for the Attach brace style
by recognizing @autoreleasepool as a block introducer.
Patch from Strager Neds!
http://reviews.llvm.org/D10372
llvm-svn: 240896
We had a hack in SDAGBuilder in place to work around this but now we
can avoid that. Call BuildExactSDIV from BuildSDIV so DAGCombiner can
perform this trick automatically.
The added check in DAGCombiner is necessary to prevent exact sdiv by pow2
from regressing as the target-specific pow2 lowering is not aware of
exact bits yet.
This is mostly covered by existing tests. One side effect is that we
get the better lowering for exact vector sdivs now too :)
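For reference, the trick itself, as a standalone sketch of the standard
technique (my illustration, not the DAGCombiner code): an exact sdiv by
a constant becomes an arithmetic shift for the power-of-two factor
followed by a multiply with the odd factor's inverse mod 2^32.

    #include <cstdint>

    // Multiplicative inverse of odd d modulo 2^32 by Newton's iteration;
    // each step doubles the number of correct low bits (3 -> 6 -> 12 -> 24 -> 48).
    static uint32_t inverseMod32(uint32_t d) {
      uint32_t x = d; // d*d == 1 (mod 8) for odd d, so 3 bits start correct
      for (int i = 0; i < 4; ++i)
        x *= 2 - d * x;
      return x;
    }

    // Only valid when n is known to be an exact multiple of d (d != 0).
    int32_t exactSDiv(int32_t n, int32_t d) {
      int Shift = __builtin_ctz((uint32_t)d);   // power-of-two factor of d
      uint32_t Odd = (uint32_t)(d >> Shift);    // remaining odd factor
      return (int32_t)((uint32_t)(n >> Shift) * inverseMod32(Odd));
    }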
llvm-svn: 240891
In the DW_AT_bit_offset computation, the byte offset is in fact also
endian-dependent, as it needs to point to the storage unit containing
the most-significant bit of the bitfield.
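To make the conversion concrete (my own illustration of the DWARF 2/3
semantics, not the exact LLVM code): DW_AT_bit_offset measures from the
most-significant bit of the storage unit, so a field position known
relative to the LSB has to be flipped, and the accompanying byte offset
must select the storage unit holding that MSB.

    #include <cstdint>

    // Distance in bits from the MSB of the storage unit to the MSB of the
    // bitfield, given the field's offset from the LSB of the same unit.
    uint64_t bitOffsetFromMSB(uint64_t StorageSizeInBits,
                              uint64_t OffsetFromLSB,
                              uint64_t FieldSizeInBits) {
      return StorageSizeInBits - OffsetFromLSB - FieldSizeInBits;
    }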
I'm so looking forward to emitting the endian-agnostic DWARF 3 version
instead.
llvm-svn: 240890
Summary:
Previously it (incorrectly) used GPRs.
Patch by Simon Dardis. A couple of small corrections by me.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10567
llvm-svn: 240883
ctypes 0.3 and earlier contain an interface-defining bug:
their ptr_of_raw_address accepts Int64 and not Nativeint. ctypes 0.4
was not released during the 3.6 cycle, and because of that, LLVM 3.6
was released with ctypes 0.3 as a dependency, which now breaks
the build on modern ctypes.
Unbreak.
llvm-svn: 240882
Summary:
On PPC64, half the msan tests fail with an infinite recursion through
GetStackTrace like this:
#0 __msan::GetStackTrace
#1 __msan_memcpy
#2 ?? () from /lib64/libgcc_s.so.1
#3 ?? () from /lib64/libgcc_s.so.1
#4 _Unwind_Backtrace
#5 __sanitizer::BufferedStackTrace::SlowUnwindStack
#6 __sanitizer::BufferedStackTrace::Unwind
#7 __msan::GetStackTrace
#8 __interceptor_calloc
#9 _dl_allocate_tls
#10 pthread_create@@GLIBC_2.17
#11 __interceptor_pthread_create
#12 main
The problem is that we call _Unwind_Backtrace to get a stack trace, but
_Unwind_Backtrace calls memcpy, which we intercept, triggering another
stack trace.
This patch fixes it in __msan_memcpy by skipping the stack trace if
IsInSymbolizer(). This works because GetStackTrace already creates a
SymbolizerScope to "block reports from our interceptors during
_Unwind_Backtrace".
Reviewers: samsonov, wschmidt, eugenis
Reviewed By: eugenis
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10762
llvm-svn: 240878
If we are dealing with a pointer induction variable, isInductionPHI
gives back a step value of Stride / size of pointer. However, we might
be indexing with a legal type wider than the pointer width.
Handle this by inserting casts where appropriate instead of crashing.
This fixes PR23954.
llvm-svn: 240877
The PruneEH pass tries to annotate functions as 'noreturn' if it doesn't
see a ReturnInst. However, a naked function containing inline assembly
can contain control flow leaving the function.
This fixes PR23971.
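An illustration of the hazard (hypothetical names, x86 inline assembly
with GCC/Clang attributes):

    extern "C" void real_impl();

    // No compiler-generated prologue/epilogue and no IR-level ReturnInst,
    // yet control still leaves the function through the jmp, so inferring
    // 'noreturn' from the missing ReturnInst would be wrong.
    extern "C" __attribute__((naked)) void trampoline() {
      __asm__("jmp real_impl");
    }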
llvm-svn: 240876
This case had been failing on testers that didn't have x86 support. Rather
than XFAIL it on testers without x86 support, I've just assembled it and used
the raw object as the test input.
llvm-svn: 240875
This function is actually *very* hot. It is hard to see currently
because the call graph is very recursive, but I'm working to remove
that, and when I do, this function shows up significantly higher in the
profile (up to 5%!) and so is worth avoiding the call overhead.
No specific perf gain I can measure yet (below the noise), but likely to
have more impact as we stop cluttering the call graph.
Differential Revision: http://reviews.llvm.org/D10788
llvm-svn: 240873
StringRefs. This uses LLVM's hashing rather than the standard library's,
and a closed-addressing hash table rather than chaining.
This improves the Windows self-link of LLD by 4.4% (averaged over 10
runs, with well under 1% of variance on each).
There is still some room to improve here. Two things I clearly see in
the profile:
1) This is one of the biggest stress tests for the LLVM hashing code. It
actually consumes something like 3-4% of the link time after the
change.
2) The way that StringRef keys are handled in the DenseMap interface is
pretty suboptimal. We pay the price of checking for empty and
tombstone keys when we could only possibly be looking for a normal
key. But fixing this requires invasive API changes.
So there is still some headroom here.
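For point 2, a sketch of where those checks come from (illustrative
types, not the actual DenseMapInfo code): the map reserves two
impossible key values for empty and tombstone slots, and every probe
has to rule both out before doing a real comparison.

    #include <cstdint>
    #include <string_view>

    struct StringKeyInfo {
      // Two out-of-band pointer values mark empty and erased slots.
      static std::string_view getEmptyKey() {
        return {reinterpret_cast<const char *>(~uintptr_t(0)), 0};
      }
      static std::string_view getTombstoneKey() {
        return {reinterpret_cast<const char *>(~uintptr_t(1)), 0};
      }
      static bool isEqual(std::string_view A, std::string_view B) {
        // Sentinels compare by pointer identity; real keys by contents.
        // Lookups of normal keys still pay for these sentinel checks.
        if (A.data() == getEmptyKey().data() ||
            A.data() == getTombstoneKey().data() ||
            B.data() == getEmptyKey().data() ||
            B.data() == getTombstoneKey().data())
          return A.data() == B.data();
        return A == B;
      }
    };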
Differential Revision: http://reviews.llvm.org/D10684
llvm-svn: 240871
Summary:
The current implementation doesn't always flush all pending labels
before emitting data, which can result in incorrectly placed labels
when instruction bundling is enabled and the -mc-relax-all flag is
being used. To address this issue, we always flush pending labels before
emitting data.
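A sketch of the shape of the fix (simplified, with names approximating
the MCObjectStreamer interface rather than quoting it):

    // Data emission now mirrors instruction emission: attach any labels
    // that are waiting for the next fragment before appending the bytes.
    void Streamer::emitBytes(StringRef Data) {
      flushPendingLabels(); // previously skipped on some data paths
      MCDataFragment *DF = getOrCreateDataFragment();
      DF->getContents().append(Data.begin(), Data.end());
    }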
The change was tested by running PNaCl toolchain trybots with
-mc-relax-all flag set.
Fixes https://code.google.com/p/nativeclient/issues/detail?id=4063
Test Plan: Regression test attached
Reviewers: mseaborn
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D10325
llvm-svn: 240870
Summary:
Ensure that fragments are bundle aligned when instruction bundling
is enabled and the -mc-relax-all flag is set. This is implicitly
assumed by the bundle padding implementation but this assumption
does not hold when custom alignment is being used.
The change was tested by running PNaCl toolchain trybots with
-mc-relax-all flag set.
Fixes https://code.google.com/p/nativeclient/issues/detail?id=4063
Test Plan: Regression test attached
Reviewers: mseaborn
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D10044
llvm-svn: 240869
There are two main reasons why a linked list makes sense for
`DIEValueList`.
1. We want `DIE` to be on a `BumpPtrAllocator` to improve teardown
efficiency. Making `DIEValueList` array-based would make that much
more complicated.
2. The singly-linked list is fairly memory efficient. The histogram
[1] shows that most DIEs have relatively few values, so we often pay
less than the 2- or 3-pointer static overhead of a vector. Furthermore,
we don't know ahead of time exactly how many values a `DIE` needs,
so a vector-like scheme will on average over-allocate by ~50%. As
it happens, that's the same memory overhead as the linked list node.
[1]: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-May/085910.html
The comment I added to the code is a little more succinct, but I think
it's enough to give the idea.
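Roughly the shape in question (a sketch, not the actual DIEValueList
code):

    struct DIEValue; // discriminated attribute/form/value record

    // One trailing pointer per value; nodes come from the same
    // BumpPtrAllocator as the DIE, so teardown is a single free.
    struct ValueNode {
      DIEValue *Value;
      ValueNode *Next = nullptr;
    };

    struct DIEValueList {
      ValueNode *Head = nullptr;
      ValueNode *Tail = nullptr; // O(1) append, preserving insertion order
    };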
llvm-svn: 240868
Allow callers of `Value::print()` and `Metadata::print()` to pass in a
`ModuleSlotTracker`. This allows them to pay only once for calculating
module-level slots (such as Metadata).
This is related to PR23865, where there was a huge cost for
`MachineFunction::print()`. Although I don't have a *particular* user
in mind for this new code, I have hit big slowdowns before when running
`opt -debug`, and I think this will be useful. Going forward, if
someone hits a big slowdown with `print()` statements, they can create a
`ModuleSlotTracker` and send it through. Similarly, adding support to
`Value::dump()` and `Metadata::dump()` should be trivial.
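A usage sketch (assuming the new print() overloads this change
describes):

    #include "llvm/IR/Function.h"
    #include "llvm/IR/ModuleSlotTracker.h"
    #include "llvm/Support/raw_ostream.h"

    void dumpFunction(const llvm::Function &F) {
      // Pay for module-level slot numbering (such as metadata) once...
      llvm::ModuleSlotTracker MST(F.getParent());
      for (const llvm::BasicBlock &BB : F)
        for (const llvm::Instruction &I : BB) {
          // ...then reuse it for every print() call.
          I.print(llvm::errs(), MST);
          llvm::errs() << '\n';
        }
    }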
I added unit tests to be sure the `print()` functions actually behave
the same way with and without the slot tracker.
llvm-svn: 240867
It is possible for a global to be substituted with another global of a
different type or a different kind (e.g. an alias) at IR link time. One
example of this scenario is when a Microsoft ABI vtable is substituted with
an alias referring to a larger vtable containing an RTTI reference.
This will cause the global to be RAUW'd with a possibly bitcasted reference
to the other global. This will of course also affect any references to the
global in bitset metadata.
The right way to handle such metadata is simply to ignore it. This is sound
because the linked module should contain another copy of the bitset entries as
applied to the new global.
llvm-svn: 240866
The parser provides a convenient interface for reading LLVM stackmap v1 sections
in object files.
This patch also includes a new option for llvm-readobj, '-stackmap', which uses
the parser to pretty-print stackmap sections for debugging/testing purposes.
llvm-svn: 240860
Another follow-up related to r240848: try a little harder to share slot
tracking calculations within a single `MachineInstr` dump. This is
unrelated to `MachineFunction::print()`, since that should be passing
through the function's `ModuleSlotTracker` by now, but could affect the
speed of dumping from a debugger if there is more than one IR-level
operand.
llvm-svn: 240852
This commit serializes global address machine operands. It doesn't
serialize the operand's offset and target flags; only the global value
reference is serialized.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10671
llvm-svn: 240851
This change extends the detection of base pointers for vector constructs to handle arbitrary phi and select nodes. The existing non-vector code already handles those, so this is basically just extending the vector special case to be less special cased. It still isn't generalized vector handling since we can't handle arbitrary vector instructions (e.g. shufflevectors), but it's a lot closer.
The general structure of the change is as follows:
* Extend the base defining value relation over a subset of vector instructions and vector typed phi & select instructions.
* Move scalarization from before base pointer rewriting to after base pointer rewriting. The extension of the BDV relation is sufficient to find vector base phis for vector inputs.
* Preserve the existing special case logic for when the base of a vector element is locally obvious. This general idea could be extended to the scalar case as well.
Differential Revision: http://reviews.llvm.org/D10461#inline-84275
llvm-svn: 240850
Summary:
Some front ends make kernel pointers global already. In that case,
handlePointerParams does nothing.
Test Plan: more tests in lower-kernel-ptr-arg.ll
Reviewers: grosser
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D10779
llvm-svn: 240849
For another 1% speedup on the testcase in PR23865, push the
`ModuleSlotTracker` through to metadata-related printing in
`MachineBasicBlock::print()`.
llvm-svn: 240848