Commit Graph

34718 Commits

Author SHA1 Message Date
Shiva Chen f5938bfbf9 Revert "[DebugInfo] Generate DWARF debug information for labels."
This reverts commit b454fa1b4079b6c0a5b1565982d16516385838d7.

llvm-svn: 337812
2018-07-24 06:17:45 +00:00
Shiva Chen d6b2cdf9d4 [DebugInfo] Generate DWARF debug information for labels.
There are two forms for label debug information in DWARF format.

1. Labels in a non-inlined function:

DW_TAG_label
  DW_AT_name
  DW_AT_decl_file
  DW_AT_decl_line
  DW_AT_low_pc

2. Labels in an inlined function:

DW_TAG_label
  DW_AT_abstract_origin
  DW_AT_low_pc

We will collect label information from DBG_LABEL. Before every DBG_LABEL,
we will generate a temporary symbol to denote the location of the label.
The symbol could be used to get DW_AT_low_pc afterwards. So, we create a
mapping between 'inlined label' and DBG_LABEL MachineInstr in DebugHandlerBase.
The DBG_LABEL in the mapping is used to query the symbol before it.

The AbstractLabels in DwarfCompileUnit is used to process labels in inlined
functions.

We also keep a mapping between scope and labels in DwarfFile to help to
generate correct tree structure of DIEs.
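
For illustration (a hypothetical C-style function, not from the patch), a source-level label like
'done' below is the kind of entity described by a DW_TAG_label DIE, with DW_AT_low_pc taken from
the temporary symbol emitted before the corresponding DBG_LABEL:

    void f(int x) {
      if (x == 0)
        goto done;
      /* ... */
    done:   /* described by DW_TAG_label: name, decl file/line, low_pc */
      return;
    }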

Differential Revision: https://reviews.llvm.org/D45556

Patch by Hsiangkai Wang.

llvm-svn: 337799
2018-07-24 02:22:55 +00:00
Andres Freund 376a3d3659 Add PerfJITEventListener for perf profiling support.
This new JIT event listener supports generating profiling data for
the linux 'perf' profiling tool, allowing it to generate function and
instruction level profiles.

Currently this functionality is not enabled by default, but must be
enabled with LLVM_USE_PERF=yes.  Given that the listener has no
dependencies, it might be sensible to enable by default once the
initial issues have been shaken out.

I followed existing precedent in registering the listener by default
in lli. Should there be a decision to enable this by default on linux,
that should probably be changed.

Please note that until https://reviews.llvm.org/D47343 is resolved,
using this functionality with mcjit rather than orcjit will not
reliably work.

Disregarding the previous comment, here's an example:

$ cat /tmp/expensive_loop.c

#include <stdbool.h>
#include <stdint.h>

bool stupid_isprime(uint64_t num)
{
        if (num == 2)
                return true;
        if (num < 1 || num % 2 == 0)
                return false;
        for(uint64_t i = 3; i < num / 2; i+= 2) {
                if (num % i == 0)
                        return false;
        }
        return true;
}

int main(int argc, char **argv)
{
        int numprimes = 0;

        for (uint64_t num = argc; num < 100000; num++)
        {
                if (stupid_isprime(num))
                        numprimes++;
        }

        return numprimes;
}

$ clang -ggdb -S -c -emit-llvm /tmp/expensive_loop.c -o /tmp/expensive_loop.ll

$ perf record -o perf.data -g -k 1 ./bin/lli -jit-kind=mcjit /tmp/expensive_loop.ll 1

$ perf inject --jit -i perf.data -o perf.jit.data

$ perf report -i perf.jit.data
-   92.59%  lli      jitted-5881-2.so                   [.] stupid_isprime
     stupid_isprime
     main
     llvm::MCJIT::runFunction
     llvm::ExecutionEngine::runFunctionAsMain
     main
     __libc_start_main
     0x4bf6258d4c544155
+    0.85%  lli      ld-2.27.so                         [.] do_lookup_x

And line-level annotations also work:
       │              for(uint64_t i = 3; i < num / 2; i+= 2) {
       │1 30:   movq   $0x3,-0x18(%rbp)
  0.03 │1 38:   mov    -0x18(%rbp),%rax
  0.03 │        mov    -0x10(%rbp),%rcx
       │        shr    $0x1,%rcx
  3.63 │     ┌──cmp    %rcx,%rax
       │     ├──jae    6f
       │     │                if (num % i == 0)
  0.03 │     │  mov    -0x10(%rbp),%rax
       │     │  xor    %edx,%edx
 89.00 │     │  divq   -0x18(%rbp)
       │     │  cmp    $0x0,%rdx
  0.22 │     │↓ jne    5f
       │     │                        return false;
       │     │  movb   $0x0,-0x1(%rbp)
       │     │↓ jmp    73
       │     │        }
  3.22 │1 5f:│↓ jmp    61
       │     │        for(uint64_t i = 3; i < num / 2; i+= 2) {

Subscribers: mgorny, llvm-commits

Differential Revision: https://reviews.llvm.org/D44892

llvm-svn: 337789
2018-07-24 00:54:06 +00:00
Wolfgang Pieb 439801ba1d [DWARF v5] Refactor range lists dumping by using a more generic way of handling tables of lists.
The intent is to use it for location list tables as well. Change is almost NFC with the exception
of the spelling of some strings used during dumping (all lowercase now).

Reviewer: JDevlieghere

Differential Revision: https://reviews.llvm.org/D49500

llvm-svn: 337763
2018-07-23 22:37:17 +00:00
Martin Storsjo db42d51ee3 [MC] Add a separate flag for skipping comdat constant sections for MinGW. NFC.
This actually has nothing to do with the associative comdat sections
that aren't supported by GNU binutils ld.

Clarify the comments from SVN r335918 and use a separate flag for it.

Differential Revision: https://reviews.llvm.org/D49645

llvm-svn: 337757
2018-07-23 22:15:25 +00:00
George Burgess IV b00fb46479 [DebugCounters] Keep track of total counts
This patch makes debug counters keep track of the total number of times
we've called `shouldExecute` for each counter, so it's easier to build
automated tooling on top of these.

A patch to print these counts is coming soon.
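
As a rough sketch (the counter, pass, and helper names here are illustrative), a transformation is
typically guarded like this; with this patch, every call to shouldExecute also bumps the counter's
total count, whether or not the transformation runs:

    #include "llvm/Support/DebugCounter.h"
    using namespace llvm;

    DEBUG_COUNTER(MyPassCounter, "my-pass-transform",
                  "Controls which transformations MyPass performs");

    void maybeTransform() {   // hypothetical pass helper
      if (DebugCounter::shouldExecute(MyPassCounter)) {
        // perform the transformation; skipped executions still contribute
        // to the total count introduced by this patch
      }
    }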

Patch by Zhizhou Yang!

Differential Revision: https://reviews.llvm.org/D49560

llvm-svn: 337748
2018-07-23 21:49:36 +00:00
Sam McCall 4bb7883d09 [Support] Add a UniqueStringSaver: like StringSaver, but deduplicating.
Summary: Clarify contract of StringSaver (it null-terminates, callers rely on it).
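
A brief usage sketch (assuming the usual BumpPtrAllocator-backed construction):

    #include "llvm/Support/Allocator.h"
    #include "llvm/Support/StringSaver.h"
    using namespace llvm;

    void example() {
      BumpPtrAllocator Alloc;
      UniqueStringSaver Saver(Alloc);
      StringRef A = Saver.save("hello");  // copied into the allocator, null-terminated
      StringRef B = Saver.save("hello");  // deduplicated: same storage as A
      (void)A; (void)B;
    }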

Reviewers: hokein

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D49596

llvm-svn: 337677
2018-07-23 10:44:40 +00:00
Xin Tong 023e25ad14 [ORE] Move loop invariant ORE checks outside the PM loop.
Summary:
This takes 22ms out of ~20s compiling sqlite3.c because we call it
for every unit of compilation and every pass.

Reviewers: paquette, anemet

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D49586

llvm-svn: 337654
2018-07-22 05:27:41 +00:00
Chen Zheng 69bb064539 [InstrSimplify] fold sdiv if two operands are negated and non-overflow
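
For illustration, a sketch of the simplification (the fold applies only when neither negation can
overflow, e.g. both are known nsw):

    // (0 - X) sdiv (0 - Y)  ==>  X sdiv Y, when neither negation overflows.
    int div_of_negations(int X, int Y) {
      return (-X) / (-Y);   // simplifiable to X / Y under that condition
    }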
Differential Revision: https://reviews.llvm.org/D49382

llvm-svn: 337642
2018-07-21 12:27:54 +00:00
Aaron Smith 23c2d1c15a [DebugInfo] Add a new DI flag to record if a C++ record is a trivial type
Summary:
This flag is used when emitting debug info and is needed to initialize subprogram and member function attributes (function options) for Codeview. These function options are used to create an accurate compiler type for UDT symbols (class/struct/union) from PDBs.

It is not easy to determine if a C++ record is trivial or not based on the current DICompositeType flags and other accessible debug information from Codeview. For example, without this flag the metadata for a non-trivial C++ record with a user-defined ctor and a trivial one with a defaulted ctor are the same.

    struct S { S(); };
    struct S { S() = default; };

This change introduces a new DI flag and corresponding clang::CXXRecordDecl::isTrivial method to set the flag in the frontend.

Reviewers: rnk, zturner, llvm-commits, dblaikie, aleksandr.urakov, deadalnix

Reviewed By: rnk

Subscribers: asmith, probinson, aprantl, JDevlieghere

Differential Revision: https://reviews.llvm.org/D45122

llvm-svn: 337641
2018-07-21 05:42:13 +00:00
Lang Hames 960246dbee [ORC] Re-apply r336760 with fixes.
llvm-svn: 337637
2018-07-21 00:12:05 +00:00
Lang Hames a48d108353 Re-apply r337595 with fix for LLVM_ENABLE_THREADS=Off.
llvm-svn: 337626
2018-07-20 22:22:19 +00:00
Martin Storsjo a6ffc9c8df [COFF] Adjust how we flag weak externals
This fixes PR36096.

Originally based on a patch by Martell Malone.

Differential Revision: https://reviews.llvm.org/D44357

llvm-svn: 337613
2018-07-20 20:48:29 +00:00
Reid Kleckner 7ed83591c2 Revert r337595 "[ORC] Add new symbol lookup methods to ExecutionSessionBase in preparation for"
Breaks the build with LLVM_ENABLE_THREADS=OFF.

llvm-svn: 337608
2018-07-20 20:20:45 +00:00
Lang Hames 732d116a96 [ORC] Add new symbol lookup methods to ExecutionSessionBase in preparation for
deprecating SymbolResolver and AsynchronousSymbolQuery.

Both lookup overloads take a VSO search order to perform the lookup. The first
overload is non-blocking and takes OnResolved and OnReady callbacks. The second
is blocking, takes a boolean flag to indicate whether to wait until all symbols
are ready, and returns a SymbolMap. Both overloads take a RegisterDependencies
function to register symbol dependencies (if any) on the query.

llvm-svn: 337595
2018-07-20 18:31:53 +00:00
Lang Hames d4df0f1733 [ORC] Simplify VSO::lookupFlags to return the flags map.
This discards the unresolved symbols set and returns the flags map directly
(rather than mutating it via the first argument).

The unresolved symbols result made it easy to chain lookupFlags calls, but such
chaining should be rare to non-existent (especially now that symbol resolvers
are being deprecated), so the simpler method signature is preferable.

llvm-svn: 337594
2018-07-20 18:31:52 +00:00
Lang Hames fd0c1e7169 [ORC] Replace SymbolResolvers in the new ORC layers with search orders on VSOs.
A search order is a list of VSOs to be searched linearly to find symbols. Each
VSO now has a search order that will be used when fixing up definitions in that
VSO. Each VSO's search order defaults to just that VSO itself.

This is a first step towards removing symbol resolvers from ORC altogether. In
practice symbol resolvers tended to be used to implement a search order anyway,
sometimes with additional programmatic generation of symbols. Now that VSOs
support programmatic generation of definitions via fallback generators, search
orders provide a cleaner way to achieve the desired effect (while removing a lot
of boilerplate).

llvm-svn: 337593
2018-07-20 18:31:50 +00:00
Zachary Turner a219bae1b7 Fix linker failure with Any.
This is due to a difference in MS ABI which is why I didn't see
it locally.  The included fix should work on all compilers.

llvm-svn: 337588
2018-07-20 17:50:53 +00:00
Zachary Turner f435a7eada Add a Microsoft Demangler.
This adds initial support for a demangling library (LLVMDemangle)
and tool (llvm-undname) for demangling Microsoft names.  This
doesn't cover 100% of cases and there are some known limitations
which I intend to address in followup patches, at least until such
time that we have (near) 100% test coverage matching up with all
of the test cases in clang/test/CodeGenCXX/mangle-ms-*.
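
For example (a hypothetical mangled name; output elided, and exact formatting may differ):

$ llvm-undname "?func@@YAHD@Z"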

Differential Revision: https://reviews.llvm.org/D49552

llvm-svn: 337584
2018-07-20 17:27:48 +00:00
Philip Pfaffe 2628620b62 [Any] Fix a typo: didn't use the correct argument
llvm-svn: 337583
2018-07-20 17:24:11 +00:00
Alina Sbirlea 20c2962585 [MemorySSA] Add API to update MemoryPhis, following CFG changes.
Summary:
When splitting predecessors in BasicBlockUtils, we create a new block as an immediate predecessor of the original BB, and then connect a given set of predecessors to the new block.
The API in this patch will be used to update MemoryPhis for this CFG change.
If all predecessors are being moved, we move the MemoryPhi directly. Otherwise we create a new MemoryPhi in the NewBB and populate its incoming values, while deleting them from BB's Phi.
[Split from D45299 for easier review]

Reviewers: george.burgess.iv

Subscribers: sanjoy, jlebar, Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D49156

llvm-svn: 337581
2018-07-20 17:13:05 +00:00
Zachary Turner 862439c7cb Change bool_constant to integral_constant.
bool_constant is C++17.
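
For reference, the two spellings (illustrative alias names):

    #include <type_traits>

    // C++17 only:
    //   using Flag = std::bool_constant<true>;
    // Equivalent, and available in C++11/14 as well:
    using Flag = std::integral_constant<bool, true>;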

llvm-svn: 337576
2018-07-20 16:51:55 +00:00
Zachary Turner bf443b9181 Add llvm::Any.
This is analogous to std::any which is only available in C++17.
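
A short usage sketch (assuming the interface mirrors std::any, with any_isa/any_cast style helpers):

    #include "llvm/ADT/Any.h"
    #include <string>
    using namespace llvm;

    void example() {
      Any A = 42;                       // holds an int
      if (any_isa<int>(A)) {            // query the contained type
        int V = any_cast<int>(A);       // typed access to the value
        (void)V;
      }
      Any B = std::string("hello");     // any copyable type can be stored
      (void)B;
    }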

Differential Revision: https://reviews.llvm.org/D48807

llvm-svn: 337573
2018-07-20 16:39:32 +00:00
Pavel Labath 7f7e60694e DwarfDebug: Reduce duplication in addAccel*** methods
Summary:
Each of the four methods had a dozen lines and was doing almost exactly
the same thing: get the appropriate accelerator table kind and insert an
entry into it. I move this common logic to a helper function and make
these methods delegate to it.

This came up in the context of D49493, where I've needed to make adding
a string to a string pool slightly more complicated, and it seemed to
make sense to do it in one place instead of five.

To make this work I've needed to unify the interface of the AccelTable
data types, as some used to store DIE& and others DIE*. I chose to unify
to a reference as that's what the caller uses.

This technically isn't NFC, because it changes the StringPool used for
apple tables in the DWO case (now it uses the main file like DWARF v5
instead of the DWO file). However, that shouldn't matter, as DWO is not
a thing on apple targets (clang frontend simply ignores -gsplit-dwarf).

Reviewers: JDevlieghere, aprantl, probinson

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D49542

llvm-svn: 337562
2018-07-20 15:24:13 +00:00
Duncan P. N. Exon Smith c03b04d533 Reapply "ADT: Shrink size of SmallVector by 8B on 64-bit platforms"
I'm optimistically reverting commit r337511, effectively reapplying
r337504 *without* changes.

The failing bots that had `SmallVector` in the backtrace recovered after
the unrelated commit r337508.  The backtraces looked bogus anyway, with
`SmallVector::size()` calling (e.g.) `ConstantArray::get()`.

Here's the original commit message:

    ADT: Shrink size of SmallVector by 8B on 64-bit platforms

    Represent size and capacity directly as unsigned and calculate
    `end()` using `begin() + size()`.

    This limits the maximum size/capacity of a vector to UINT32_MAX.

    https://reviews.llvm.org/D48518

llvm-svn: 337514
2018-07-20 00:44:58 +00:00
Duncan P. N. Exon Smith 42f20f3c55 Revert "ADT: Shrink size of SmallVector by 8B on 64-bit platforms"
This reverts commit r337504 while I investigate a TSan bot failure that
seems related:

http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-autoconf/builds/26526

    #8 0x000055581f2895d8 (/b/sanitizer-x86_64-linux-autoconf/build/tsan_debug_build/bin/clang-7+0x1eb45d8)
    #9 0x000055581f294323 llvm::ConstantAggrKeyType<llvm::ConstantArray>::create(llvm::ArrayType*) const /b/sanitizer-x86_64-linux-autoconf/build/llvm/lib/IR/ConstantsContext.h:409:0
    #10 0x000055581f294323 llvm::ConstantUniqueMap<llvm::ConstantArray>::create(llvm::ArrayType*, llvm::ConstantAggrKeyType<llvm::ConstantArray>, std::pair<unsigned int, std::pair<llvm::ArrayType*, llvm::ConstantAggrKeyType<llvm::ConstantArray> > >&) /b/sanitizer-x86_64-linux-autoconf/build/llvm/lib/IR/ConstantsContext.h:635:0
    #11 0x000055581f294323 llvm::ConstantUniqueMap<llvm::ConstantArray>::getOrCreate(llvm::ArrayType*, llvm::ConstantAggrKeyType<llvm::ConstantArray>) /b/sanitizer-x86_64-linux-autoconf/build/llvm/lib/IR/ConstantsContext.h:654:0
    #12 0x000055581f2944cb llvm::ConstantArray::get(llvm::ArrayType*, llvm::ArrayRef<llvm::Constant*>) /b/sanitizer-x86_64-linux-autoconf/build/llvm/lib/IR/Constants.cpp:964:0
    #13 0x000055581fa27e19 llvm::SmallVectorBase::size() const /b/sanitizer-x86_64-linux-autoconf/build/llvm/include/llvm/ADT/SmallVector.h:53:0
    #14 0x000055581fa27e19 llvm::SmallVectorImpl<llvm::Constant*>::resize(unsigned long) /b/sanitizer-x86_64-linux-autoconf/build/llvm/include/llvm/ADT/SmallVector.h:347:0
    #15 0x000055581fa27e19 (anonymous namespace)::EmitArrayConstant(clang::CodeGen::CodeGenModule&, clang::ConstantArrayType const*, llvm::Type*, unsigned int, llvm::SmallVectorImpl<llvm::Constant*>&, llvm::Constant*) /b/sanitizer-x86_64-linux-autoconf/build/llvm/tools/clang/lib/CodeGen/CGExprConstant.cpp:669:0

llvm-svn: 337511
2018-07-20 00:09:56 +00:00
Duncan P. N. Exon Smith 6acb8cee05 ADT: Shrink size of SmallVector by 8B on 64-bit platforms
Represent size and capacity directly as unsigned and calculate
`end()` using `begin() + size()`.

This limits the maximum size/capacity of a vector to UINT32_MAX.

https://reviews.llvm.org/D48518
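
As a rough sketch of the layout change (simplified; not the actual class definitions):

    // Before: three pointers, 24 bytes of header on 64-bit targets.
    struct OldSmallVectorBase {
      void *BeginX, *EndX, *CapacityX;
    };

    // After: one pointer plus two 32-bit integers, 16 bytes of header;
    // end() is computed as begin() + size(), and size/capacity are
    // limited to UINT32_MAX.
    struct NewSmallVectorBase {
      void *BeginX;
      unsigned Size, Capacity;
    };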

llvm-svn: 337504
2018-07-19 22:29:47 +00:00
Reid Kleckner 056904599b Work around bug in mingw-w64 GCC 8.1.0
This particular version of GCC seems to break bitfields when a method
appears between two bitfield members.

Personally, I think it's nice to keep bitfields close together so that
it's easy to check how things are packed, so I moved the method after
SubClassData.
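
A schematic of the problematic pattern (illustrative only; the field and method names below are
placeholders, not the actual LLVM declarations):

    struct Example {
      unsigned FieldA : 3;
      void someMethod();      // placing a method here triggered the mingw-w64
                              // GCC 8.1.0 bitfield miscompile
      unsigned FieldB : 5;
    };

    // Workaround: keep the bitfields adjacent and declare someMethod()
    // after FieldB instead.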

Fixes PR38168.

llvm-svn: 337495
2018-07-19 20:32:45 +00:00
Teresa Johnson 28023dbed7 [ThinLTO] Enable ThinLTO WholeProgramDevirt and LowerTypeTests in new PM
Summary:
Enable these passes for CFI and WPD in ThinLTO and LTO with the new pass
manager. Add a couple of tests for both PMs based on the clang tests
tools/clang/test/CodeGen/thinlto-distributed-cfi*.ll, but just test
through llvm-lto2 and not with distributed ThinLTO.

Reviewers: pcc

Subscribers: mehdi_amini, inglorion, eraman, steven_wu, dexonsmith, llvm-commits

Differential Revision: https://reviews.llvm.org/D49429

llvm-svn: 337461
2018-07-19 14:51:32 +00:00
Serge Guelton 7ca1269cd5 Use std::reference_wrapper instead of llvm::ReferenceStorage
Reviewed By: Bigcheese
Differential Revision: https://reviews.llvm.org/D49298

llvm-svn: 337444
2018-07-19 09:24:34 +00:00
Andrea Di Biagio cd8f627c37 [TargetInstPredicate] Add definition of CheckInvalidRegisterOperand.
This should have been part of r337378. I forgot to svn add it before committing
the change.

llvm-svn: 337380
2018-07-18 11:16:31 +00:00
Simon Pilgrim e2c615dca1 Fix -Wdocumentation warning. NFCI.
llvm-svn: 337368
2018-07-18 09:10:18 +00:00
Peter Collingbourne bd9d313d5c CodeGen: Add a target option for emitting .addrsig directives for all address-significant symbols.
Differential Revision: https://reviews.llvm.org/D48143

llvm-svn: 337331
2018-07-17 22:40:08 +00:00
Peter Collingbourne 3e22733698 MC: Implement support for new .addrsig and .addrsig_sym directives.
Part of the address-significance tables proposal:
http://lists.llvm.org/pipermail/llvm-dev/2018-May/123514.html

Differential Revision: https://reviews.llvm.org/D47744

llvm-svn: 337328
2018-07-17 22:17:18 +00:00
Zachary Turner 8a0efd0919 Add some helper functions to the demangle utility classes.
These are all methods that, while not currently used in the
Itanium demangler, are generally useful enough that it's
likely the itanium demangler could find a use for them.  More
importantly, they are all necessary for the Microsoft demangler
which is up and coming in a subsequent patch.  Rather than
combine these into a single monolithic patch, I think it makes
sense to commit this utility code first since it is very simple,
this way it won't detract from the substance of the MS demangler
patch.

llvm-svn: 337316
2018-07-17 19:42:29 +00:00
Florian Hahn 4a53bc63c2 Revert rL337292 due to another MSVC STL problem.
llvm-svn: 337303
2018-07-17 17:12:50 +00:00
Florian Hahn c491c2b955 Recommit r334887: [SmallSet] Add SmallSetIterator.
Spell out destructor, copy/move constructor and assignment operators for
MSVC STL, where set<T>::const_iterator is not trivially copy constructible.

llvm-svn: 337292
2018-07-17 15:24:19 +00:00
whitequark 7c4a074505 [LLVM-C] Add target triple normalization to the C API.
rL333307 was introduced to remove automatic target triple
normalization when calling sys::getDefaultTargetTriple(), arguing
that users of the latter already called Triple::normalize()
if necessary. However, users of the C API currently have no way of
doing target triple normalization.

This patch introduces an LLVMNormalizeTargetTriple function to
the C API which wraps Triple::normalize() and can be used on
the result of LLVMGetDefaultTargetTriple to achieve the same effect.
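
A short usage sketch (the LLVMDisposeMessage calls follow the usual LLVM-C ownership convention and
are an assumption of this sketch):

    #include "llvm-c/Core.h"
    #include "llvm-c/TargetMachine.h"

    void useNormalizedTriple(void) {
      char *Triple = LLVMGetDefaultTargetTriple();
      char *Normalized = LLVMNormalizeTargetTriple(Triple);
      /* ... use Normalized, e.g. when constructing a target machine ... */
      LLVMDisposeMessage(Normalized);
      LLVMDisposeMessage(Triple);
    }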

Differential Revision: https://reviews.llvm.org/D49414

Reviewed By: whitequark

llvm-svn: 337263
2018-07-17 10:57:39 +00:00
Sam Clegg cf2a9e28b1 [WebAssembly] Remove ELF file support.
This support was partial and temporary.  Now that we have
wasm object file support, it's no longer needed.

Differential Revision: https://reviews.llvm.org/D48744

llvm-svn: 337222
2018-07-16 23:09:29 +00:00
Sanjay Patel c71adc8040 [Intrinsics] define funnel shift IR intrinsics + DAG builder support
As discussed here:
http://lists.llvm.org/pipermail/llvm-dev/2018-May/123292.html
http://lists.llvm.org/pipermail/llvm-dev/2018-July/124400.html

We want to add rotate intrinsics because the IR expansion of that pattern is 4+ instructions, 
and we can lose pieces of the pattern before it gets to the backend. Generalizing the operation 
by allowing 2 different input values (plus the 3rd shift/rotate amount) gives us a "funnel shift" 
operation which may also be a single hardware instruction.

Initially, I thought we needed to define new DAG nodes for these ops, and I spent time working 
on that (much larger patch), but then I concluded that we don't need it. At least as a first 
step, we have all of the backend support necessary to match these ops...because it was required. 
And shepherding these through the IR optimizer is the primary concern, so the IR intrinsics are 
likely all that we'll ever need.

There was also a question about converting the intrinsics to the existing ROTL/ROTR DAG nodes
(along with improving the oversized shift documentation). Again, I don't think that's strictly 
necessary (as the test results here prove). That can be an efficiency improvement as a small 
follow-up patch.

So all we're left with is documentation, definition of the IR intrinsics, and DAG builder support. 
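
As a scalar illustration of the funnel-shift-left semantics (a sketch for 32-bit operands; the
function name is illustrative):

    #include <stdint.h>

    // Conceptually: concatenate Hi:Lo into a 64-bit value, shift left by
    // Amt modulo 32, and return the high 32 bits.  A rotate-left is the
    // special case where both inputs are the same value.
    uint32_t funnel_shl32(uint32_t Hi, uint32_t Lo, uint32_t Amt) {
      Amt &= 31;
      if (Amt == 0)
        return Hi;
      return (Hi << Amt) | (Lo >> (32 - Amt));
    }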

Differential Revision: https://reviews.llvm.org/D49242

llvm-svn: 337221
2018-07-16 22:59:31 +00:00
Fangrui Song cb0bab86b3 [CodeGen] Fix inconsistent declaration parameter name
llvm-svn: 337200
2018-07-16 18:51:40 +00:00
Teresa Johnson d68935c5ac Restore "[ThinLTO] Ensure we always select the same function copy to import"
This reverts commit r337081, therefore restoring r337050 (and fix in
r337059), with test fix for bot failure described after the original
description below.

In order to always import the same copy of a linkonce function,
even when encountering it with different thresholds (a higher one then a
lower one), keep track of the summary we decided to import.
This ensures that the backend only gets a single definition to import
for each GUID, so that it doesn't need to choose one.

Move the largest threshold the GUID was considered for import into the
current module out of the ImportMap (which is part of a larger map
maintained across the whole index), and into a new map just maintained
for the current module we are computing imports for. This saves some
memory since we no longer have the thresholds maintained across the
whole index (and throughout the in-process backends when doing a normal
non-distributed ThinLTO build), at the cost of some additional
information being maintained for each invocation of ComputeImportForModule
(the selected summary pointer for each import).

There is an additional map lookup for each callee being considered for
importing; however, this was able to subsume a map lookup in the
Worklist iteration that invokes computeImportForFunction. We also are
able to avoid calling selectCallee if we already failed to import at the
same or higher threshold.

I compared the run time and peak memory for the SPEC2006 471.omnetpp
benchmark (running in-process ThinLTO backends), as well as for a large
internal benchmark with a distributed ThinLTO build (so just looking at
the thin link time/memory). Across a number of runs with and without
this change there was no significant change in the time and memory.

(I tried a few other variations of the change but they also didn't
improve time or peak memory).

The new commit removes a test that no longer makes sense
(Transforms/FunctionImport/hotness_based_import2.ll), as exposed by the
reverse-iteration bot. The test depends on the order of processing the
summary call edges, and actually depended on the old problematic
behavior of selecting more than one summary for a given GUID when
encountered with different thresholds. There was no guarantee even
before that we would eventually pick the linkonce copy with the hottest
call edges, it just happened to work with the test and the old code, and
there was no guarantee that we would end up importing the selected
version of the copy that had the hottest call edges (since the backend
would effectively import only one of the selected copies).

Reviewers: davidxl

Subscribers: mehdi_amini, inglorion, llvm-commits

Differential Revision: https://reviews.llvm.org/D48670

llvm-svn: 337184
2018-07-16 15:30:27 +00:00
Roman Lebedev de506632aa [X86][AArch64][DAGCombine] Unfold 'check for [no] signed truncation' pattern
Summary:

[[ https://bugs.llvm.org/show_bug.cgi?id=38149 | PR38149 ]]

As discussed in https://reviews.llvm.org/D49179#1158957 and later,
the IR for 'check for [no] signed truncation' pattern can be improved:
https://rise4fun.com/Alive/gBf
^ that pattern will be produced by Implicit Integer Truncation sanitizer,
https://reviews.llvm.org/D48958 https://bugs.llvm.org/show_bug.cgi?id=21530
in signed case, therefore it is probably a good idea to improve it.

But the IR-optimal pattern does not lower efficiently, so we want to undo it.

This handles the simple pattern.
There is a second pattern with predicate and constants inverted.

NOTE: we do not check uses here; we always do the transform.

Reviewers: spatel, craig.topper, RKSimon, javed.absar

Reviewed By: spatel

Subscribers: kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D49266

llvm-svn: 337166
2018-07-16 12:44:10 +00:00
Jonas Devlieghere 98062cb396 [AccelTable] Provide DWARF5AccelTableStaticData for dsymutil.
For dsymutil we want to store offsets in the accelerator table entries
rather than DIE pointers. In addition, we need a way to communicate
which CU a DIE belongs to. This patch provides support for both of these
issues.

Differential revision: https://reviews.llvm.org/D49102

llvm-svn: 337158
2018-07-16 10:52:27 +00:00
Alexandros Lamprineas f854ce84c4 [MemorySSAUpdater] Remove deleted trivial Phis from active workset
Bug fix for PR37808. The regression test is a reduced version of the
original reproducer attached to the bug report. As stated in the report,
the problem was that InsertedPHIs was keeping dangling pointers to
deleted Memory-Phis. MemoryPhis are created eagerly and sometimes get
zapped shortly afterwards. I've used WeakVH instead of an expensive
removal operation from the active workset.

Differential Revision: https://reviews.llvm.org/D48372

llvm-svn: 337149
2018-07-16 07:51:27 +00:00
Michael J. Spencer 7bb2767fba Recommit r335794 "Add support for generating a call graph profile from Branch Frequency Info." with fix for removed functions.
llvm-svn: 337140
2018-07-16 00:28:24 +00:00
Andrea Di Biagio ff630c2cdc [llvm-mca][BtVer2] teach how to identify false dependencies on partially written
registers.

The goal of this patch is to improve the throughput analysis in llvm-mca for the
case where instructions perform partial register writes.

On x86, partial register writes are quite difficult to model, mainly because
different processors tend to implement different register merging schemes in
hardware.

When the code contains partial register writes, the IPC (instructions per
cycles) estimated by llvm-mca tends to diverge quite significantly from the
observed IPC (using perf).

Modern AMD processors (at least, from Bulldozer onwards) don't rename partial
registers. Quoting Agner Fog's microarchitecture.pdf:
" The processor always keeps the different parts of an integer register together.
For example, AL and AH are not treated as independent by the out-of-order
execution mechanism. An instruction that writes to part of a register will
therefore have a false dependence on any previous write to the same register or
any part of it."

This patch is a first important step towards improving the analysis of partial
register updates. It changes the semantic of RegisterFile descriptors in
tablegen, and teaches llvm-mca how to identify false dependences in the presence
of partial register writes (for more details: see the new code comments in
include/Target/TargetSchedule.h - class RegisterFile).

This patch doesn't address the case where a write to a part of a register is
followed by a read from the whole register.  On Intel chips, high8 registers
(AH/BH/CH/DH)) can be stored in separate physical registers. However, a later
(dirty) read of the full register (example: AX/EAX) triggers a merge uOp, which
adds extra latency (and potentially affects the pipe usage).
This is a very interesting article on the subject with a very informative answer
from Peter Cordes:
https://stackoverflow.com/questions/45660139/how-exactly-do-partial-registers-on-haswell-skylake-perform-writing-al-seems-to

In future, the definition of RegisterFile can be extended with extra information
that may be used to identify delays caused by merge opcodes triggered by a dirty
read of a partial write.

Differential Revision: https://reviews.llvm.org/D49196

llvm-svn: 337123
2018-07-15 11:01:38 +00:00
Teresa Johnson b78c5d0602 Revert "[ThinLTO] Ensure we always select the same function copy to import"
This reverts commits r337050 and r337059. Caused failure in
reverse-iteration bot that needs more investigation.

llvm-svn: 337081
2018-07-14 01:45:49 +00:00
Vedant Kumar d37e078d39 Fix comments which mixed up 'before' and 'after', NFC
llvm-svn: 337061
2018-07-13 22:39:31 +00:00
Vedant Kumar 47f2844c76 Clarify wording of a doxygen comment, NFC
llvm-svn: 337060
2018-07-13 22:39:29 +00:00