For 64-bit ABIs it is common practice to use relative jump tables with
potentially different relocation bases. As the logic for the jump table
itself doesn't depend on the relocation base, make it easier for targets
to use the generic logic. Start by dropping the now redundant MIPS logic.
Differential Revision: https://reviews.llvm.org/D26578
llvm-svn: 286951
This patch adds the Sched Machine Model for Cortex-R52.
Details of the pipeline are described in comments in the file
ARMScheduleR52.td, included in this patch.
Reviewers: rengolin, jmolloy
Differential Revision: https://reviews.llvm.org/D26500
llvm-svn: 286949
Summary:
This change is preparation for a change that will allow targets to verify that the instructions
they emit meet the predicates they specify. This is useful to ensure that C++
legalization/lowering/instruction-selection doesn't incorrectly select code for a different
subtarget than intended. Such cases are not caught by the integrated assembler when emitting
instructions directly to an object file.
Reviewers: qcolombet
Subscribers: qcolombet, beanz, mgorny, llvm-commits, modocache
Differential Revision: https://reviews.llvm.org/D25614
llvm-svn: 286945
This results in a speed-up of over 6x on sqlite3.
Before:
$ time -p /org/llvm/utils/opt-viewer/opt-viewer.py ./MultiSource/Applications/sqlite3/CMakeFiles/sqlite3.dir/sqlite3.c.opt.yaml html
real 415.07
user 410.00
sys 4.66
After with libYAML:
$ time -p /org/llvm/utils/opt-viewer/opt-viewer.py ./MultiSource/Applications/sqlite3/CMakeFiles/sqlite3.dir/sqlite3.c.opt.yaml html
real 63.96
user 60.03
sys 3.67
I followed these steps to get libYAML working with PyYAML: http://rmcgibbo.github.io/blog/2013/05/23/faster-yaml-parsing-with-libyaml/
llvm-svn: 286942
Summary:
Add basic functionality to support call lowering for X86.
Currently only supports functions which return void and take zero arguments.
Inspired by commit 286573.
Reviewers: ab, qcolombet, t.p.northover
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26593
llvm-svn: 286935
One day we'd like to remove some of this autoupgrade support and it will be easier if we know how long some of it has been around.
Differential Revision: https://reviews.llvm.org/D26321
llvm-svn: 286933
This patch achieves a DWARF parsing speed improvement by having DWARFAbbreviationDeclaration instances know whether they have a fixed byte size. If an abbreviation has a fixed byte size that can be calculated given a DWARFUnit, then parsing a DIE becomes two steps: parse the ULEB128 abbrev code, and then add the constant size to the offset.
This patch also adds a fixed byte size to each DWARFAbbreviationDeclaration::AttributeSpec so that attributes can quickly skip their values if needed, without having to look up the fixed form size.
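Roughly, the fast path then looks like the following sketch (AbbrevDecl, U and Offset are hypothetical variables; getFixedAttributesByteSize is the new API listed under the improvements below):
  // Offset points just past the ULEB128 abbrev code that selected AbbrevDecl.
  if (Optional<size_t> Size = AbbrevDecl->getFixedAttributesByteSize(U))
    Offset += *Size;  // constant-time skip of all attribute values for this DIE
  // Abbreviations without a fixed size still fall back to per-attribute parsing.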
Notable improvements:
- DWARFAbbreviationDeclaration::findAttributeIndex() now returns an Optional<uint32_t> instead of a uint32_t and we no longer have to look for the magic -1U return value
- Optional<uint32_t> DWARFAbbreviationDeclaration::findAttributeIndex(dwarf::Attribute attr) const;
- DWARFAbbreviationDeclaration now has a getAttributeValue() function that extracts an attribute value given a DIE offset that takes advantage of the DWARFAbbreviationDeclaration::AttributeSpec::ByteSize
- bool DWARFAbbreviationDeclaration::getAttributeValue(const uint32_t DIEOffset, const dwarf::Attribute Attr, const DWARFUnit &U, DWARFFormValue &FormValue) const;
- A DWARFAbbreviationDeclaration instance can return a fixed byte size for itself so DWARF parsing is faster:
- Optional<size_t> DWARFAbbreviationDeclaration::getFixedAttributesByteSize(const DWARFUnit &U) const;
- Any functions that used to take a "const DWARFUnit *U" that would crash if U was NULL now take a "const DWARFUnit &U" and are only called with a valid DWARFUnit
Differential Revision: https://reviews.llvm.org/D26567
llvm-svn: 286924
This patch makes it possible to identify object files created by CL.exe
with the /GL option. Such a file contains Microsoft proprietary intermediate
code instead of target machine code, in order to do LTO.
I need this to print out a user-friendly error message from LLD.
Differential Revision: https://reviews.llvm.org/D26645
llvm-svn: 286919
This broke s390x due to a bug in the QueueChannel implementation that led to it
infinite-looping. Disabling it while I look into a fix.
llvm-svn: 286917
Permit specifying the match length (the `-n` or `--bytes` option). The
deprecated `-[length]` form is not supported as an option. This allows the
strings tool to display only strings of at least the specified length rather
than the hardcoded default minimum length of 4.
llvm-svn: 286914
Summary:
UBSAN complains that this is undefined behavior.
We can assume that an empty substring (N==1) always satisfies the conditions,
so std::memcmp will be called only for N > 1 and Str.size() > 0.
Reviewers: ruiu, zturner
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26646
llvm-svn: 286910
Implement the Newton series for the square root, the reciprocal square root,
and the reciprocal natively, using the specialized AArch64 instructions to
perform each series iteration.
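For reference (background, not text from the patch), the standard Newton-Raphson iterations involved are, for an input d:
  x_{n+1} = x_n * (2 - d * x_n)          (reciprocal, converging to 1/d)
  x_{n+1} = x_n * (3 - d * x_n^2) / 2    (reciprocal square root, converging to 1/sqrt(d))
The AArch64 FRECPS and FRSQRTS step instructions compute the (2 - d*x_n) and (3 - d*x_n^2)/2 factors respectively (an assumption based on their documented semantics, not stated in the patch).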
Differential revision: https://reviews.llvm.org/D26518
llvm-svn: 286907
This was causing us to create duplicate metadata on global variables.
Debug info test case by Adrian Prantl, additional test cases by me.
Fixes PR31012.
Differential Revision: https://reviews.llvm.org/D26622
llvm-svn: 286905
Summary:
It's undefined behavior according to UBSan.
Not sure which CL caused the test failures, but it seems writeBytes for an
empty buffer should be OK.
Reviewers: rnk, zturner
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26638
llvm-svn: 286896
This adds support for TSan C++ exception handling, where we need to add extra calls to __tsan_func_exit when a function is exited via exception mechanisms. Otherwise the shadow stack gets corrupted (leaked). This patch moves and enhances the existing implementation of EscapeEnumerator that finds all possible function exit points, and adds extra EH cleanup blocks where needed.
Differential Revision: https://reviews.llvm.org/D26177
llvm-svn: 286893
The philosophy of the error checking in libObject for Mach-O files
is that the constructor will check the load commands, so that for their
tables the offsets and sizes are properly contained in the file.
But there is no checking of the entries of any of the tables.
For the contents of the tables themselves, the methods accessing
the contents of the entries return errors as needed. In some
cases this however makes it difficult or cumbersome to produce
a good error message which would include the tool name, file name,
archive member, and name of the architecture of a slice of a universal file
the error occurred in.
So the idea is that there will be a method to check a table which can
be called up front, before using it, allowing a good error message
to be produced before a table is used. And if only verification of
the Mach-O file and its tables is wanted, a new possible method
checkAllTables() could be added to call all of the methods to
check all the tables, once such methods exist.
checkSymbolTable() is the first of these methods to check
one of the Mach-O file tables. This method will initially be used in
llvm-objdump's DisassembleMachO() routine before it gets the
section and symbol information. Currently, if there are problems with
the symbol table, the error is first encountered by the
bool operator() in the SymbolSorter() struct which is passed to
std::sort(). In this case there is no context as to the file name or
the symbol, which results in a poor error message:
LLVM ERROR: truncated or malformed object (bad string index: 22 for symbol at index 1)
With the added call to the checkSymbolTable() method the
error message includes the tool name and file name:
llvm-objdump: 'macho-invalid-symbol-strx': truncated or malformed object (bad string table index: 22 past the end of string table, for symbol at index 1)
llvm-svn: 286887
For example we were producing
push {r8, r10, r11, r4, r5, r7, lr}
This is misleading (r4, r5 and r7 are actually pushed before the rest), and
other components (stack folding recently) often forget to deal with the extra
complexity coming from the different order, leading to miscompiles. Finally, we
warn about our own code in -no-integrated-as mode without this, which is really
not a good idea.
Fixed usage of std::sort so that we (hopefully) use instantiations that
actually exist in GCC 4.8.
llvm-svn: 286881
Summary:
Replace a store of a splat of zeros to a vector with scalar stores of WZR/XZR.
The load/store optimizer pass will merge them into store pair stores.
This should be better than a movi to create the vector zero followed by
a vector store if the zero constant is not re-used, since one
instruction and one register live range will be removed.
For example, the final generated code should be:
stp xzr, xzr, [x0]
instead of:
movi v0.2d, #0
str q0, [x0]
Reviewers: t.p.northover, mcrosier, MatzeB, jmolloy
Subscribers: aemerson, rengolin, llvm-commits
Differential Revision: https://reviews.llvm.org/D26561
llvm-svn: 286875
Summary:
We have always speculatively promoted all renamable local values
(except const non-address taken variables) for both the exporting
and importing module. We would then internalize them back based on
the ThinLink results if they weren't actually exported. This is
inefficient, and results in unnecessary renames. It also meant we
had to check the non-renamability of a value in the summary, which
was already checked during function importing analysis in the ThinLink.
Made renameModuleForThinLTO (which does the promotion/renaming) instead
use the index when exporting, to avoid unnecessary renames/promotions.
For importing modules, we can simply promote all values, as any local
we import is by definition exported and needs promotion.
This required changes to the method used by the FunctionImport pass
(only invoked from 'opt' for testing) and when invoked from llvm-link,
since neither does a ThinLink. We simply conservatively mark all locals
in the index as promoted, which preserves the current aggressive
promotion behavior.
I also needed to change an llvm-lto based test where we had previously
been aggressively promoting values that weren't importable (aliasees),
but now will not promote.
Reviewers: mehdi_amini
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26467
llvm-svn: 286871
For example we were producing
push {r8, r10, r11, r4, r5, r7, lr}
This is misleading (r4, r5 and r7 are actually pushed before the rest), and
other components (stack folding recently) often forget to deal with the extra
complexity coming from the different order, leading to miscompiles. Finally, we
warn about our own code in -no-integrated-as mode without this, which is really
not a good idea.
llvm-svn: 286866
Add an intrinsic to expose the 'VSX Scalar Convert Half-Precision to
Single-Precision' instruction.
Differential Revision: https://reviews.llvm.org/D26536
llvm-svn: 286862
Summary:
Extend image intrinsics to support data types of V1F32 and V2F32.
TODO: we should define a mapping table to change the opcode for the data type
of V2F32 when just one channel is active, even though such cases should be
very rare.
Reviewers: tstellarAMD
Differential Revision: http://reviews.llvm.org/D26472
llvm-svn: 286860
Darwin's backtrace() function does not work with sigaltstack (which was
enabled when available with r270395) — it does a sanity check to make
sure that the current frame pointer is within the expected stack area
(which it is not when using an alternate stack) and gives up otherwise.
The alternative of _Unwind_Backtrace seems to work fine on macOS, so use
that when backtrace() fails. Note that we then use backtrace_symbols_fd()
with the addresses from _Unwind_Backtrace, but I’ve tested that and it
also seems to work fine. rdar://problem/28646552
llvm-svn: 286851
This restores the rest of r286297 (part was restored in r286475).
Specifically, it restores the part requiring adding a dependency from
the Analysis to Object library (downstream use changed to correctly
model split BitReader vs BitWriter libraries).
Original description of this part of patch follows:
Module level asm may also contain defs of values. We need to prevent
export of any refs to local values defined in module level asm (e.g. a
ref in normal IR), since that also requires renaming/promotion of the
local. To do that, the summary index builder looks at all values in the
module level asm string that are not marked Weak or Global, which is
exactly the set of locals that are defined. A summary is created for
each of these local defs and flagged as NoRename.
This required adding handling to the BitcodeWriter to look at GV
declarations to see if they have a summary (rather than skipping them
all).
Finally, added an assert to IRObjectFile::CollectAsmUndefinedRefs to
ensure that an MCAsmParser is available, otherwise the module asm parse
would silently fail. Initialized the asm parser in the opt tool for use
in testing this fix.
Fixes PR30610.
llvm-svn: 286844
The stack slot coloring pass removes a store that is followed by a load
that deals with the same stack slot. The function isLoadFromStackSlot
is supposed to consider the loads that have no side effects. This
patch fixes the issue by removing the unsafe loads from this function.
E.g.:
%vreg0<def> = L2_loadruh_io <fi#15>, 0
S2_storeri_io <fi#15>, 0, %vreg0
In this case, we load an unsigned extended half word and store it back to
the same stack slot. The stack slot coloring pass considers it safe to remove
the store. This patch marks all the non-vector byte and half word loads as
unsafe.
llvm-svn: 286843
The existing logic was to discard any symbols representing function template
instantiations, as the definitions were assumed to be inline. But there are
three explicit specializations of clang::Type::getAs that are only defined in
Clang's lib/AST/Type.cpp, and at least the plugin used by the LibreOffice build
(https://wiki.documentfoundation.org/Development/Clang_plugins) uses those
functions.
Differential Revision: https://reviews.llvm.org/D26455
llvm-svn: 286841
Summary:
The change in r285513 to prevent exporting of locals used in
inline asm added all locals in the llvm.used set to the reference
set of functions containing inline asm. Since these locals were marked
NoRename, this automatically prevented importing of the function.
Unfortunately, this caused an explosion in the summary reference lists
in some cases. In my particular example, it happened for a large protocol
buffer generated C++ file, where many of the generated functions
contained an inline asm call. It was exacerbated when doing a ThinLTO
PGO instrumentation build, where the PGO instrumentation included
thousands of private __profd_* values that were added to llvm.used.
We really only need to include a single llvm.used local (NoRename) value
in the reference list of a function containing inline asm to block it from
being imported. However, it seems cleaner to add a flag to the summary
that explicitly describes this situation, which is what this patch does.
Reviewers: mehdi_amini
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26402
llvm-svn: 286840
Add explicit v16i16/v32i8 ADD/SUB costs, matching the costs of v4i64/v8i32 - they were missing for some reason.
This has side effects on the LV max bandwidth tests (AVX1 now prefers 128-bit vectors vs AVX2, which still prefers 256-bit).
llvm-svn: 286832
Also,
Revert "test: remove the archive before modifying it"
Revert "test: explicitly use gnu format"
This reverts commits r286778, r286729 and r286767, as they are randomly failing
on many bots (AArch64, x86_64).
llvm-svn: 286820
When calculating the cost of a call instruction we were applying a heuristic penalty as well as the cost of the instruction itself.
However, when calculating the benefit from inlining we weren't discounting the equivalent penalty for the call instruction that would be removed! This caused skew in the calculation and meant we wouldn't inline in the following, trivial case:
int g() {
  h();
}
int f() {
  g();
}
llvm-svn: 286814
Summary:
Unfolding selects was previously done with the help of a vector
of pointers that was then sorted to be able to remove duplicates.
As this sorting depends on the memory addresses, it was
non-deterministic. A SetVector is used now so that duplicates are
removed without the need of sorting first.
Reviewers: mgrang, efriedma
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26450
llvm-svn: 286807
Summary:
This patch adds explicit `(void)` casts to discarded `release()` calls to suppress -Wunused-result.
This patch fixes *all* warnings that are generated as a result of [applying `[[nodiscard]]` within libc++](https://reviews.llvm.org/D26596).
Similar fixes were applied to Clang in r286796.
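For illustration, each fix is of this form (a sketch with a made-up variable, not code from the patch; at the real call sites ownership has already been transferred elsewhere):
  std::unique_ptr<int> Ptr(new int(42));
  (void)Ptr.release();  // the (void) cast silences -Wunused-result / [[nodiscard]]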
Reviewers: chandlerc, dberris
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26598
llvm-svn: 286797
Only attempt to demangle symbols which have the Itanium C++ prefix of `_Z`.
This ensures that we do not treat arbitrary symbol names as mangled names. We
would previously treat a C function `f` as a mangled name and decode that to
`float` incorrectly.
While it is easy to add tests for this, Mehdi recommended against introducing
tests for the demangler as libc++abi should cover the testing.
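The gist of the check is along these lines (an illustrative sketch, with Name standing in for the symbol read from the input; not the literal code from the patch):
  if (Name.startswith("_Z")) {
    // Looks like an Itanium-mangled name: hand it to the demangler.
  } else {
    // Plain symbol such as the C function "f": print it back unchanged instead
    // of decoding it as "float".
  }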
llvm-svn: 286795
-Don't print the 'x' suffix for the 128-bit reg/mem VEX encoded instructions in Intel syntax. This is consistent with the EVEX versions.
-Don't print the 'y' suffix for the 256-bit reg/reg VEX encoded instructions in Intel or AT&T syntax. This is consistent with the EVEX versions.
-Allow the 'x' and 'y' suffixes to be used for the reg/mem forms when we're assembling using Intel syntax.
-Allow the 'x' and 'y' suffixes on the reg/reg EVEX encoded instructions in Intel or AT&T syntax. This is consistent with what VEX was already allowing.
This should fix at least some of PR28850.
llvm-svn: 286787
`shl nsw i8 1, i8 8` is poison, but `mul i8 1, i8 128` is not.
This was discussed previously here:
http://lists.llvm.org/pipermail/llvm-dev/2015-April/084195.html. From
the discussion, it was not clear which semantics we want for `shl`, but
for now at least make the language reference more accurate.
llvm-svn: 286785
`c++filt` when given no arguments runs as a REPL, decoding each line as a
decorated name. Unify the test structure to be more uniform, with the tests for
llvm-cxxfilt living under test/tools/llvm-cxxfilt.
llvm-svn: 286777
This avoids the nasty problems caused by using
memory instructions that read the exec mask while
spilling / restoring registers used for control flow
masking, but only on VI, where these instructions were added.
This always uses the scalar stores when enabled currently,
but it may be better to still try to spill to a VGPR
and use this on the fallback memory path.
The cache also needs to be flushed before wave termination
if a scalar store is used.
llvm-svn: 286766
These will be used to replace the masked intrinsics so that InstCombineCalls can optimize the AVX-512 variable shifts the same way it does for AVX2.
llvm-svn: 286754
All existing callers were manually extracting information out of an existing
GEP instruction and passing it to getGEPExpr(). Simplify the interface by
changing it to take a GEPOperator instead.
llvm-svn: 286751
Until we have handling for ignoring unloaded sections, simplify the logic to
the point of triviality. This fixes the scanning of archives, particularly when
embedded in archives.
llvm-svn: 286727
After this I'll add the unmasked intrinsics to InstCombineCalls to finish making our handling of these types of shuffles consistent between AVX-512 and the legacy intrinsics.
llvm-svn: 286725
Clear cross-target test dependencies when using LLVM_OCAML_OUT_OF_TREE,
in order to make it possible to run check-llvm-bindings-ocaml without
rebuilding the whole LLVM.
Differential Revision: https://reviews.llvm.org/D26580
llvm-svn: 286720
Summary:
This is the first step towards being able to add the avx512 shift by immediate intrinsics to InstCombineCalls, where we already support the sse2 and avx2 intrinsics. We need the unmasked versions so we can avoid having to teach InstCombineCalls that it would need to insert selects sometimes. Instead we'll just add the selects around the new intrinsics in the frontend.
This change should also enable the shift by i32 intrinsics to take a non-constant shift value just like the avx2 and sse intrinsics. This will enable us to fix PR30691 once we update clang.
Next I'll switch clang to use the new builtins. Then we'll come back to the backend and remove/autoupgrade the old intrinsics. Then I'll work on the same series for variable shifts.
Reviewers: RKSimon, zvi, delena
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26333
llvm-svn: 286711
Summary: VALIGND and VALIGNQ are similar to PALIGNR but instead of working on a 128-bit lane they work on the entire vector register. This change leverages the shuffle rotate detection code used for PALIGNR to detect these cases.
Reviewers: delena, RKSimon
Subscribers: Farhana, llvm-commits
Differential Revision: https://reviews.llvm.org/D26297
llvm-svn: 286709
return types.
This class allows user provided handlers to return either error-wrapped types
or plain types. In the latter case, the plain type is wrapped with a success
value of Error or Expected<T> type to fit it into the rest of the serialization
machinery.
This patch allows us to remove the RPC unit-test workaround added in r286646.
llvm-svn: 286701
This introduces a new type-safe general purpose formatting
library. It provides compile-time type safety, does not require
a format specifier (since the type is deduced), and provides
mechanisms for extending the format capability to user defined
types, and overriding the formatting behavior for existing types.
This patch additionally adds documentation for the API to the
LLVM programmer's manual.
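A small usage sketch (assuming the formatv entry point this library exposes; see the programmer's manual section added by the patch for the authoritative documentation):
  #include "llvm/Support/FormatVariadic.h"
  // Argument types are deduced, so no printf-style conversion specifier is needed.
  std::string S = llvm::formatv("{0} of {1} tests passed", 12, 13).str();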
Mailing List Thread:
http://lists.llvm.org/pipermail/llvm-dev/2016-October/105836.html
Differential Revision: https://reviews.llvm.org/D25587
llvm-svn: 286682
This patch defines a new function to add a SectionContribs stream
to a PDB file. Unlike SectionMap, SectionContribs contains a list
of input sections as opposed to output sections.
Note that this patch needs improving because currently we do not
set the Module field in SectionContribs entries. In a follow-up patch,
I'll add Modules and then fix this.
Differential Revision: https://reviews.llvm.org/D26210
llvm-svn: 286677
Summary:
This pass was assuming that when a PHI instruction defined a register
used by another PHI instruction, the defining instruction would
be legalized before the using instruction.
This assumption was causing the pass to not legalize some PHI nodes
within divergent flow-control.
This fixes a bug that was uncovered by r285762.
Reviewers: nhaehnle, arsenm
Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, tony-tye, llvm-commits
Differential Revision: https://reviews.llvm.org/D26303
llvm-svn: 286676
When providing the project directory to the merge script, print it out in the
commit instructions instead of the default project directory.
llvm-svn: 286675
This implements a function annotation that disables TSan checking for the
function at run time. The benefit over __attribute__((no_sanitize("thread")))
is that the accesses within the callees will also be suppressed.
The motivation for this attribute is a guarantee given by the Objective-C
language that the calls to the reference count decrement and object
deallocation will be synchronized. To model this properly, we would need
to intercept all ref count decrement calls (which are very common in ObjC
due to use of ARC) and also every single message send. Instead, we propose
to just ignore all accesses made from within dealloc at run time. The main
downside is that this still does not introduce any synchronization, which
means we might still report false positives if the code that relies on this
synchronization is not executed from within dealloc. However, we have not seen
this in practice so far and think these cases will be very rare.
Differential Revision: https://reviews.llvm.org/D25858
llvm-svn: 286663
This is PR28376.
Unfortunately given the current structure of optimization diagnostics we
lack the capability to tell whether the user has
passed -Rpass-analysis=loop-vectorize since this is local to the
front-end (BackendConsumer::OptimizationRemarkHandler).
So rather than printing this even if the user has already
passed -Rpass-analysis, this patch just punts and stops recommending
this option. I don't think that getting this right is worth the
complexity.
Differential Revision: https://reviews.llvm.org/D26563
llvm-svn: 286662
This is a temporary fix: The right solution is to make sure addHandler can
support mutable lambdas. I'll add that in a follow-up patch.
llvm-svn: 286661
The DAG mutators in the scheduler cannot really remove DAG nodes, as
additional analysis information such as ScheduleDAGTopologicalSort is
already computed at this point and relies on a fixed number of DAG nodes.
Alleviate the missing removal with a new flag: Setting the new skip
flag on a node ignores it during scheduling.
llvm-svn: 286655
Push VRegUses/collectVRegUses() down the class hierarchy towards its
only user ScheduleDAGMILive.
NFCI: The initialization of the map happens at a later point but that
should not matter.
This is in preparation to allow DAG mutators to merge nodes, which
relies on this map getting computed later.
llvm-svn: 286654
This is a follow-up on the recent refactoring of the FunctionMerge pass.
It should fix a failure of the new FunctionComparator unittest when compiling with MSVC.
llvm-svn: 286648
return type.
This should be fixed permanently by having the RPCUtils header recognize the
ErrorSuccess type. I'll commit that in a follow up patch.
llvm-svn: 286646
This patch corresponds to review:
https://reviews.llvm.org/D26480
Adds all the intrinsics used for various permute builtins that will
be added to altivec.h.
llvm-svn: 286638
When a function pointer is replaced with a jumptable pointer, a special
case is needed to preserve the semantics of extern_weak functions.
Since a jumptable entry cannot be extern_weak, we emulate that
behaviour by replacing all references to F (the extern_weak function)
with the following expression: F != nullptr ? JumpTablePtr : nullptr.
Extra special care is needed for global initializers, since most (or
probably all) backends cannot lower an initializer that includes
this kind of constant expression. Initializers like that are replaced
with a global constructor (i.e. a runtime initializer).
llvm-svn: 286636
This is pure refactoring. NFC.
This change moves the FunctionComparator (together with the GlobalNumberState
utility) in to a separate file so that it can be used by other passes.
For example, the SwiftMergeFunctions pass in the Swift compiler:
https://github.com/apple/swift/blob/master/lib/LLVMPasses/LLVMMergeFunctions.cpp
Details of the change:
*) The big part is just moving code out of MergeFunctions.cpp into FunctionComparator.h/cpp
*) Make FunctionComparator member functions protected (instead of private)
so that a derived comparator class can use them.
The following refactoring helps to share code between the base FunctionComparator
class and a derived class:
*) Add a beginCompare() function
*) Move some basic function property comparisons into a separate function compareSignature()
*) Do the GEP comparison inside cmpOperations() which now has a new
needToCmpOperands reference parameter
https://reviews.llvm.org/D25385
llvm-svn: 286632
The functions getBitcodeTargetTriple(), isBitcodeContainingObjCCategory(),
getBitcodeProducerString() and hasGlobalValueSummary() now return errors
via their return value rather than via the diagnostic handler.
To make this work, re-implement these functions using non-member functions
so that they can be used without the LLVMContext required by BitcodeReader.
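A minimal sketch of the new calling convention (the Expected-based shape and the Buf variable are illustrative assumptions, not copied from the patch):
  // Errors now come back through the return value instead of a diagnostic handler.
  Expected<std::string> TripleOrErr = getBitcodeTargetTriple(Buf);
  if (!TripleOrErr)
    return TripleOrErr.takeError();
  std::string Triple = *TripleOrErr;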
Differential Revision: https://reviews.llvm.org/D26532
llvm-svn: 286623
(1) Add support for function key negotiation.
The previous version of the RPC required both sides to maintain the same
enumeration for functions in the API. This means that any version skew between
the client and server would result in communication failure.
With this version of the patch functions (and serializable types) are defined
with string names, and the derived function signature strings are used to
negotiate the actual function keys (which are used for efficient call
serialization). This allows clients to connect to any server that supports a
superset of the API (based on the function signatures it supports).
(2) Add a callAsync primitive.
The callAsync primitive can be used to install a return value handler that will
run as soon as the RPC function's return value is sent back from the remote.
(3) Launch policies for RPC function handlers.
The new addHandler method, which installs handlers for RPC functions, takes two
arguments: (1) the handler itself, and (2) an optional "launch policy". When the
RPC function is called, the launch policy (if present) is invoked to actually
launch the handler. This allows the handler to be spawned on a background
thread, or added to a work list. If no launch policy is used, the handler is run
on the server thread itself. This should only be used for short-running
handlers, or entirely synchronous RPC APIs.
(4) Zero cost cross type serialization.
You can now define serialization from any type to a different "wire" type. For
example, this allows you to call an RPC function that's defined to take a
std::string while passing a StringRef argument. If a serializer from StringRef
to std::string has been defined for the channel type this will be used to
serialize the argument without having to construct a std::string instance.
This allows buffer reference types to be used as arguments to RPC calls without
requiring a copy of the buffer to be made.
llvm-svn: 286620
Summary:
Fix off-by-one indexing error in loop checking that inserted value was a
splat vector.
Add code to check that INSERT_VECTOR_ELT nodes constructing the splat
vector have the expected constant index values.
Reviewers: t.p.northover, jmolloy, mcrosier
Subscribers: aemerson, llvm-commits, rengolin
Differential Revision: https://reviews.llvm.org/D26409
llvm-svn: 286616
The current implementation is emitting a global constant that happens
to evaluate to the same bytes + relocation as a jump instruction on
X86. This does not work for PIE executables and shared libraries
though, because we end up with a wrong relocation type. And it has no
chance of working on ARM/AArch64, which use different relocation types
for jump instructions (R_ARM_JUMP24) that are never generated for
data.
This change replaces the constant with module-level inline assembly
followed by a hidden declaration of the jump table. Works fine for
ARM/AArch64, but has some drawbacks.
* Extra symbols are added to the static symbol table, which inflate
the size of the unstripped binary a little. Stripped binaries are not
affected. This happens because jump table declarations must be
external (because their body is in the inline asm).
* Original functions that were anonymous are now named
<original name>.cfi, and it affects symbolization sometimes. This is
necessary because the only user of these functions is the (inline
asm) jump table, so they had to be added to @llvm.used, which does
not allow unnamed functions.
llvm-svn: 286611
This is a partial revert of r244615 (http://reviews.llvm.org/D11942),
which caused a major regression in debug info quality.
Turning the artificial __MergedGlobal symbols into private symbols
(l__MergedGlobal) means that the linker will not include them in the
symbol table of the final executable. Without a symbol table entry
dsymutil is not able to process the debug info for any of the
merged globals and thus drops the debug info for all of them.
This patch is enabling the old behavior for all MachO targets while
leaving all other targets unaffected.
rdar://problem/29160481
https://reviews.llvm.org/D26531
llvm-svn: 286607
https://reviews.llvm.org/D26526
- Fixed DW_FORM_strp to be correctly sized and extracted for DWARF64
- Added some missing strp variants as well
- Fixed comment typo
llvm-svn: 286603
In preparation for a follow-on patch that improves DWARF parsing speed, clean up DWARFFormValue so that we can get the fixed byte size of a form value given a DWARFUnit, or given the version, address byte size and dwarf32/64.
This patch cleans up code so that everyone is using one of the new DWARFFormValue functions:
static Optional<uint8_t> DWARFFormValue::getFixedByteSize(dwarf::Form Form, const DWARFUnit *U = nullptr);
static Optional<uint8_t> DWARFFormValue::getFixedByteSize(dwarf::Form Form, uint16_t Version, uint8_t AddrSize, bool Dwarf32);
This patch changes DWARFFormValue::skipValue() to rely on the output of DWARFFormValue::getFixedByteSize(...) instead of duplicating the code in each function. This will reduce the number of places in DWARFFormValue we need to change when we add support for new forms.
This patch also starts to support DWARF64 so that we can get correct byte sizes for forms whose size varies between DWARF32 and DWARF64.
To reduce the code duplication a new FormSizeHelper pure virtual class was created that can be created as a FormSizeHelperDWARFUnit when you have a DWARFUnit, or FormSizeHelperManual where you manually specify the DWARF version, address byte size and DWARF32/DWARF64. There is now a single implementation of a function that gets the fixed byte size (instead of two where one took a DWARFUnit and one took the DWARF version, address byte size and DWARFFormat enum) and one function to skip the form values.
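As a rough usage sketch of the fixed-size query (Form, U and Offset are hypothetical variables; the getFixedByteSize overload is the one quoted above):
  if (Optional<uint8_t> FixedSize = DWARFFormValue::getFixedByteSize(Form, &U))
    Offset += *FixedSize;  // skip the attribute value without decoding it
  // Variable-size forms still go through the form-specific logic in skipValue().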
https://reviews.llvm.org/D26526
llvm-svn: 286597
This patch corresponds to review:
https://reviews.llvm.org/D26307
Adds all the intrinsics used for various conversion builtins that will
be added to altivec.h. These are type conversions between various types of
vectors.
llvm-svn: 286596
This adds support for the compare logical and trap (memory)
instructions that were added as part of the miscellaneous
instruction extensions feature with zEC12.
llvm-svn: 286587
This adds support for the LZRF/LZRG/LLZRGF instructions that were
added on z13, and uses them for code generation where appropriate.
SystemZDAGToDAGISel::tryRISBGZero is updated again to prefer LLZRGF
over RISBG where both would be possible.
llvm-svn: 286586
This adds support for the 31-to-64-bit zero extension instructions
LLGT and LLGTR and uses them for code generation where appropriate.
Since this operation can also be performed via RISBG, we have to
update SystemZDAGToDAGISel::tryRISBGZero so that we prefer LLGT
over RISBG in case both are possible. The patch includes some
simplification to the tryRISBGZero code; this is not intended
to cause any (further) functional change in codegen.
llvm-svn: 286585
Summary:
Split ReaderWriter.h which contains the APIs into both the BitReader and
BitWriter libraries into BitcodeReader.h and BitcodeWriter.h.
This is to address Chandler's concern about sharing the same API header
between multiple libraries (BitReader and BitWriter). That concern is
why we create a single bitcode library in our downstream build of clang,
which led to r286297 being reverted as it added a dependency that
created a cycle only when there is a single bitcode library (not two as
in upstream).
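In practice callers now include the split headers instead of the combined one, e.g. (illustrative):
  // Before: #include "llvm/Bitcode/ReaderWriter.h"
  #include "llvm/Bitcode/BitcodeReader.h"  // reading APIs
  #include "llvm/Bitcode/BitcodeWriter.h"  // writing APIs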
Reviewers: mehdi_amini
Subscribers: dlj, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D26502
llvm-svn: 286566
This forces the use of Error::success(), which is in the wide majority
of cases a lot more readable.
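For example (a sketch of the preferred style):
  Error doSomething() {
    // ... do the work ...
    return Error::success();  // clearer than spelling out the ErrorSuccess type
  }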
Differential Revision: https://reviews.llvm.org/D26481
llvm-svn: 286561
This is a replacement for binutils' strings tool. It prints strings found in a
binary (object file, executable, or archive library). It is rather bare and
not functionally equivalent; however, it lays the groundwork necessary for the
strings tool, enabling iterative development of features to reach feature
parity.
llvm-svn: 286556
addSchedBarrierDeps() is supposed to add use operands to the ExitSU
node. The current implementation adds uses for call/barrier instructions
and the MBB live-outs in all other cases, but the use
operands of conditional jump instructions were missed.
Also added code to macrofusion to set the latencies between nodes to
zero to avoid problems with the fusing nodes lingering around in the
pending list now.
Differential Revision: https://reviews.llvm.org/D25140
llvm-svn: 286544
When a function is inlined, each instance is optimized in its own
inlining context. This can produce different remarks all pointing to
the same source line.
This adds a new column on the source view to display the inlining
context.
llvm-svn: 286537
There is no need to track dependencies for constant physregs, as they
don't change their value no matter in what order you read/write to them.
Differential Revision: https://reviews.llvm.org/D26221
llvm-svn: 286526
The NamedRegionTimer initializer without a group name puts the Timer
into the "Misc" group and is (nearly) unused. Remove it.
The only user of this constructor appears to be the HexagonGenInsert pass,
which creates a counter without group to count the complete execution
time of that pass, however since every pass gets a counter by the
PassManager anyway this should be unnecessary. Also removed the
pointless TimerGroup there.
Differential Revision: https://reviews.llvm.org/D25582
llvm-svn: 286524
The generic infrastructure to compute the Newton series for reciprocal and
reciprocal square root was conceived to allow a target to compute the series
itself. However, the original code did not properly consider this condition
if returned by a target. This patch addresses the issues to allow a target
to compute the series on its own.
Differential revision: https://reviews.llvm.org/D22975
llvm-svn: 286523
If the inrange keyword is present before any index, loading from or
storing to any pointer derived from the getelementptr has undefined
behavior if the load or store would access memory outside of the bounds of
the element selected by the index marked as inrange.
This can be used, e.g. for alias analysis or to split globals at element
boundaries where beneficial.
As previously proposed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2016-July/102472.html
Differential Revision: https://reviews.llvm.org/D22793
llvm-svn: 286514
When copying to/from a constant register interferences can be ignored.
Also update the documentation for isConstantPhysReg() to make it more
obvious that this transformation is valid.
Differential Revision: https://reviews.llvm.org/D26106
llvm-svn: 286503
Currently runtime metadata is emitted as an ELF section with name .AMDGPU.runtime_metadata.
However, there is a standard way to convey vendor-specific information about how to run an ELF binary, which is called a vendor-specific note element (http://www.netbsd.org/docs/kernel/elf-notes.html).
This patch lets the AMDGPU backend emit runtime metadata as a note element in the .note section.
Differential Revision: https://reviews.llvm.org/D25781
llvm-svn: 286502
This makes it possible to indent a binary blob by a certain
number of bytes, and also makes some things more idiomatic.
Finally, it integrates this binary blob formatter into ScopedPrinter
which used to have its own implementation of this algorithm.
Differential Revision: https://reviews.llvm.org/D26477
llvm-svn: 286495
r283656 did this in the remark arguments. We also need to do this
in the main function attribute as that is written to YAML as well.
llvm-svn: 286482
This restores the part of r286297 that didn't require adding a
dependency from the Analysis to Object library. There are two parts
to the original fix, and this will address the handling for the case
where locals are used in module level asm.
The part that requires functionality in libObject handles local defs
in module level asm, and was reverted because our downstream build
of clang builds lib/Bitcode into a single library, and this new
dependency introduced a cycle there. I am trying to get that fixed
(see D26502), so for now that change isn't being restored.
llvm-svn: 286475