Do not force the backends to use the target name as the namespace.
Original patch by Mattias Eriksson
Reviewers: stoklund, craig.topper
Reviewed By: stoklund
Subscribers: materi, llvm-commits
Differential Revision: https://reviews.llvm.org/D31322
llvm-svn: 298834
Instead of std::atomic APIs for atomic operations, we now use the APIs
included with sanitizer_common. This lets us avoid depending, at runtime,
on potentially dynamically provided implementations of these atomic
operations.
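For illustration, a minimal sketch of the substitution (the
sanitizer_common names below are from memory; sanitizer_atomic.h is the
authoritative reference):

  #include "sanitizer_common/sanitizer_atomic.h"
  using namespace __sanitizer;

  static atomic_uint32_t NumAllocs;

  static u32 recordAlloc() {
    // Previously: std::atomic<uint32_t>::fetch_add. Like its std
    // counterpart, this returns the old value, but it compiles to an
    // inline operation with no runtime library dependency.
    return atomic_fetch_add(&NumAllocs, 1, memory_order_relaxed);
  }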
Fixes http://llvm.org/PR32274.
llvm-svn: 298833
This patch calls getAddend on a relocation only when the relocation is RELA.
That doesn't really improve runtime performance but should improve
readability as the code now matches the function description.
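A self-contained sketch of the idea (hypothetical types mirroring ELF's
Rel/Rela split, not lld's exact code):

  #include <cstdint>

  struct Rel  { static const bool IsRela = false; };  // no addend field
  struct Rela { static const bool IsRela = true; int64_t r_addend; };

  static int64_t getAddend(const Rel &)   { return 0; }
  static int64_t getAddend(const Rela &R) { return R.r_addend; }

  template <class RelTy> static int64_t computeAddend(const RelTy &R) {
    // Call getAddend only for RELA, as the function description says;
    // a REL addend is implicit in the patched location instead.
    return RelTy::IsRela ? getAddend(R) : 0;
  }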
llvm-svn: 298828
Summary:
During post-commit review of a previous change I made, it was pointed out that const_cast-ing 'this' is technically bad practice. This patch re-implements all of the methods in BasicBlock that did this to call the const BasicBlock version and const_cast the return value instead.
I think there are still many other classes that do similar things. I may look at more in the future.
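A hedged sketch of the pattern (illustrative; getSinglePredecessor stands
in for the many methods the patch touches):

  class BasicBlock {
  public:
    // The const overload does the real work.
    const BasicBlock *getSinglePredecessor() const;

    // The non-const overload delegates to it and const_casts the
    // *result* rather than 'this'.
    BasicBlock *getSinglePredecessor() {
      return const_cast<BasicBlock *>(
          static_cast<const BasicBlock *>(this)->getSinglePredecessor());
    }
  };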
Reviewers: dblaikie
Reviewed By: dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31377
llvm-svn: 298827
This is failing on some of our internal bots because we're using different symbolizers. It doesn't seem important, and no other test checks for column numbers, so let's just remove it.
Differential Revision: https://reviews.llvm.org/D30122
llvm-svn: 298822
While it's usually a bug to call GCD APIs, such as dispatch_after, with NULL as a queue, this often "somehow" works, and TSan should maintain binary compatibility with existing code. This patch makes sure we don't try to call Acquire and Release on NULL queues, and adds one such test case for dispatch_after.
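A hedged sketch of the shape of such a test (not the exact test source):

  #include <dispatch/dispatch.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main() {
    dispatch_time_t When = dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC / 10);
    dispatch_after(When, NULL, ^{  // NULL queue: a bug, but it "works"
      fprintf(stderr, "done\n");
      exit(0);
    });
    dispatch_main();
  }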
Differential Revision: https://reviews.llvm.org/D31355
llvm-svn: 298820
Previously, computeAddend had many parameters but most of them were
used only for MIPS. The MIPS ABI is odd enough that I don't want to mix
it into the regular code path. Splitting the function into non-MIPS
and MIPS parts makes the regular code path easy to follow.
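A hedged sketch of the resulting shape (heavily simplified, hypothetical
types):

  #include <cstdint>

  struct Reloc { bool IsRela; int64_t Addend; };

  static int64_t computeMipsAddend(const Reloc &R) {
    // The odd MIPS-specific cases (paired relocations and friends)
    // live here, out of the regular code path.
    return 0;
  }

  static int64_t computeAddend(const Reloc &R, bool IsMips) {
    int64_t A = R.IsRela ? R.Addend : 0;  // the easy-to-follow common path
    return IsMips ? A + computeMipsAddend(R) : A;
  }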
llvm-svn: 298817
We're seeing binutils ld produce binaries where the import address
table's NameRVA entry is actually a VA instead (i.e. it's already base
relocated), which llvm-readobj then chokes on. Both dumpbin and the
Windows loader are able to handle these binaries correctly, however, and
we can make llvm-readobj handle them correctly too by iterating the
import lookup table (which doesn't have a relocated NameRVA) rather than
the import address table.
The import lookup table and the import address table are supposed to be
identical on disk, and prior to r277298 the import lookup table would be
used by `llvm-readobj -coff-imports` anyway, so this shouldn't have any
functional change (except in the case of our malformed binaries). The
import lookup table can apparently be missing when using old Borland
linkers, so fall back to the import address table in that case.
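A hedged sketch of the selection logic (hypothetical field and helper
names, not llvm-readobj's exact code):

  #include <cstdint>

  struct ImportDirEntry {
    uint32_t ImportLookupTableRVA;   // 0 with some old Borland linkers
    uint32_t ImportAddressTableRVA;  // on-disk copy may be pre-relocated
  };

  // Prefer the lookup table, whose NameRVA entries binutils ld leaves
  // un-relocated; fall back to the IAT only when the lookup table is
  // absent.
  static uint32_t entryListRVA(const ImportDirEntry &Dir) {
    return Dir.ImportLookupTableRVA ? Dir.ImportLookupTableRVA
                                    : Dir.ImportAddressTableRVA;
  }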
Resolves PR31766.
Differential Revision: https://reviews.llvm.org/D31362
llvm-svn: 298812
Summary:
Add basic OpenBSD support. This is enough to be able to analyze core dumps for OpenBSD/amd64, OpenBSD/arm, OpenBSD/arm64 and OpenBSD/i386.
Note that part of the changes to source/Plugins/ObjectFile/ELF/ObjectFileELF.cpp fix a bug that probably affects other platforms as well. The GetProgramHeaderByIndex() interface uses 1-based indices, but in some cases, when looping over the headers, the loop started at 0 and missed the last header. This caused problems on OpenBSD, since OpenBSD core dumps have the PT_NOTE segment as the last program header.
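A minimal sketch of the off-by-one (simplified; GetProgramHeaderCount and
VisitHeader are hypothetical stand-ins):

  size_t Count = GetProgramHeaderCount();

  for (size_t I = 0; I < Count; ++I)   // buggy: header 'Count' is never seen
    VisitHeader(GetProgramHeaderByIndex(I));

  for (size_t I = 1; I <= Count; ++I)  // fixed: indices are 1-based
    VisitHeader(GetProgramHeaderByIndex(I));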
Reviewers: joerg, labath, krytarowski
Reviewed By: krytarowski
Subscribers: aemerson, emaste, rengolin, srhines, krytarowski, mgorny, lldb-commits
Tags: #lldb
Differential Revision: https://reviews.llvm.org/D31131
llvm-svn: 298810
There are several problems with the current annotations (AnnotateRWLockCreate and friends):
- they don't fully support deadlock detection (we need a hook _before_ mutex lock)
- they don't support insertion of random artificial delays to perturb execution (again we need a hook _before_ mutex lock)
- they don't support setting extended mutex attributes like read/write reentrancy (only "linker init" was bolted on)
- they don't support setting mutex attributes if a mutex doesn't have a "constructor" (e.g. static, Java, or Go mutexes)
- they don't ignore synchronization inside of lock/unlock operations which leads to slowdown and false negatives
The new annotations solve all of the above problems. See tsan_interface.h for the interface specification and comments.
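A hedged usage sketch of the new interface (tsan_interface.h remains
authoritative for the exact signatures and flags):

  #include <atomic>
  #include <sanitizer/tsan_interface.h>

  class SpinLock {
    std::atomic<bool> Locked{false};

  public:
    SpinLock() { __tsan_mutex_create(this, 0); }
    ~SpinLock() { __tsan_mutex_destroy(this, 0); }

    void lock() {
      // The hook *before* taking the lock is what enables deadlock
      // detection and injection of artificial delays.
      __tsan_mutex_pre_lock(this, 0);
      while (Locked.exchange(true, std::memory_order_acquire)) {
      }
      __tsan_mutex_post_lock(this, 0);
    }

    void unlock() {
      // Synchronization performed between these hooks is ignored by TSan.
      __tsan_mutex_pre_unlock(this, 0);
      Locked.store(false, std::memory_order_release);
      __tsan_mutex_post_unlock(this, 0);
    }
  };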
Reviewed in https://reviews.llvm.org/D31093
llvm-svn: 298809
instead of `Ty`.
The `Ty` suffix is much more commonly used for LLVM `Type` variable
names, so this seemed like a particularly confusing collision.
llvm-svn: 298808
Fixed -verify-machineinstrs errors in fast-isel-select-sse.ll (one of many in PR27481).
The VMOVSSZrr/VMOVSSZrrk and VMOVSDZrr/VMOVSDZrrk instructions were assuming both source registers were VR128X, when the second is actually supposed to be FR32X/FR64X.
Differential Revision: https://reviews.llvm.org/D31200
llvm-svn: 298805
The first variant contains all current transformations except
transforming switches into lookup tables. The second variant
contains all current transformations.
The switch-to-lookup-table conversion results in code that is more
difficult for other passes to analyze and optimize. Most importantly,
it can inhibit Dead Code Elimination. As such it is often beneficial to
only apply this transformation very late. A common example is inlining,
which can often result in range restrictions for the switch expression.
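A hedged illustration (made-up code, not from the patch) of the inlining
interaction:

  static int classify(int X) {
    switch (X) {
    case 0: return 10;
    case 1: return 11;
    case 2: return 12;
    case 3: return 13;
    default: return -1;
    }
  }

  int caller() {
    // Once classify is inlined here, the switch expression is the
    // constant 2, so the switch and all of its dead cases fold away --
    // unless an early pass already turned it into a lookup table.
    return classify(2);
  }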
Changes in execution time according to LNT:
SingleSource/Benchmarks/Misc/fp-convert +3.03%
MultiSource/Benchmarks/ASC_Sequoia/CrystalMk/CrystalMk -11.20%
MultiSource/Benchmarks/Olden/perimeter/perimeter -10.43%
and a couple of smaller changes. For perimeter it also results in a 2.6%
smaller binary.
Differential Revision: https://reviews.llvm.org/D30333
llvm-svn: 298799
This moves it to the iterator facade utilities giving it full random
access semantics, etc. It can also now be used with standard algorithms
like std::all_of and std::any_of and range adaptors like llvm::reverse.
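For example (a hedged sketch; type names are approximate for this
revision):

  #include <algorithm>
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  static bool hasNegativeCase(SwitchInst &SI) {
    // std::any_of over the case iterators now just works.
    return std::any_of(SI.case_begin(), SI.case_end(),
                       [](const SwitchInst::CaseHandle &C) {
                         return C.getCaseValue()->isNegative();
                       });
  }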
Also make the iteration semantics match those of every other iterator
and forbid decrementing past the begin iterator. This was used as
a hacky way to work around iterator invalidation. However, every
instance trying to do this failed to actually avoid touching invalid
iterators despite the clear documentation that the removed and all
subsequent iterators become invalid, including the end iterator. So I've
added a return of the next iterator to removeCase and rewritten the
loops that were doing this to correctly follow the iterator pattern of
either incrementing, or removing and assigning fresh values to the
iterator and the end.
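A minimal sketch of the corrected loop (the predicate is an arbitrary
example):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  static void eraseNegativeCases(SwitchInst &SI) {
    for (auto I = SI.case_begin(), E = SI.case_end(); I != E;) {
      if (I->getCaseValue()->isNegative()) {
        I = SI.removeCase(I);  // removeCase now returns the next iterator
        E = SI.case_end();     // removal invalidates end; re-read it
      } else {
        ++I;
      }
    }
  }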
In one case we were trying to go backwards to make this cleaner but it
doesn't actually work. I've made that code match the code we use
everywhere else to remove cases as we iterate. This changes the order of
cases in one test output, so I moved that test to CHECK-DAG so it
wouldn't care -- the order isn't semantically meaningful anyway.
llvm-svn: 298791
C is short for Chunk, but we are no longer using that term.
RI is probably short for relocation iterator, but this is not an iterator.
llvm-svn: 298786
Previously, relocation offsets were recalculated for .eh_frame sections
inside the main loop, and that messed up the main loop. This patch
separates that logic into a dedicated class.
llvm-svn: 298785