Summary:
In the case of DWO files, DIERef stores the compile unit offset in the main object file, not in the DWO.
The implementation of SymbolFileDWARFDwo::GetDIE inherited from SymbolFileDWARF tried to look up
the compile unit in the DWO based on the main object file offset (and failed). I change the
implementation to verify that the DIERef indeed references a compile unit belonging to this DWO
and then look up the DIE based on the DIE offset alone.
Includes a couple of fixes for mismatched struct/class tags.
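A minimal sketch of the new lookup behaviour described above (hypothetical types, not LLDB's actual API):
  #include <cstdint>
  #include <map>

  // Illustrative sketch only. The DWO symbol file first checks that the
  // reference targets the compile unit it backs, then resolves purely by DIE
  // offset, ignoring the compile unit offset (which lives in the main file).
  struct DIERefSketch { uint64_t cu_offset; uint64_t die_offset; };

  struct DwoSymbolFileSketch {
    uint64_t main_file_cu_offset;            // offset of our CU in the main object file
    std::map<uint64_t, int> dies_by_offset;  // stand-in for the DIE table

    const int *GetDIE(const DIERefSketch &ref) const {
      if (ref.cu_offset != main_file_cu_offset)
        return nullptr;                       // not a DIE belonging to this DWO
      auto it = dies_by_offset.find(ref.die_offset);
      return it == dies_by_offset.end() ? nullptr : &it->second;
    }
  };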
Reviewers: tberghammer, clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18646
llvm-svn: 265011
Summary:
The bug was that microMIPS's [ls]w[lr]e instructions claimed to support a
12-bit offset when they only support a 9-bit one.
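To make the ranges concrete, a hedged sketch using LLVM's isInt helper (the real constraint lives in the Mips TableGen definitions; the signedness here is an assumption):
  #include "llvm/Support/MathExtras.h"
  #include <cstdint>

  // Assuming a signed immediate: 9 bits covers [-256, 255], whereas the
  // incorrectly-claimed 12 bits would cover [-2048, 2047].
  bool fitsLwleOffset(int64_t Offset) {
    return llvm::isInt<9>(Offset);
  }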
Reviewers: vkalintiris
Subscribers: llvm-commits, dsanders
Differential Revision: http://reviews.llvm.org/D18434
llvm-svn: 265010
We have to check the final value that is written.
I don't think this has any real-world implications (unless something
supports unaligned instructions), but it unblocks simplifying the handling
of PC-relative relocations.
llvm-svn: 265009
It is not widely used and has been removed from OpenCL v2.1.
This change modifies Clang to parse the attribute for OpenCL
but ignore it afterwards.
Patch by Liu Yaxun (Sam)!
Differential Revision: http://reviews.llvm.org/D17861
llvm-svn: 265006
PPC has a vector popcount; this lets the vectorizer use the correct cost
for it. Tweak the X86 test to use an intrinsic that's actually scalarized (we
have a somewhat efficient lowering for vector popcount using SSE, and the
cost model now reflects that).
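For illustration, the kind of loop that can now be vectorized into a vector popcount on PPC (a hedged example, not taken from the test suite):
  #include <cstdint>

  // With a sensible cost for vector popcount, the loop vectorizer can turn
  // this into vector ctpop instead of scalarizing the builtin.
  void popcounts(const uint32_t *in, uint32_t *out, int n) {
    for (int i = 0; i < n; ++i)
      out[i] = __builtin_popcount(in[i]);
  }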
llvm-svn: 265005
* LLVMDisposeMessage lives in llvm-c/Core.h; include this file where necessary (see the sketch below)
* LLVMAddTargetData has been removed, follow suit in the bindings
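Regarding the first point, a minimal usage sketch of the C API (not taken from the bindings themselves):
  #include "llvm-c/Core.h"

  void dumpModule(LLVMModuleRef M) {
    // LLVMPrintModuleToString returns a message that must be released with
    // LLVMDisposeMessage, which is declared in llvm-c/Core.h.
    char *IR = LLVMPrintModuleToString(M);
    // ... use IR ...
    LLVMDisposeMessage(IR);
  }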
Differential Revision: http://reviews.llvm.org/D18633
llvm-svn: 265001
alignment on Darwin.
The Itanium C++ ABI specifies that _Unwind_Exception should be double-word
aligned (16 bytes). To conform to the ABI, libraries implementing exception
handling declare the struct with __attribute__((aligned)), which aligns
the unwindHeader field (and the end of __cxa_exception) to the default
target alignment (typically 16 bytes).
  struct __cxa_exception {
    ...
    // struct is declared with __attribute__((aligned)).
    _Unwind_Exception unwindHeader;
  };
Based on the assumption that _Unwind_Exception is declared with
__attribute__((aligned)), ItaniumCXXABI::getAlignmentOfExnObject returns
the target default alignment for __attribute__((aligned)). It turns out
that libc++abi, which is used on Darwin, doesn't declare the struct with
the attribute and therefore doesn't guarantee that unwindHeader is
aligned to the alignment specified by the ABI, which in some cases
causes the program to crash because of unaligned memory accesses.
This commit avoids crashes due to unaligned memory accesses by having
getAlignmentOfExnObject return an 8-byte alignment on Darwin. I've only
fixed the problem for Darwin, but we should also figure out whether other
platforms using libc++abi need similar fixes.
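A small illustration of the assumption involved (hedged; the exact value produced by the bare aligned attribute is target-dependent):
  #include <cstdio>

  struct Plain { char c; };
  struct __attribute__((aligned)) Padded { char c; };

  int main() {
    // With no argument, __attribute__((aligned)) bumps the alignment to the
    // target's default maximum useful alignment, typically 16 bytes; libc++abi
    // does not apply it to _Unwind_Exception, hence the mismatch on Darwin.
    std::printf("%zu %zu\n", alignof(Plain), alignof(Padded));
    return 0;
  }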
rdar://problem/25314277
Differential revision: http://reviews.llvm.org/D18479
llvm-svn: 264998
This way, once we teach MatchBinaryOp to map more things into arithmetic,
the non-wrapping add recurrence construction will understand them too.
Right now MatchBinaryOp still only understands arithmetic, so this is
solely a code-reorganization change.
llvm-svn: 264994
This warning sometimes recurses infinitely on CXXRecordDecls from
ill-formed recursive classes that have fields of their own type. Skip processing
these classes to prevent this from happening.
Fixes https://llvm.org/bugs/show_bug.cgi?id=27142
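The kind of ill-formed input involved looks roughly like this (an illustrative example, not the exact test case):
  // Ill-formed: a class cannot contain a non-static data member of its own
  // (still incomplete) type. After error recovery, the warning analysis used
  // to recurse through this field forever.
  struct Recursive {
    Recursive member;
  };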
llvm-svn: 264991
map's allocator may only be used to construct objects of 'value_type',
or in this case 'pair<const Key, Value>'. In order to respect this requirement
in operator[], which requires default-constructing the 'mapped_type', we have
to use pair's piecewise constructor with '(tuple<Key>, tuple<>)'.
Unfortunately we still need to provide a fallback implementation for C++03,
since we don't have <tuple> there. Even worse, this fallback is the last remaining
user of '__hash_map_node_destructor' and '__construct_node_with_key'.
This patch also switches try_emplace over to __tree.__emplace_unique_key_args.
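Conceptually, the element construction in operator[] now works like the following simplified sketch (outside of libc++'s node and allocator machinery):
  #include <string>
  #include <tuple>
  #include <utility>

  // Only pair<const Key, Value> itself is constructed; the mapped_type is
  // default-constructed in place via pair's piecewise constructor.
  using Element = std::pair<const int, std::string>;

  Element make_element(int key) {
    return Element(std::piecewise_construct,
                   std::forward_as_tuple(key),
                   std::forward_as_tuple());
  }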
llvm-svn: 264989
When dealing with complex<float>, and similar structures with two
single-precision floating-point numbers, especially when such things are being
passed around by value, we'll sometimes end up loading both float values by
extracting them from one 64-bit integer load. It looks like this:
  t13: i64,ch = load<LD8[%ref.tmp]> t0, t6, undef:i64
  t16: i64 = srl t13, Constant:i32<32>
  t17: i32 = truncate t16
  t18: f32 = bitcast t17
  t19: i32 = truncate t13
  t20: f32 = bitcast t19
The problem, especially before the P8 where those bitcasts aren't legal (and
get expanded via the stack), is that it would have been better to use two
floating-point loads directly. Here we add a target-specific DAGCombine to do
just that. In short, we turn:
  ld 3, 0(5)
  stw 3, -8(1)
  rldicl 3, 3, 32, 32
  stw 3, -4(1)
  lfs 3, -4(1)
  lfs 0, -8(1)
into:
  lfs 3, 4(5)
  lfs 0, 0(5)
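The source-level pattern that tends to trigger this looks something like the following (an illustrative example, not the original test case):
  #include <complex>

  // Passing complex<float> by value often moves both floats through a single
  // 64-bit slot; the new DAG combine turns the split back into two lfs loads.
  float sum_parts(std::complex<float> c) {
    return c.real() + c.imag();
  }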
llvm-svn: 264988
This patch is fairly large and contains a number of changes. The changes all work towards
allowing __tree to properly handle __value_type, especially when inserting into the __tree.
I chose not to break this change into smaller patches because it wouldn't be possible to
write meaningful standard-compliant tests for each patch.
It is very similar to r260513, "[libcxx] Teach __hash_table how to handle unordered_map's __hash_value_type".
Changes in <map>
* Remove __value_type's constructors because it should never be constructed directly.
* Make map::emplace and multimap::emplace forward to __tree and remove the old definitions
* Remove "__construct_node" map and multimap member functions. Almost all of the construction is done within __tree.
* Fix map's move constructor to access "__value_type.__nc" directly and pass this object to __tree::insert.
Changes in <__tree>
* Add traits to detect, handle, and unwrap map's "__value_type" (a sketch of the idea follows this list).
* Convert methods taking "value_type" to take "__container_value_type" instead. Previously these methods caused
unwanted implicit conversions from "std::pair<Key, Value>" to "__value_type<Key, Value>".
* Delete __tree_node and __tree_node_base's constructors and assignment operators. The node types should never be constructed
because the "__value_" member of __tree_node must be constructed directly by the allocator.
* Make the __tree_node_destructor class and "__construct_node" methods unwrap "__node_value_type" into "__container_value_type" before invoking the allocator. The user's allocator can only be used to construct and destroy the container's value_type. Passing it map's "__value_type" was incorrect.
* Clean up the "__insert" and "__emplace" methods. Have __insert forward to an __emplace function wherever possible to reduce
code duplication. __insert_unique(value_type const&) and __insert_unique(value_type&&) forward to __emplace_unique_key_args.
These functions will not allocate a new node if the value is already in the tree.
* Change the __find* functions to take the "key_type" directly instead of passing in "value_type" and unwrapping the key later.
This change allows the find functions to be used without having to construct a "value_type" first. This allows for a number
of optimizations.
* Teach __move_assign and __assign_multi methods to unwrap map's __value_type.
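A rough sketch of the unwrapping traits mentioned above (hypothetical names, not libc++'s internal spelling):
  #include <type_traits>
  #include <utility>

  // Stand-in for map's internal wrapper around pair<const Key, Value>.
  template <class Key, class Value> struct map_value_type_wrapper;

  // Detect the wrapper so __tree-style code can unwrap it before ever
  // touching the user's allocator, which may only construct value_type.
  template <class T> struct is_map_value_type : std::false_type {};
  template <class Key, class Value>
  struct is_map_value_type<map_value_type_wrapper<Key, Value>> : std::true_type {};

  template <class T> struct container_value_type { using type = T; };
  template <class Key, class Value>
  struct container_value_type<map_value_type_wrapper<Key, Value>> {
    using type = std::pair<const Key, Value>;
  };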
llvm-svn: 264986
The test case was defining and using a function 'notExported()', but
the FileCheck checks were checking for the name 'not_exported'. This
changes the test to use 'notExported' across the board. Also, the test
defined a function 'not_defined()', but didn't have any checks related
to it. For consistency, this name is changed to 'notDefined'. A later
commit will add checks for 'notDefined'.
Patch by Warren Ristow!
llvm-svn: 264984
This reverts commit r264945.
The commit only removed an unreachable in a method with a covered switch, but
GCC is likely to warn on this, and the coding standards recommend just leaving
in the unreachable.
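For context, the pattern in question looks like this (an illustrative example using llvm_unreachable, not the reverted code itself):
  #include "llvm/Support/ErrorHandling.h"

  enum class Kind { A, B };

  int describe(Kind K) {
    switch (K) {
    case Kind::A:
      return 1;
    case Kind::B:
      return 2;
    }
    // The switch covers every enumerator, but GCC may still warn about
    // control reaching the end of the function, so the coding standards
    // recommend keeping this unreachable in place.
    llvm_unreachable("unhandled Kind");
  }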
llvm-svn: 264983
make_dynamic_error_code was used to create a std::error_code with
a std::string message. Now that we are migrating to llvm::Error,
almost all calls to these make_dynamic_error_code methods are gone.
The single remaining call is the one
inside GenericError::convertToErrorCode(). That method is only called
from File::doParse(), which should be a temporary situation. We need
to work out how to deal with File::parse() caching the error result from
doParse(). Caching errors isn't supported in the new scheme, and probably
isn't needed here, but we need to work that out.
Once that's done, dynamic error and all the utilities around it can be deleted.
llvm-svn: 264982
These methods weren't really throwing errors. The only error used
was that a file could not be found, which isn't really an error at all,
as we are searching paths and libraries for a file. All of the callers
also ignored errors and just used the returned path if one was available.
Change them to return Optional<StringRef>, as that actually reflects what
we are trying to do here: optionally find a given path.
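The shape of the change, roughly (hypothetical function name, not lld's actual declaration):
  #include "llvm/ADT/Optional.h"
  #include "llvm/ADT/StringRef.h"

  // Hypothetical shape of the search helpers after this change: return the
  // path when the file is found, llvm::None when it is not, instead of
  // threading a std::error_code through callers that ignore it anyway.
  llvm::Optional<llvm::StringRef> searchLibrary(llvm::StringRef Name);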
llvm-svn: 264979
Summary:
As discussed on llvm-dev[1].
This change adds the basic boilerplate code around having this intrinsic
in LLVM:
- Changes in Intrinsics.td, and the IR Verifier
- A lowering pass to lower @llvm.experimental.guard to normal
control flow
- Inliner support
[1]: http://lists.llvm.org/pipermail/llvm-dev/2016-February/095523.html
Reviewers: reames, atrick, chandlerc, rnk, JosephTremoulet, echristo
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D18527
llvm-svn: 264976
In some cases, when we encounter a direct function call with an
incorrect number of arguments, we'll emit a diagnostic, and pretend that
the call to the function was valid. For example, in C:
  int foo();
  int a = foo(1);
Prior to this patch, we'd get an ICE if foo had an enable_if attribute,
because CheckEnableIf assumes that the number of arguments it gets
passed is valid for the function it's passed. Now, we check that the
number of args looks valid prior to checking enable_if conditions.
This fix was not done inside of CheckEnableIf because the problem
presently can only occur in one caller of CheckEnableIf (ActOnCallExpr).
Additionally, checking inside of CheckEnableIf would make us emit
multiple diagnostics for the same error (one "enable_if failed", one
"you gave this function the wrong number of arguments"), which seems
worse than just complaining about the latter.
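For reference, Clang's enable_if conditions are evaluated against the arguments of each call, which is why a call with the wrong number of arguments (like the one above) could reach the crash; an illustrative declaration, not the original reproducer:
  // The enable_if condition refers to the parameters, so CheckEnableIf needs
  // the call's argument count to match the function's parameters before the
  // condition can be evaluated.
  int safe_div(int num, int den)
      __attribute__((enable_if(den != 0, "denominator must be nonzero")));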
llvm-svn: 264975
These methods were responsible for some of the few remaining calls
to llvm::errorCodeToError. Converting them gives us more Errors
in the API and fewer error_codes.
llvm-svn: 264974
The current ModuleDependencyCollector has an AST listener to collect
header files present in loaded modules, but this isn't enough to collect
all headers needed by the crash reproducer. One of the reasons is that
the AST writer doesn't write symbolic-link header paths into the pcm modules,
so the listeners on the reader side can only collect the real files.
Since the module maps could contain submodules that use headers which
are symbolic links, not collecting those prevents the reproducer scripts
from regenerating the modules.
For instance:
usr/include/module.map:
...
module pthread {
  header "pthread.h"
  export *
  module impl {
    header "pthread_impl.h"
    export *
  }
}
...
usr/include/pthread/pthread_impl.h
usr/include/pthread_impl.h -> pthread/pthread_impl.h
The AST dump for the module above:
<SUBMODULE_HEADER abbrevid=6/> blob data = 'pthread_impl.h'
<SUBMODULE_TOPHEADER abbrevid=7/> blob data = '/<path_to_sdk>/usr/include/pthread/pthread_impl.h'
Note that we don't have "usr/include/pthread_impl.h", which is requested
by the module.map in case we want to reconstruct the module in the
reproducer. The reason the original symbolic-link path isn't used is
that the headers are kept by name and requested through the
FileManager, which uniques files and returns only the real path.
To fix that, add a callback to be invoked every time a header is added
while parsing module maps, and hook that up to the module dependency
collector. This callback is only registered when generating the
reproducer.
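A rough sketch of the callback hook (hypothetical names; the real interface lives in Clang's module-map parsing code):
  #include "llvm/ADT/StringRef.h"

  // Hypothetical listener: the module-map parser reports each header path as
  // written in the module map, before the FileManager collapses symlinks, so
  // the dependency collector can copy the symlinked path into the reproducer.
  class ModuleMapHeaderListener {
  public:
    virtual ~ModuleMapHeaderListener() = default;
    virtual void headerAdded(llvm::StringRef HeaderPathAsWritten) = 0;
  };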
Differential Revision: http://reviews.llvm.org/D18585
rdar://problem/24499339
llvm-svn: 264971
The VFS YAML files contain empty directory entries to describe returning
from a subdirectory before describing new files in the parent.
In the future, we should properly sort and write YAML files, avoiding
such empty dirs and mitigating the extra recursion cost. However, since
previously generated YAMLs rely on this, make the traversal work in
their presence.
rdar://problem/24499339
llvm-svn: 264970
We already have this flag in most of the file, but we need it everywhere
else, to disable the NVVMReflect pass, which we're explicitly checking
doesn't run here. (Upcoming changes to llvm will cause it to be run.)
llvm-svn: 264969
Summary:
__atomic_load allows its first argument to be a pointer to a const type. However, the second argument is an output parameter and must be a pointer to non-const.
This patch fixes the signature of __atomic_load generated by clang so that it respects the above requirements.
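For reference, a minimal use of the builtin showing the const/non-const split between the two pointer arguments:
  // The first argument may point to const; the second receives the loaded
  // value and therefore must point to non-const storage.
  int load_shared(const int *shared) {
    int out;
    __atomic_load(shared, &out, __ATOMIC_SEQ_CST);
    return out;
  }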
Reviewers: rsmith, majnemer
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D13420
llvm-svn: 264967
The size savings are significant, and from what I can tell, both ICC and GCC do this.
Differential Revision: http://reviews.llvm.org/D18573
llvm-svn: 264966
Summary:
This prevents errors when you invoke clang with a flag that the NVPTX
toolchain doesn't support. For example, on x86-64,
clang -mthread-model single -x c++ /dev/null -o /dev/null
should output just one error about "invalid thread model 'single' in
'-mthread-model single' for this target"; x86-64 doesn't support
-mthread-model, but we shouldn't also instantiate an NVPTX target!
Reviewers: echristo
Subscribers: tra, sunfish, cfe-commits
Differential Revision: http://reviews.llvm.org/D18629
llvm-svn: 264965
With this patch, a constexpr function is implicitly host+device
unless:
a) it's a variadic function (variadic functions are not allowed on the
device side), or
b) it's preceded by a __device__ overload in a system header.
The restriction on overloading __host__ __device__ functions on the
basis of their CUDA attributes remains in place, but we use (b) to allow
us to define __device__ overloads for constexpr functions in cmath,
which would otherwise be __host__ __device__ and thus not overloadable.
You can disable this behavior with -fno-cuda-host-device-constexpr.
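A small illustration of the new default (a hedged example; __attribute__((device)) is clang's underlying spelling of __device__, and the code assumes CUDA compilation):
  // With -fcuda-host-device-constexpr (the new default), a constexpr function
  // like this is implicitly __host__ __device__ and callable from device code:
  constexpr int square(int x) { return x * x; }

  // Passing -fno-cuda-host-device-constexpr restores the old behaviour, where
  // square would be host-only unless annotated explicitly.
  __attribute__((device)) int device_square(int x) { return square(x); }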
Reviewers: tra, rnk, rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D18380
llvm-svn: 264964
Summary:
This is necessary for a future patch which will make all constexpr
functions implicitly host+device. cmath may declare constexpr
functions, but these we do *not* want to be host+device. The forward
declarations added in this patch prevent this (because the rule will be that
constexpr functions become implicitly host+device unless they're
preceded by a decl with __device__).
Reviewers: tra
Subscribers: cfe-commits, rnk, rsmith
Differential Revision: http://reviews.llvm.org/D18539
llvm-svn: 264963