Summary:
Fixes an issue where a test attempts to use -mcpu=cortex-a15 on non-ARM targets.
This triggers an assertion on MIPS, since the MIPS backend doesn't know what
ABI to use by default for unrecognized processors.
Reviewers: rengolin
Reviewed By: rengolin
CC: llvm-commits, aemerson, rengolin
Differential Revision: http://llvm-reviews.chandlerc.com/D2876
llvm-svn: 202256
Summary:
This should fix the MCJIT unit tests that were broken by r201792 on the MIPS buildbot.
MIPS currently uses the default implementation of sys::getHostCPUName(),
which always returns "generic". For now, we will accept "generic" and coerce it to
"mips32" or "mips64" depending on the target architecture like we do for empty
CPU names.
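As a rough sketch of that coercion (a hypothetical helper, not the actual
LLVM code):

#include <string.h>

/* Hypothetical helper mirroring the behavior described above: treat
   "generic" the same as an empty CPU name and pick a default that
   matches the target architecture. */
static const char *coerce_mips_cpu(const char *cpu, int is_64bit_target) {
  if (cpu == NULL || cpu[0] == '\0' || strcmp(cpu, "generic") == 0)
    return is_64bit_target ? "mips64" : "mips32";
  return cpu;
}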
Reviewers: jacksprat, matheusalmeida
Reviewed By: jacksprat
Differential Revision: http://llvm-reviews.chandlerc.com/D2878
llvm-svn: 202253
If all input files are compatible with Structured Exception Handling, the
linker is supposed to create an executable with a table of SEH handlers. The
table consists of exception handler entry point addresses.
The basic idea of SEH in the x86 Microsoft ABI is to list all valid entry
points of exception handlers in read-only memory, so that an attacker cannot
override the addresses in it. In the x86 ABI, data for exception handling is
mostly on the stack, so it's vulnerable to stack overflow attacks. In order to
protect against them, the Windows runtime uses the table to check a return
address, to ensure that the address is really a valid entry point for an
exception handler.
The compiler emits a list of exception handler functions to the .sxdata
section. It also emits a marker symbol "@feat.00" to indicate that the object
is compatible with SEH. SEH is a relatively new feature for COFF, and mixing
SEH-compatible and SEH-incompatible objects results in an invalid executable,
hence the marker.
If all input files are compatible with SEH, LLD emits a SEH table. The SEH
table needs to be pointed to by the Load Configuration structure, so when
emitting a SEH table LLD emits that too. The address of the Load Configuration
is stored in the file header.
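A minimal sketch of the gating logic (the struct and field names below are
made up for illustration; this is not LLD's actual API):

#include <stddef.h>

struct InputFile {
  int has_seh_marker; /* saw the "@feat.00" marker symbol */
};

/* Emit the SEH table (and the Load Configuration pointing at it) only
   when every input object declared SEH compatibility. */
static int should_emit_seh_table(const struct InputFile *files, size_t n) {
  for (size_t i = 0; i < n; i++)
    if (!files[i].has_seh_marker)
      return 0; /* one incompatible object disables the table */
  return 1;
}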
llvm-svn: 202248
the default.
Based on the patch by Matt Arsenault, D1764!
I switched one place to use the more direct pointer type to compute the
desired address space, and I reworked the memcpy rewriting section to
reflect significant refactorings that this patch helped inspire.
Thanks to several of the folks who helped review and improve the patch
as well.
llvm-svn: 202247
Bug fix for pr18841:
http://llvm.org/bugs/show_bug.cgi?id=18841
This change creates a stub Python readline.so module that does almost
nothing. Its whole purpose is to prevent Python from loading the real
module, something it does during the embedded Python interpreter's
initialization sequence (and way before lldb ever requests it within
embedded_interpreter.py).
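A do-nothing Python 2 extension module along these lines might look like the
following (an illustrative sketch, not necessarily the exact code this change
adds):

#include <Python.h>

/* No methods: the stub exists only to shadow the real readline module so
   that Python's initialization never loads the GNU-readline-linked one. */
static PyMethodDef stub_methods[] = {
    {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC initreadline(void) {
  (void)Py_InitModule("readline", stub_methods);
}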
On Ubuntu 12.04 and 13.10 x86_64, and in the Python 2.7.6 tree, the
stock Python readline module links against the GNU readline library.
This appears to be the case on all Pythons except where __APPLE__ is
defined. LLDB now requires linking against the libedit library.
Something about having both libedit.so and libreadline.so linked into
the same process space is causing the Python readline.so to trigger a
NULL memory access. I have put in a separate patch to python.org.
This suppression of embedded interpreter readline support can be
removed if at least any one of the following happens:
1. The stock python distribution accepts a patch similar to what I
submitted to Python 2.7.6's Modules/readline.c file.
2. The stock python distribution implements Modules/readline.c in
terms of libedit's readline compatibility mode (i.e. essentially
compiles it the way __APPLE__ compiles that module) under Linux.
3. A clean-room implementation of the Python readline module is
written against libedit (either readline compatibility mode or
native libedit). This could be implemented within the readline.cpp
file that this change introduces. It cannot be a fork of Python's
readline.c module due to LLVM licensing.
The net effect of this change on Linux is that the embedded python's
readline support will not exist.
llvm-svn: 202243
to work independently for the slice side and the other side.
This allows us to only compute the minimum of the two when we actually
rewrite to a memcpy that needs to take the minimum, and preserve higher
alignment for one side or the other when rewriting to loads and stores.
This fix was inspired by seeing the result of some refactoring that
makes addrspace handling better.
llvm-svn: 202242
target_link_libraries(INTERFACE) doesn't add inter-target dependencies in
add_library, although final targets still depend on the whole set of
dependent libraries.
This allows most libraries to be built in parallel.
target_link_libraries(PRIVATE) is used for shared libraries.
Each dependent library is linked into the target .so, and its users will not
see its grandchildren. For example:
- libclang.so contains all the libclang*.a libraries it needs.
- c-index-test requires only libclang.so.
FIXME: lld is tweaked only minimally. Adding INTERFACE to each library would
be a better approach.
llvm-svn: 202241
For now, use both keywords, INTERFACE and PRIVATE, via the variables
- ${cmake_2_8_12_INTERFACE}
- ${cmake_2_8_12_PRIVATE}
These can be cleaned up when we require CMake 2.8.12.
llvm-svn: 202239
D1764, which in turn set off the other refactorings to make
'getSliceAlign()' a sensible thing.
There are two possible inputs to the required alignment of a memory
transfer intrinsic: the alignment constraints of the source and the
destination. If we are *only* introducing a (potentially new) offset
onto one side of the transfer, we don't need to consider the alignment
constraints of the other side. Use this to simplify the logic feeding
into alignment computation for unsplit transfers.
Also, hoist the clamp of the magical zero alignment for these intrinsics
to the more customary one alignment early. This lets several other
conditions melt away.
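In sketch form (illustrative names, not the actual SROA code):

#include <stdint.h>

static uint64_t min_u64(uint64_t a, uint64_t b) { return a < b ? a : b; }

static uint64_t transfer_align(uint64_t adjusted_align, uint64_t other_align,
                               int other_side_unchanged) {
  /* Clamp the "magic" zero alignment to the customary 1 up front. */
  if (adjusted_align == 0) adjusted_align = 1;
  if (other_align == 0) other_align = 1;
  /* When only one side gets a new offset, only that side's constraint
     matters; otherwise the transfer needs the minimum of the two. */
  return other_side_unchanged ? adjusted_align
                              : min_u64(adjusted_align, other_align);
}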
No functionality changed. There is a further improvement this exposes
which *will* change functionality, but that's arriving in a separate
patch.
llvm-svn: 202232
rewriting logic: don't pass custom offsets for the adjusted pointer to
the new alloca.
We always passed NewBeginOffset here. Sometimes we spelled it
BeginOffset, but only when they were in fact equal. What's worse, the API
is set up so that you can't reasonably call it with anything else -- it
assumes that you're passing it an offset relative to the *original*
alloca that happens to fall within the new one. That's the whole point
of NewBeginOffset, it's the clamped beginning offset.
No functionality changed.
llvm-svn: 202231
alignment of the slice being rewritten, not any arbitrary offset.
Every caller is really just trying to compute the alignment for the
whole slice, never at some arbitrary offset within it. They are also just
passing a type when they have one to see if we can skip an explicit
alignment in the IR by using the type's alignment. This makes for a much
simpler interface.
Another refactoring inspired by the addrspace patch for SROA, although
only loosely related.
llvm-svn: 202230
consistency with memcpy rewriting, and fix a latent bug in the alignment
management for memset.
The alignment issue is that getAdjustedAllocaPtr is computing the
*relative* offset into the new alloca, but the alignment wasn't being set
to the relative offset; it was using the absolute offset, which is an
offset into the old alloca.
I don't think it's possible to write a test case that actually reaches
this code where the resulting alignment would be observably different,
but the intent was clearly to use the relative offset within the new
alloca.
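The intended computation, sketched (in the spirit of LLVM's MinAlign; the
names are illustrative):

#include <stdint.h>

/* Greatest power of two dividing both the new alloca's alignment and the
   slice's offset *within the new alloca* (the relative offset). */
static uint64_t align_for_slice(uint64_t new_alloca_align,
                                uint64_t rel_offset) {
  uint64_t combined = new_alloca_align | rel_offset;
  return combined & (~combined + 1); /* lowest set bit */
}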
llvm-svn: 202229
rather than passing them as arguments.
While I generally prefer actual arguments, in this case the readability
loss is substantial. By using members we avoid repeatedly calculating
the offsets, and once we're using members it is useful to ensure that
those names *always* refer to the original-alloca-relative new offset
for a rewritten slice.
No functionality changed. Follow-up refactoring, all toward getting the
address space patch merged.
llvm-svn: 202228
slice being rewritten.
We had the same code scattered across most of the visits. Instead,
compute the new offsets and the slice size once when we start to visit
a particular slice, and use the member variables from then on. This
reduces quite a bit of code duplication.
No functionality changed. Refactoring intended to make it easier to
apply the address space patch to SROA.
llvm-svn: 202227
checking in SROA.
The primary change is to just rely on uge for checking that the offset
is within the allocation size. This removes the explicit checks against
isNegative, which were terribly error-prone (including the reversed logic
that led to PR18615) and prevented us from supporting stack allocations
larger than half the address space... OK, so maybe the latter isn't
*common*, but it's a silly restriction to have.
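In effect (a sketch, not the actual code):

#include <stdint.h>

/* A single unsigned comparison subsumes the old isNegative checks: a
   "negative" offset wraps to a huge unsigned value and fails the same
   bounds test, and allocations larger than half the address space
   simply work. */
static int offset_out_of_bounds(uint64_t offset, uint64_t alloc_size) {
  return offset >= alloc_size; /* the 'uge' check */
}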
Also, we used to try to support a PHI node which loaded from before the
start of the allocation if any of the loaded bytes were within the
allocation. This doesn't make any sense, we have never really supported
loading or storing *before* the allocation starts. The simplified logic
just doesn't care.
We continue to allow loading past the end of the allocation in part to
support cases where there is a PHI and some loads are larger than others
and the larger ones reach past the end of the allocation. We could solve
this a different and more conservative way, but I'm still somewhat
paranoid about this.
llvm-svn: 202224
null comparison when the pointer is known to be non-null.
This catches array-to-pointer decay, function-to-pointer decay, and taking
the address of variables. It does not catch taking the address of a function,
since that has previously been used to silence a warning.
Pointer to bool conversion is under -Wbool-conversion.
Pointer to null comparison is under -Wtautological-pointer-compare, a sub-group
of -Wtautological-compare.
void foo() {
  int arr[5];
  int x;
  // warn on these conditionals
  if (foo);
  if (arr);
  if (&x);
  if (foo == 0);
  if (arr == 0);
  if (&x == 0);
  if (&foo); // no warning
}
llvm-svn: 202216
For now, just ignore them. Later, we could try looking through LazyCompoundVals,
but we at least shouldn't crash.
<rdar://problem/16153464>
llvm-svn: 202212
The warnings fall into three groups.
1) Using an absolute value function of the wrong type, for instance, using the
int absolute value function when the argument is a floating point type.
2) Using an improperly sized absolute value function, for instance, using abs
when the argument is a long long; llabs should be used instead.
In these two cases, an implicit conversion will occur which may cause
unexpected behavior. Where possible, suggest the proper absolute value
function to use, and which header to include if the function is not available.
3) Taking the absolute value of an unsigned value. In addition to this
warning, suggest removing the function call. This usually indicates a logic
error since the programmer assumed negative values would have been possible.
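Hypothetical user code triggering each of the three groups (illustrative,
not taken from the commit):

#include <stdlib.h>

void test(float f, long long ll, unsigned u) {
  (void)abs(f);  /* (1) wrong type: integer abs on a float; use fabsf */
  (void)abs(ll); /* (2) wrong size: may truncate; use llabs */
  (void)abs(u);  /* (3) abs of an unsigned value; remove the call */
}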
llvm-svn: 202211
The original text is very terse, so I've expanded on it.
Specifically, in the original text:
* "The selector value is a positive number if the exception matched a
type info" -- It wasn't clear that this meant "if the exception
matched a 'catch' clause".
* "If nothing is matched, the behavior of the program is
`undefined`_." -- It's actually implementation-defined in C++
rather than undefined, as the new text explains.
llvm-svn: 202209