- add sincos to the runtime library if the target triple environment is GNU
- added canCombineSinCosLibcall(), which checks that sincos is in the RTL and,
if the environment is GNU, that unsafe fp-math is enabled (this check is
required to preserve errno); see the sketch below
- extended sincos-opt lit test
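A minimal sketch of that check, with hypothetical names (the TargetInfo
struct and its fields are assumptions, not the actual LLVM interfaces):

  struct TargetInfo {
    bool HasSinCosLibcall;   // sincos is present in the runtime library
    bool IsGNUEnvironment;   // target triple environment is GNU
    bool UnsafeFPMath;       // unsafe fp-math is enabled
  };

  bool canCombineSinCosLibcall(const TargetInfo &TI) {
    if (!TI.HasSinCosLibcall)
      return false;
    // On GNU environments, only combine sin+cos into a single sincos call
    // when unsafe fp-math says errno does not have to be preserved.
    if (TI.IsGNUEnvironment && !TI.UnsafeFPMath)
      return false;
    return true;
  }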
Reviewed by: Hal Finkel
llvm-svn: 175283
Several function and variable names used the term 'tree' to refer
to what is actually a DAG. Correcting this mistake will, hopefully,
prevent confusion in the future.
No functionality change intended.
llvm-svn: 175278
This got lost and was untested, as the same effect is achieved by
avoiding bin packing, which is active in Google style by default.
However, moving forward, we want more control over the bin-packing
option(s), and thus this flag should work as expected.
llvm-svn: 175277
MaybeReexec() now does a tricky job to manage DYLD_INSERT_LIBRARIES in a safe way.
Because we're using library interposition, it's critical for an instrumented app
to be executed with the runtime library present in the DYLD_INSERT_LIBRARIES list.
Therefore, if it's initially missing from that list, we append the runtime library
name to the value of DYLD_INSERT_LIBRARIES and then exec() ourselves.
On the other hand, some of the apps exec()ed by our program may not want the
ASan runtime library preloaded, so we remove the runtime library from
DYLD_INSERT_LIBRARIES if it's already there.
Users may want to preload other libraries using DYLD_INSERT_LIBRARIES, so we preserve those.
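For illustration, a minimal sketch of the append-and-reexec half of this logic
(not the actual asan_mac code; the constant values and names are assumptions):

  #include <cstdlib>
  #include <cstring>
  #include <string>
  #include <unistd.h>

  static const char kDyldEnvVar[]  = "DYLD_INSERT_LIBRARIES";
  static const char kRuntimeName[] = "libclang_rt.asan_osx_dynamic.dylib";

  void MaybeReexecSketch(char *argv[]) {
    const char *Old = getenv(kDyldEnvVar);
    if (Old && strstr(Old, kRuntimeName))
      return;  // the runtime is already interposed; nothing to do
    // Append the runtime to whatever the user already put in the list
    // (preserving their libraries) and exec() ourselves so that the
    // interposition is in effect from the very start of the process.
    std::string NewValue = Old ? std::string(Old) + ":" + kRuntimeName
                               : std::string(kRuntimeName);
    setenv(kDyldEnvVar, NewValue.c_str(), /*overwrite=*/1);
    execv(argv[0], argv);
  }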
llvm-svn: 175276
It enables working with a smaller constant, which is friendlier for targets
that can compare against immediates.
It also avoids inserting a shift in favor of a trunc, which can be free on some targets.
This used to work up to LLVM 3.1, but regressed with the 3.2 release.
llvm-svn: 175270
This is essentially a stripped-down version of the ConstantIslands pass (which
always had these two functions), providing just the features necessary for
correctness.
In particular there needs to be a way to resolve the situation where a
conditional branch's destination block ends up out of range.
This issue crops up when self-hosting for AArch64.
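One standard way to resolve this is to invert the condition so that it branches
over a newly inserted unconditional branch, which has a much larger range. A
minimal sketch of the range check behind that rewrite (hypothetical types, not
the actual pass; roughly +/-1MiB is the b.cond range on AArch64):

  #include <cstdint>

  struct Branch {
    int64_t SrcOffset;   // byte offset of the branch instruction
    int64_t DstOffset;   // byte offset of the destination block
  };

  constexpr int64_t kCondBranchRange = 1 << 20;  // roughly +/-1MiB for b.cond

  // True if the conditional branch must be rewritten as
  //   b.<inverted-cond> fallthrough   ; short range, always reachable
  //   b                 far_target    ; unconditional, long range
  // fallthrough:
  bool needsFixup(const Branch &B) {
    int64_t Dist = B.DstOffset - B.SrcOffset;
    return Dist < -kCondBranchRange || Dist >= kCondBranchRange;
  }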
llvm-svn: 175269
blocks. We still don't have consensus on whether we should try to change clang
or the standard, but LLVM should work with compilers that implement the current
standard and mangle those functions.
llvm-svn: 175267
When prelink is installed in the system, prelink-ed
libraries are mapped between 0x003000000000 and 0x004000000000, thus occupying
the shadow gap, so we need to split the address space even further, like this:
|| [0x10007fff8000, 0x7fffffffffff] || HighMem ||
|| [0x02008fff7000, 0x10007fff7fff] || HighShadow ||
|| [0x004000000000, 0x02008fff6fff] || ShadowGap3 ||
|| [0x003000000000, 0x003fffffffff] || MidMem ||
|| [0x00087fff8000, 0x002fffffffff] || ShadowGap2 ||
|| [0x00067fff8000, 0x00087fff7fff] || MidShadow ||
|| [0x00008fff7000, 0x00067fff7fff] || ShadowGap ||
|| [0x00007fff8000, 0x00008fff6fff] || LowShadow ||
|| [0x000000000000, 0x00007fff7fff] || LowMem ||
This extra splitting is done only when necessary.
Also added a bit of profiling code to make sure that the
mapping code is efficient.
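For reference, a minimal sketch of how a single shadow-mapping formula yields
the ranges in the table above (the constant names are assumptions; this is not
the sanitizer runtime code):

  #include <cstdint>
  #include <cstdio>

  constexpr uint64_t kShadowScale  = 3;               // 8 app bytes per shadow byte
  constexpr uint64_t kShadowOffset = 0x00007fff8000;  // x86_64 Linux shadow offset

  constexpr uint64_t MemToShadow(uint64_t Addr) {
    return (Addr >> kShadowScale) + kShadowOffset;
  }

  int main() {
    // Prelink-ed libraries land in MidMem [0x003000000000, 0x003fffffffff];
    // their shadow is MidShadow [0x00067fff8000, 0x00087fff7fff].
    printf("0x%012llx\n", (unsigned long long)MemToShadow(0x003000000000ULL));
    printf("0x%012llx\n", (unsigned long long)MemToShadow(0x003fffffffffULL));
    return 0;
  }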
Added a lit test to simulate prelink-ed libraries.
Unfortunately, this test does not work with the binutils gold linker.
If gold is the default linker, the test silently passes.
Also replaced
__has_feature(address_sanitizer)
with
__has_feature(address_sanitizer) || defined(__SANITIZE_ADDRESS__)
in two places.
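For compilers that don't provide __has_feature at all, that combined guard is
normally paired with the usual fallback definition (shown for illustration;
this is not a quote from the patch):

  #ifndef __has_feature
  #define __has_feature(x) 0   // fallback for compilers without __has_feature
  #endif

  #if __has_feature(address_sanitizer) || defined(__SANITIZE_ADDRESS__)
  // Built with AddressSanitizer: clang sets the feature check,
  // gcc defines __SANITIZE_ADDRESS__.
  #endif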
Patch partially by Jakub Jelinek.
llvm-svn: 175263
This is almost always more readable.
Before:
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    ? aaaaaaaaaaaaaaaaaaaaaaaaaaa : aaaaaaaaaaaaaaaaaaaaaaaaaaa;
After:
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    ? aaaaaaaaaaaaaaaaaaaaaaaaaaa
    : aaaaaaaaaaaaaaaaaaaaaaaaaaa;
llvm-svn: 175262
This implements the review suggestion to simplify the AArch64 backend. If we
later discover that we *really* need the extra complexity of the
ConstantIslands pass for performance reasons it can be resurrected.
llvm-svn: 175258
In the near future, litpools will be in a different section, which means that
any access to them is at least two instructions. This makes the case for a
movz/movk pair (if the total offset fits in 32 bits) even more compelling.
llvm-svn: 175257
For some basic blocks, it is possible to generate many candidate pairs for
relatively few pairable instructions. When many (tens of thousands) of these pairs
are generated for a single instruction group, the time taken to generate and
rank the different vectorization plans can become quite large. As a result, we now
cap the number of candidate pairs within each instruction group. This is done by
closing out the group once the threshold is reached (currently set at 3000 pairs).
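A minimal, self-contained sketch of this capping heuristic (the names are
hypothetical, not the BBVectorize implementation):

  #include <cstddef>
  #include <utility>
  #include <vector>

  constexpr std::size_t kMaxCandidatePairs = 3000;

  // Generate candidate pairs for one instruction group, closing out the
  // group as soon as the cap is reached.
  template <typename Inst, typename PairableFn>
  std::vector<std::pair<Inst, Inst>>
  collectCandidatePairs(const std::vector<Inst> &Group, PairableFn IsPairable) {
    std::vector<std::pair<Inst, Inst>> Pairs;
    for (std::size_t I = 0; I < Group.size(); ++I)
      for (std::size_t J = I + 1; J < Group.size(); ++J) {
        if (!IsPairable(Group[I], Group[J]))
          continue;
        Pairs.emplace_back(Group[I], Group[J]);
        if (Pairs.size() >= kMaxCandidatePairs)
          return Pairs;  // close out the group; remaining pairs are ignored
      }
    return Pairs;
  }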
Although this will limit the overall compile-time impact, it may not be the best
way to achieve this result. It might be better, for example, to prune excessive
candidate pairs after the fact to prevent the generation of short, but highly-connected
groups. We can experiment with this in the future.
This change reduces the overall compile-time slowdown of the csa.ll test case in
PR15222 to ~5x. If 5x is still considered too large, a lower limit can be
used as the default.
This represents a functionality change, but only for very large inputs
(thus, there is no regression test).
llvm-svn: 175251
to have it not named appropriately. Also, in StopInfoMachException we aren't testing
whether it is a software breakpoint or not, just whether it is a breakpoint we set.
So don't use "software"...
llvm-svn: 175241
This just adds a very simple check that if a DerivedToBase CastExpr is
operating on a value with a known C++ object type, and that type is not the
base type specified in the AST, then the cast is invalid and we should
return UnknownVal.
In the future, perhaps we can have a checker that specifies that this is
illegal, but we still shouldn't assert even if the user turns that checker
off.
PR14872
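A hypothetical example of the kind of code involved (not taken from PR14872):
the pointee's known type is B, yet the AST performs a DerivedToBase cast whose
base, A, is unrelated to B.

  struct A { int x; };
  struct B { int y; };
  struct D : A {};

  int f() {
    B b;
    D *d = reinterpret_cast<D *>(&b);  // lies about the pointee's type
    A *a = d;                          // implicit DerivedToBase cast to A
    return a->x;                       // the known type (B) has no base A
  }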
llvm-svn: 175239
not matter, but makes it more gcc-compatible, which avoids possible subtle
problems. Also, turned back on a disabled check in helloworld.ll.
llvm-svn: 175237
...after a host of optimizations related to the use of LazyCompoundVals
(our implementation of aggregate binds).
Originally applied in r173951.
Reverted in r174069 because it was causing hangs.
Re-applied in r174212.
Reverted in r174265 because it was /still/ causing hangs.
If this needs to be reverted again, it will be punted far into the future.
llvm-svn: 175234
Previously, we were scanning the current store. Now, we properly scan the
store that the LazyCompoundVal came from, which may have very different
live symbols.
llvm-svn: 175232
Previously, whenever we had a LazyCompoundVal, we crawled through the
entire store snapshot looking for bindings within the LCV's region. Now, we
just ask for the subregion bindings of the lazy region and only visit those.
This is an optimization (so no test case), but it may allow us to clean up
more dead bindings than we did previously.
llvm-svn: 175230
This is going to be used in the next commit.
While I'm here, tighten up assumptions about symbolic offset
BindingKeys, and make offset calculation explicitly handle all
MemRegion kinds.
No functionality change.
llvm-svn: 175228
The SEL data formatter was working hard to ensure that pointers-to-selectors
could be formatted by the same block of code. In that effort, we were taking
the address of a SEL. This operation fails when the SEL lives in a register,
and was causing problems.
The formatter has been fixed to work correctly without assuming &selector will be a valid object.
llvm-svn: 175227
- stop ignoring the error codes in the 'error' variable
- removed out-of-bounds accesses with read-only array fields such as:
self.assertTrue(data2.uint8[6] == 0, 'binary 0 terminator')
Since SBData wraps a (6-character) Python string literal, trying to read the
null terminator raises an exception. Instead, I replaced the out-of-bounds
read with a length check.
Other out-of-bounds reads (via accessor functions like SBData.GetUnsignedInt8)
don't throw and are OK. I just added asserts that errors are set for these
negative cases.
llvm-svn: 175223