This change fixes two issues in the fast unwinder from r217079:
* A crash if a frame pointer points below the current stack head but
  inside the current thread's stack limits. That memory may be
  unmapped. A check for this was lost in r217079.
* The last valid stack frame (the first one with an invalid next
  frame pointer) is always interpreted as a GCC-layout frame. This
  results in a garbled last PC in the (expected) case where the last
  frame has LLVM layout.
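A minimal sketch of the kind of bounds check the first fix restores (names invented; this is not the actual sanitizer_common code):

  #include <cstdint>

  // When walking frame pointers, require each next frame to sit strictly
  // above the previous one and inside the thread's stack: addresses below
  // the current stack head may be unmapped even within the stack limits.
  bool nextFrameLooksValid(uintptr_t next_fp, uintptr_t cur_fp,
                           uintptr_t stack_top) {
    return next_fp > cur_fp && next_fp < stack_top;
  }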
llvm-svn: 219683
Updated the URL to point at information on the problem, as well as to build
the case for ARM. This appears to be a wider problem, not ARM- or PPC-specific.
llvm-svn: 219680
Allows specifying the unwinder to use for CHECK failures. The previous
behaviour was to use the "fatal" unwinder.
Since compiler-rt is built without frame pointers, only the slow unwinder
really makes sense here, and it is the default.
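If, as the description suggests, this is exposed as the fast_unwind_on_check common sanitizer flag (worth verifying against the runtime sources), opting back into the fast unwinder would look like:

  ASAN_OPTIONS=fast_unwind_on_check=1 ./a.out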
llvm-svn: 219677
The current handling (manual execution of atexit callbacks) is overly
complex and leads to constant problems with the mutual ordering of callbacks.
Instead, simply wrap the callbacks in our own wrapper to establish
the necessary synchronization.
Fixes issue https://code.google.com/p/thread-sanitizer/issues/detail?id=80
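A minimal sketch of the wrapping idea, with invented names (the real interceptor lives in the tsan runtime and uses its internal synchronization primitives):

  #include <cstdlib>
  #include <vector>

  typedef void (*atexit_cb)(void);

  static std::vector<atexit_cb> wrapped_callbacks;  // stand-in registry

  // atexit runs callbacks in reverse registration order, so popping from
  // the back pairs each wrapper invocation with the matching user callback.
  static void at_exit_wrapper() {
    atexit_cb cb = wrapped_callbacks.back();
    wrapped_callbacks.pop_back();
    // runtime "acquire" on a per-callback sync object would go here
    cb();
    // runtime "release" would go here
  }

  // Intercepted atexit: remember the real callback, register the wrapper.
  int wrapped_atexit(atexit_cb cb) {
    wrapped_callbacks.push_back(cb);
    return std::atexit(at_exit_wrapper);
  }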
llvm-svn: 219675
and TargetRegisterInfo in the peephole optimizer. This
makes it easier to grab subtarget-dependent variables off
of the MachineFunction rather than the TargetMachine.
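The pattern amounts to something like the following sketch, assuming the MachineFunction::getSubtarget() accessor (header paths are from that era; names here are illustrative, not the actual patch):

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/Target/TargetSubtargetInfo.h"  // llvm/CodeGen/ in newer trees

  using namespace llvm;

  // Pull subtarget-dependent objects from the MachineFunction rather than
  // reaching through the TargetMachine.
  static void initFromFunction(const MachineFunction &MF,
                               const TargetInstrInfo *&TII,
                               const TargetRegisterInfo *&TRI) {
    const TargetSubtargetInfo &ST = MF.getSubtarget();
    TII = ST.getInstrInfo();
    TRI = ST.getRegisterInfo();
  }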
llvm-svn: 219669
E.g., currently we'll generate the following instructions if the immediate is too wide:
MOV X0, WideImmediate
ADD X1, BaseReg, X0
LDR X2, [X1, 0]
Using the [Base+XReg] addressing mode can save one ADD, as follows:
MOV X0, WideImmediate
LDR X2, [BaseReg, X0]
Differential Revision: http://reviews.llvm.org/D5477
llvm-svn: 219665
I'd misspelled "Bitcode/BitCodes.h" before, and tested on a case-insensitive
filesystem.
This reverts commit r219649, effectively re-applying r219647 and
r219648.
llvm-svn: 219664
based build since the subdirectories all appear to
have no inter-directory dependencies. This speeds
up parallel makefile builds greatly.
llvm-svn: 219660
In cases of nested module builds, or when you care how long module builds take,
this information was not previously readily available or obvious.
llvm-svn: 219658
This is the same optimization as r219233, with modifications to support PHIs
with multiple incoming edges from the same block, and a test to check that
this condition is handled.
llvm-svn: 219656
ARM code has two instruction encodings, "thumb" and "arm". When branching from
one encoding to the other, you need to use an instruction that switches the
instruction mode. Usually the transition happens only at call sites, and the
linker can transform a BL instruction into a BLX (or vice versa). But if the
compiler did a tail-call optimization and a function ends with a branch (not
branch and link), there is no pc-rel BX instruction.
The ShimPass looks for pc-rel B instructions that will need to switch mode.
For those cases it synthesizes a shim which performs the transition, then
modifies the original atom so that its B instruction targets the shim atom.
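A self-contained model of that rewrite, with Atom and Branch as invented stand-ins for lld's real types (illustration only, not the actual pass):

  #include <memory>
  #include <vector>

  enum class Mode { Arm, Thumb };

  struct Atom {
    Mode mode;
    Atom *shimTarget = nullptr;  // for shims: the atom they jump to
  };

  struct Branch {  // a pc-rel B instruction in `from` pointing at `target`
    Atom *from;
    Atom *target;
  };

  // For every pc-rel B that crosses instruction-set modes, synthesize a shim
  // in the caller's mode whose body performs the switch (e.g. via BX), and
  // retarget the branch at the shim.
  void addShims(std::vector<std::unique_ptr<Atom>> &atoms,
                std::vector<Branch> &branches) {
    for (Branch &b : branches) {
      if (b.from->mode == b.target->mode)
        continue;
      auto shim = std::make_unique<Atom>();
      shim->mode = b.from->mode;    // reachable by a plain B from the caller
      shim->shimTarget = b.target;  // its code ends in a mode-switching jump
      b.target = shim.get();
      atoms.push_back(std::move(shim));
    }
  }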
llvm-svn: 219655
after all the commands have been executed, unless one of the commands was an
execution-control command that stopped because of a signal or exception.
Also adds a variant of SBCommandInterpreter::HandleCommand that takes an
SBExecutionContext. That way you can run an lldb command targeted at a
particular target, thread, or process without having to select it first.
Also exposes CommandInterpreter::HandleCommandsFromFile to the SBCommandInterpreter API, since that
seemed generally useful.
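A hedged sketch of using the new HandleCommand variant; the parameter list is inferred from the description above and should be checked against SBCommandInterpreter.h:

  #include "lldb/API/SBCommandInterpreter.h"
  #include "lldb/API/SBCommandReturnObject.h"
  #include "lldb/API/SBDebugger.h"
  #include "lldb/API/SBExecutionContext.h"
  #include "lldb/API/SBTarget.h"

  // Run a command against a specific target without selecting it first.
  void runOnTarget(lldb::SBDebugger &debugger, lldb::SBTarget &target) {
    lldb::SBExecutionContext exe_ctx(target);
    lldb::SBCommandReturnObject result;
    debugger.GetCommandInterpreter().HandleCommand("breakpoint list", exe_ctx,
                                                   result,
                                                   /*add_to_history=*/false);
  }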
llvm-svn: 219654
The bots can't seem to find an include file. Reverting for now and
I'll look into it in a bit.
This reverts commits r219647 and r219648.
llvm-svn: 219649
We currently read serialized diagnostics directly in the C API, which
makes it difficult to reuse this logic elsewhere. This extracts the
core of the serialized diagnostic parsing logic into a base class that
can be subclassed using a visitor pattern.
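The shape of that refactoring, modeled with invented types (see clang's SerializedDiagnosticReader for the real record kinds and hooks):

  #include <string>
  #include <system_error>
  #include <vector>

  // Base class owns the low-level parsing loop and calls virtual hooks;
  // clients subclass it and override only the records they care about.
  class DiagnosticReaderBase {
  public:
    virtual ~DiagnosticReaderBase() = default;

    std::error_code readAll(const std::vector<std::string> &records) {
      for (const std::string &msg : records)
        if (std::error_code ec = visitDiagnostic(msg))
          return ec;
      return {};
    }

  protected:
    virtual std::error_code visitDiagnostic(const std::string &message) = 0;
  };

  // Example visitor: count diagnostics without touching the parsing logic.
  class DiagCounter : public DiagnosticReaderBase {
  public:
    unsigned NumDiags = 0;

  protected:
    std::error_code visitDiagnostic(const std::string &) override {
      ++NumDiags;
      return {};
    }
  };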
llvm-svn: 219647
Rather than define our own standards, we adopt a set of best practices that
are already in use by the Go community.
Differential Revision: http://reviews.llvm.org/D5761
llvm-svn: 219646
the IR going into it and to clean up the IR produced by the vectorizers.
Note that these are *off by default* right now while folks collect data
on whether the performance tradeoff is reasonable.
In a build of the 'opt' binary, I see about a 2% compile-time regression
due to this change on average. This is in my mind essentially the worst
expected case: very little of the opt binary is going to *benefit* from
these extra passes.
I've seen several benchmarks improve in performance by small amounts due
to running these passes, and there are certain (rare) cases where these
passes make a huge difference by either enabling the vectorizer at all
or by hoisting runtime checks out of the outer loop. My primary
motivation is to prevent people from seeing runtime check overhead in
benchmarks where the existing passes and optimizers would be able to
eliminate that.
I've chosen the sequence of passes based on the kinds of things that
seem likely to be relevant for the code at each stage: rotating loops for
the vectorizer, finding correlated values, loop invariants, and
unswitching opportunities from any runtime checks, and cleaning up
commonalities exposed by the SLP vectorizer.
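Concretely, that sequence might be expressed with the legacy pass-manager helpers of the era roughly as below; this is a sketch of the shape, not the actual patch, and the precise pass list and placement may differ:

  #include "llvm/IR/LegacyPassManager.h"
  #include "llvm/Transforms/Scalar.h"

  using namespace llvm;

  // Cleanup around the loop/SLP vectorizers: rotate loops, propagate
  // correlated values, hoist invariants and runtime checks, unswitch on
  // them, and CSE away commonalities the SLP vectorizer exposes.
  void addPostVectorizeCleanup(legacy::PassManagerBase &PM) {
    PM.add(createLoopRotatePass());
    PM.add(createCorrelatedValuePropagationPass());
    PM.add(createLICMPass());
    PM.add(createLoopUnswitchPass());
    PM.add(createEarlyCSEPass());
  }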
I'll be pinging existing threads where some of these issues have come up
and will start new threads to get folks to benchmark and collect data on
whether this is the right tradeoff or we should do something else.
llvm-svn: 219644
This change adds a UBSan check to upcasts. Namely, when we
perform a derived-to-base conversion, we:
1) check that the pointer-to-derived has suitable alignment
   and underlying storage, if this pointer is non-null;
2) if the vptr sanitizer is enabled and we perform a conversion to a
   virtual base, check that the pointer-to-derived has a matching vptr.
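An example of the kind of code the new check catches; the -fsanitize=undefined spelling below is the usual driver flag, assumed rather than taken from this change:

  // Compile with e.g. clang++ -fsanitize=undefined upcast.cc
  struct Base { int x; };
  struct Derived : Base { int y; };

  int main() {
    alignas(Derived) char buf[sizeof(Derived)];
    // Deliberately misaligned pointer-to-derived with too little storage:
    Derived *d = reinterpret_cast<Derived *>(buf + 1);
    Base *b = d;  // derived-to-base conversion: alignment/size checked here
    (void)b;
    return 0;
  }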
llvm-svn: 219642
This goes with the earlier commit that removed the static destructor from ManagedStatic.cpp by controlling the allocation and deallocation of the mutex.
Summary: This is part of the ongoing work to remove static constructors and destructors.
Reviewers: chandlerc, rnk
Reviewed By: rnk
Subscribers: rnk, llvm-commits
Differential Revision: http://reviews.llvm.org/D5473
llvm-svn: 219640
We assumed that negation operations of the form (0 - %Z) resulted in a
negative number. This isn't true if %Z was originally negative.
Substituting the negative number into the remainder operation may result
in undefined behavior because the dividend might be INT_MIN: if %Z is
INT_MIN, then 0 - %Z wraps back around to INT_MIN.
This fixes PR21256.
llvm-svn: 219639
This patch adds a new llvm_call_once function, which is used by the ManagedStatic implementation to safely initialize a global while avoiding static construction and destruction.
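The idea, modeled here with std::call_once as a stand-in (the actual llvm_call_once signature isn't shown in this message, so this only illustrates the pattern):

  #include <mutex>

  static std::once_flag InitFlag;  // constexpr-constructible: no static ctor
  static int *Global;              // stands in for a ManagedStatic's storage

  static void initGlobal() { Global = new int(42); }

  // First use initializes the global exactly once, thread-safely, without
  // a static constructor at startup or a static destructor at exit.
  int &getGlobal() {
    std::call_once(InitFlag, initGlobal);
    return *Global;
  }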
llvm-svn: 219638
We have a transform that changes:
(x lshr C1) udiv C2
into:
x udiv (C2 << C1)
However, it is unsafe to do so if C2 << C1 discards any of C2's bits (e.g.,
for i8 values, C2 = 32 and C1 = 3 give C2 << C1 = 256, which truncates to 0).
This fixes PR21255.
llvm-svn: 219634
Summary:
Make Mips fast-isel follow the form of the AArch64 fast-isel where practical.
This makes it easier for people to review the code, to borrow similar code,
and to see how to eventually move much of this target code for fast-isel
into target-independent code.
These are just cosmetic changes; there should be no functional difference.
Test Plan:
make check
test-suite for 4 flavors: mips32 r1/r2, -O0/-O2
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: aemerson, llvm-commits, rfuhler
Differential Revision: http://reviews.llvm.org/D5595
llvm-svn: 219633