When using the system C++ library, assume we have a working C++ compiler and
try to compile a complete C++ program. When using the in-tree C++ library,
only check the C compiler, since the C++ library likely won't have been
built yet at the time the check runs.
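Roughly, the "complete C++ program" case boils down to verifying that a
minimal translation unit like the following compiles and links (an
illustrative sketch, not the actual source used by the check):

  #include <iostream>
  #include <string>

  // If this builds and links, both the C++ compiler and the C++
  // standard library headers/runtime are usable.
  int main() {
    std::string msg = "C++ toolchain is working";
    std::cout << msg << '\n';
    return 0;
  }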
Differential Revision: https://reviews.llvm.org/D47169
llvm-svn: 332924
We've seen some cases on macOS where you go to instruction single
step (over a breakpoint), and the single step returns but the instruction
hasn't been executed (and the pc hasn't moved). The ThreadPlanStepOverBreakpoint
used to handle this case by accident, but the patches to handle two adjacent
breakpoints broke that accident.
This patch fixes the logic of ExplainsStop to explicitly handle the case where
the pc didn't move. It also adds a WillPop that re-enables the breakpoint we
were stepping over. We never want an unexpected path through the plan to
fool us into not doing that.
I have no idea how to make this bug happen on demand; it occurs only
intermittently in real life. We really need a full MockProcess plugin before
we can start to write tests for this sort of system hiccup.
<rdar://problem/38505726>
llvm-svn: 332922
This is the FP sibling of D43141 with the corresponding IR change in rL327212.
We can't propagate undef here because if a variable operand is a NaN, these
binops must propagate NaN. Neither global nor node-level fast-math makes a
difference. If we have 'nnan', I think later folds can turn the NaN into undef.
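As a concrete illustration of that constraint (a standalone sketch compiled
with default FP semantics, not code from the patch):

  #include <cassert>
  #include <cmath>

  int main() {
    // Stand-in for the unknown variable operand of the FP binop.
    float x = std::nanf("");
    // Whatever the other operand is, the result must be NaN, so the
    // binop cannot be folded to "any value" the way undef would allow.
    float y = x + 1.0f;
    assert(std::isnan(y));
    return 0;
  }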
The tests in X86/fp-undef.ll are meant to be the definitive verification for
these folds - everything reduces identically now.
The other test changes are collateral damage. They may need to be altered to
preserve their intent.
Differential Revision: https://reviews.llvm.org/D47026
llvm-svn: 332920
Apparently the compile-time problem was caused by the fact that not
all compilers / STL implementations can automatically convert
std::unique_ptr<Derived> to std::unique_ptr<Base>. Fixed (hopefully)
by making sure it's a std::unique_ptr<Derived>&& (rvalue ref) to
std::unique_ptr<Base> conversion instead.
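For reference, a minimal standalone sketch of the conversion being relied on
(the types and function here are made up for illustration, not the code being
fixed):

  #include <memory>
  #include <utility>

  struct Base { virtual ~Base() = default; };
  struct Derived : Base {};

  // Hypothetical consumer that takes ownership as a Base.
  void take(std::unique_ptr<Base> p) { (void)p; }

  int main() {
    std::unique_ptr<Derived> d = std::make_unique<Derived>();
    // std::move makes this an rvalue, so the
    // unique_ptr<Derived>&& -> unique_ptr<Base> converting constructor
    // is used, which every implementation supports.
    take(std::move(d));
    return 0;
  }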
llvm-svn: 332917
Summary: We use uint8_t in this header, so we need to include cstdint.
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D47164
llvm-svn: 332913
r332654 tried to fix an unused function warning with
a void cast. This approach worked for clang and gcc
but not for MSVC. This commit replaces the void cast
with the LLVM_ATTRIBUTE_USED approach.
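In sketch form (the function name below is made up; only the attribute macro
is real):

  #include "llvm/Support/Compiler.h"

  // Marking the definition as used keeps compilers from warning about
  // an otherwise-unreferenced static function; the old void-cast
  // workaround only quieted clang and gcc, not MSVC.
  LLVM_ATTRIBUTE_USED static void debugDump() {
    // Intended to be invoked only from a debugger.
  }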
llvm-svn: 332910
This patch starts a series of patches that decrease time spent by
GlobalISel in its InstructionSelect pass by roughly 60% for -O0 builds
for large inputs as measured on sqlite3-amalgamation
(http://sqlite.org/download.html) targeting AArch64.
The performance improvements are achieved solely by reducing the
number of matching GIM_* opcodes executed by the MatchTable's
interpreter during selection, by approximately a factor of 30. That also
brings the contribution of this particular part of the selection process
to the overall runtime of the InstructionSelect pass down from roughly
60-70% to 5-7%, making further improvements in this particular
direction not very profitable.
The improvements described above are expected for any target that
doesn't have many complex patterns. The targets that do should
still strictly benefit from the changes, but by how much exactly is hard to
estimate beforehand. It's also likely that such targets WILL benefit
from further improvements to the MatchTable, most likely the ones that
bring it closer to a perfect decision tree.
This commit specifically is a rather large, mostly NFC commit that does
the necessary preparation work and refactoring; a series of small patches,
each introducing a specific optimization, will follow shortly after.
This commit is expected to cause a small compile-time
regression (around 2.5% of InstructionSelect pass time), which should
be fixed by the next commit of the series.
All of the planned commits share the same Phabricator review.
Reviewers: qcolombet, dsanders, bogner, aemerson, javed.absar
Reviewed By: qcolombet
Subscribers: rovka, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D44700
llvm-svn: 332907
Summary:
As pointed out in D46528, we erroneously transform cases like `xor X, -1`,
even though we use said function.
It's because the `-1` is actually a bitcast there.
So I think we can just look through it in the function.
Differential Revision: https://reviews.llvm.org/D47156
llvm-svn: 332905
Summary:
This **appears** to be the last missing piece for the masked merge pattern handling in the backend.
This is [[ https://bugs.llvm.org/show_bug.cgi?id=37104 | PR37104 ]].
[[ https://bugs.llvm.org/show_bug.cgi?id=6773 | PR6773 ]] will introduce an IR canonicalization that is likely bad for the end assembly.
Previously (see `@out`), `andps`+`andnps` / `bsl` would be generated.
After that canonicalization (see `@in`), they would no longer be, so we need to make sure that they still are generated.
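For context, a C++ sketch of the masked-merge idiom and one equivalent shape
such a canonicalization can produce (illustrative only; not lifted from the
tests):

  #include <cstdint>

  // Pick the bits of x where the mask m is set and the bits of y where
  // it is clear. In its vector form, this is the shape that lowers to
  // the andps+andnps / bsl sequences mentioned above.
  uint32_t masked_merge(uint32_t x, uint32_t y, uint32_t m) {
    return (x & m) | (y & ~m);
  }

  // An equivalent rewrite using xor; both functions return the same
  // value for all inputs.
  uint32_t masked_merge_xor(uint32_t x, uint32_t y, uint32_t m) {
    return ((x ^ y) & m) ^ y;
  }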
Differential Revision: https://reviews.llvm.org/D46528
llvm-svn: 332904
Also tightens the behavior of ExecutionSession::failQuery. Queries can usually
only be failed by marking a symbol as failed-to-materialize, but
ExecutionSession::failQuery provides a second route, and both routes may be
executed from different threads. In the case that a query has already been
failed due to a materialization error, ExecutionSession::failQuery will
direct the error to ExecutionSession::reportError instead.
llvm-svn: 332898
The lookup function provides blocking symbol resolution for JIT clients (not
layers themselves) so it does not need to track symbol dependencies via a
MaterializationResponsibility.
llvm-svn: 332897
SimplifyDemandedBits can remove bits from the masks for the shift amounts we need to see to detect rotates.
This patch uses zeroes from computeKnownBits to fill in some of these mask bits to make the match work.
As currently written, this calls computeKnownBits even when the mask hasn't been simplified, because that made the code simpler. If we're worried about compile-time performance, we can improve this.
I know we're talking about adding a rotate intrinsic, but hopefully we can go ahead with this change and just make sure the rotate intrinsic also handles it.
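For reference, the kind of rotate idiom involved, with masked shift amounts
(an illustrative sketch, not a test from the patch):

  #include <cstdint>

  // Rotate left by s. The "& 31" masks keep both shift amounts below
  // the bit width (so the shifts are well defined even for s == 0 or
  // s >= 32). SimplifyDemandedBits can narrow those masks, and the
  // patch uses known-zero bits to keep recognizing the pattern as a
  // rotate.
  uint32_t rotl32(uint32_t x, uint32_t s) {
    return (x << (s & 31)) | (x >> ((32 - s) & 31));
  }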
Differential Revision: https://reviews.llvm.org/D47116
llvm-svn: 332895
This code should really do exactly the same thing for 32-bit x86 and
64-bit small code models, with the exception that RIP-relative
addressing can't use base and index registers.
llvm-svn: 332893
Because the intrinsics in the headers are implemented as macros, we can't just use a select builtin and a pternlog builtin: that would require one of the macro arguments to be used twice, and depending on what was passed to the macro, we could expand an expression twice, leading to weird behavior. We could maybe declare our local variable in the macro, but then we would need to worry about name collisions.
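A toy illustration of the double-expansion hazard (made-up macro, not the real
intrinsic headers):

  #include <cassert>
  #include <cstdint>

  // (M) appears twice in the expansion, so a mask expression with side
  // effects is evaluated twice.
  #define BLEND_MASK(M, A, B) (((A) & (M)) | ((B) & ~(M)))

  int main() {
    int calls = 0;
    auto mask = [&]() -> uint32_t { ++calls; return 0xF0F0F0F0u; };
    uint32_t r = BLEND_MASK(mask(), 0xAAAAAAAAu, 0x55555555u);
    (void)r;
    assert(calls == 2);  // the side effect ran twice
    return 0;
  }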
To avoid that just generate IR directly in CGBuiltin.cpp.
Differential Revision: https://reviews.llvm.org/D47125
llvm-svn: 332891
This removes 6 intrinsics since we no longer need separate mask and maskz intrinsics.
Differential Revision: https://reviews.llvm.org/D47124
llvm-svn: 332890
On RTEMS, system and user code all live in a single binary and address
space. There is no clean separation, and instrumented code may
execute before the ASan run-time is initialized (or after it has been
destroyed).
Currently, GetCurrentThread() may crash if it's called before the ASan
run-time is initialized. Make it return nullptr instead.
Similarly, fix __asan_handle_no_return so that it gives up rather than
trying something that may crash.
Differential Revision: https://reviews.llvm.org/D46459
llvm-svn: 332888
In all cases, we're pulling the cast above the select.
That's not a good canonicalization if we're creating
a select whose operand size then mismatches that of its
condition.
llvm-svn: 332883
I believe this is safe assuming the default FP environment. The conversion might be inexact, but it can never overflow the FP type, so this shouldn't be undefined behavior for the uitofp/sitofp instructions.
We already do something similar for scalar conversions.
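A quick standalone check of the "can't overflow" reasoning (illustrative
numbers, not part of the patch):

  #include <cassert>
  #include <cmath>
  #include <cstdint>

  int main() {
    // Even the largest 64-bit integer (~1.8e19) is far below FLT_MAX
    // (~3.4e38), so an int -> FP conversion may round but the result
    // is always finite.
    uint64_t big = UINT64_MAX;
    float f = static_cast<float>(big);     // inexact, but in range
    double d = static_cast<double>(big);   // likewise
    assert(std::isfinite(f) && std::isfinite(d));
    return 0;
  }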
Differential Revision: https://reviews.llvm.org/D46863
llvm-svn: 332882
Summary:
The correct spelling is "DWARF", the debugging format, not "DWARG".
The typo is in a (not executed by lit) comment in a test file, so
fixing it does not result in any functional change.
Test Plan: check-llvm, just in case
llvm-svn: 332878
Rather than relying on the user to do the address calculation in
DW_AT_location, we should just dump the absolute address.
rdar://problem/38513870
Differential revision: https://reviews.llvm.org/D47152
llvm-svn: 332873