This optimisation was crashing when there was a chain of more than one bitcast
instruction to replace, as a result of the changes in D27283.
Patch by James Price.
Differential Revision: https://reviews.llvm.org/D30347
llvm-svn: 296163
r289428 added a separate language kind for Objective-C, but kept many
"Language == LK_Cpp" checks untouched. This introduced a "IsCpp()"
method that returns true for both C++ and Objective-C++, and replaces
all comparisons of Language with LK_Cpp with calls to this new method.
Also add a lot more test coverage for formatting things in LK_ObjC mode,
by having FormatTest's verifyFormat() test for LK_ObjC everything that's
being tested for LK_Cpp at the moment.
Fixes PR32060 and many other things.
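A minimal sketch of the shape of this helper (simplified; the real clang-format declarations may differ):

  // Simplified sketch: Objective-C++ should take the C++ code paths, so
  // direct "Language == LK_Cpp" comparisons become calls to a helper that
  // accepts both language kinds.
  enum LanguageKind { LK_Cpp, LK_ObjC /* other kinds elided */ };

  struct FormatStyle {
    LanguageKind Language;
    bool IsCpp() const { return Language == LK_Cpp || Language == LK_ObjC; }
  };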
llvm-svn: 296160
After a series of patches on the LLVM side to get the mmaping
code up to compatibility with LLDB's needs, it is now ready
to go, which means LLDB's custom mmapping code is redundant.
So this patch deletes it all and uses LLVM's code instead.
In the future, we could take this one step further and delete
even the lldb DataBuffer base class and rely entirely on
LLVM's facilities, but this is a job for another day.
Differential Revision: https://reviews.llvm.org/D30054
llvm-svn: 296159
Splitting critical edges when one of the source edges is an indirectbr
is hard in general (because it requires changing the memory the indirectbr
reads). But if a block only has a single indirectbr predecessor (which is
the common case), we can simulate splitting that edge by splitting
the destination block, and retargeting the *direct* branches.
This is motivated by the use of computed gotos in python 2.7: PyEval_EvalFrame()
ends up using an indirect branch with ~100 successors, and passing a constant to
each of those. Since MachineSink can't break indirect critical edges on demand
(and doing this in MIR doesn't look feasible), this causes us to emit about 100
defs of registers containing constants in the predecessor block, where only one
of those constants is used in each successor. So, at each computed goto, we
needlessly spill about 100 constants to the stack. The end result is that a
clang-compiled python interpreter can be about 2.5x slower on a simple python
reduction loop than a gcc-compiled interpreter.
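For context, a toy dispatch loop in the shape described above (GNU computed-goto extension; the opcodes and bytecode are made up, not CPython's):

  #include <cstdio>

  // One indirect branch (goto *table[...]) with several successors, each of
  // which needs its own small constant.
  int run(const unsigned char *code) {
    static void *table[] = {&&op_add1, &&op_add2, &&op_halt};
    int acc = 0;
    unsigned pc = 0;
    goto *table[code[pc]];

  op_add1:
    acc += 1;                    // constant used only by this successor
    goto *table[code[++pc]];
  op_add2:
    acc += 2;
    goto *table[code[++pc]];
  op_halt:
    return acc;
  }

  int main() {
    const unsigned char prog[] = {0, 1, 1, 2}; // add1, add2, add2, halt
    std::printf("%d\n", run(prog));            // prints 5
    return 0;
  }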
Differential Revision: https://reviews.llvm.org/D29916
llvm-svn: 296149
The current pattern for extracting a range of bits is typically:
Mask.lshr(BitOffset).trunc(SubSizeInBits);
This can be particularly slow for large APInts (MaskSizeInBits > 64), as it requires allocating memory for the temporary variable.
This is another of the compile time issues identified in PR32037 (see also D30265).
This patch adds the APInt::extractBits() helper method which avoids the temporary memory allocation.
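For example (a sketch of the two patterns; extractBits() is assumed to take the number of bits and the bit position, as described):

  #include "llvm/ADT/APInt.h"
  using llvm::APInt;

  // Old pattern: lshr + trunc materializes a temporary APInt, which needs a
  // heap allocation once the mask is wider than 64 bits.
  APInt extractOld(const APInt &Mask, unsigned BitOffset, unsigned SubSizeInBits) {
    return Mask.lshr(BitOffset).trunc(SubSizeInBits);
  }

  // New pattern: extract the bits in one call, avoiding the temporary.
  APInt extractNew(const APInt &Mask, unsigned BitOffset, unsigned SubSizeInBits) {
    return Mask.extractBits(SubSizeInBits, BitOffset);
  }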
Differential Revision: https://reviews.llvm.org/D30336
llvm-svn: 296147
Made a mistake in the condition: because LIBCXXABI_BAREMETAL is always
defined, I should have been checking its contents to see if it's enabled.
Differential Revision: https://reviews.llvm.org/D30343
llvm-svn: 296146
This patch merges the existing floating-point induction variable widening code
into the integer induction variable widening code, creating a single set of
functions for both kinds of inductions. The primary motivation for doing this
is to enable vector phi node creation for floating-point induction variables.
Differential Revision: https://reviews.llvm.org/D30211
llvm-svn: 296145
Provide a 64-bit pattern to use SUBFIC for subtracting from a 16-bit immediate.
The corresponding pattern already exists for 32-bit integers.
Committing on behalf of Hiroshi Inoue.
Differential Revision: https://reviews.llvm.org/D29387
llvm-svn: 296144
Emit clrrdi (extended mnemonic for rldicr) for AND-ing with masks that
clear bits from the right-hand side.
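For reference, the mask shape in question keeps every bit except the low n bits (plain C++ illustration, not the PPC patterns themselves):

  #include <cstdint>

  // clrrdi rd, rs, n clears the rightmost (low-order) n bits of rs, so the
  // equivalent AND mask has all bits set except the low n bits.
  constexpr uint64_t clearRightMask(unsigned n) {
    return n >= 64 ? 0 : ~uint64_t(0) << n;
  }
  static_assert(clearRightMask(8) == 0xFFFFFFFFFFFFFF00ULL,
                "AND with this mask clears the low 8 bits");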
Committing on behalf of Hiroshi Inoue.
Differential Revision: https://reviews.llvm.org/D29388
llvm-svn: 296143
The current pattern for extracting a range of bits is typically:
Mask.lshr(BitOffset).trunc(SubSizeInBits);
This can be particularly slow for large APInts (MaskSizeInBits > 64), as it requires allocating memory for the temporary variable.
This is another of the compile time issues identified in PR32037 (see also D30265).
This patch adds the APInt::extractBits() helper method which avoids the temporary memory allocation.
Differential Revision: https://reviews.llvm.org/D30336
llvm-svn: 296141
in macro argument pre-expansion mode when skipping a function body
This commit fixes a token caching problem that currently occurs when clang is
skipping a function body (e.g. when looking for a code completion token) and at
the same time caching the tokens for _Pragma when lexing it in macro argument
pre-expansion mode.
When _Pragma is being lexed in macro argument pre-expansion mode, it caches the
tokens so that it can avoid interpreting the pragma immediately (as the macro
argument may not be used in the macro body), and then either backtracks over or
commits these tokens. The problem is that, when we're backtracking/committing in
such a scenario, there's already a previous backtracking position stored in
BacktrackPositions (as we're skipping the function body), and this leads to a
situation where the cached tokens from the pragma (like '(' 'string_literal'
and ')') will remain in the cached tokens array incorrectly even after they're
consumed (in the case of backtracking) or just ignored (in the case when they're
committed). Furthermore, what makes it even worse is that, because of a previous
backtracking position, the logic that decides when we should call
ExitCachingLexMode in CachingLex no longer works in this situation, and
more tokens in the macro argument get cached, to the point where the EOF token
that corresponds to the macro argument EOF is cached. This problem leads to all
sorts of issues in code completion mode, where incorrect errors get presented
and code completion completely fails to produce completion results.
rdar://28523863
Differential Revision: https://reviews.llvm.org/D28772
llvm-svn: 296140
An extra const on the StringRef argument meant that MSVC complained about it not correctly overriding OperandPredicateMatcher::emitCxxPredicateExpr (which didn't have the const).
llvm-svn: 296138
The motivation for filling out these select-of-constants cases goes back to D24480,
where we discussed removing an IR fold from add(zext) --> select. And that goes back to:
https://reviews.llvm.org/rL75531
https://reviews.llvm.org/rL159230
The idea is that we should always canonicalize patterns like this to a select-of-constants
in IR because that's the smallest IR and the best for value tracking. Note that we currently
do the opposite in some cases (like the cases in *this* patch). I.e., the proposed folds in
this patch already exist in InstCombine today:
https://github.com/llvm-mirror/llvm/blob/master/lib/Transforms/InstCombine/InstCombineSelect.cpp#L1151
As this patch shows, most targets generate better machine code for simple ext/add/not ops
rather than a select of constants. So the follow-up steps to make this less of a patchwork
of special-case folds and missing IR canonicalization:
1. Have DAGCombiner convert any select of constants into ext/add/not ops.
2. Have InstCombine canonicalize in the other direction (create more selects).
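As a small illustration of the equivalence involved (plain C++, not the DAG/IR itself): a select between two adjacent constants is the same value as the zero-extended condition plus the smaller constant.

  #include <cassert>
  #include <cstdint>

  // select(b, C+1, C) is equivalent to zext(b) + C; most targets lower the
  // ext/add form to better machine code than a select of two constants.
  int64_t selectOfConstants(bool b) { return b ? 42 : 41; }
  int64_t extAdd(bool b) { return int64_t(b) + 41; }

  int main() {
    for (bool b : {false, true})
      assert(selectOfConstants(b) == extAdd(b));
    return 0;
  }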
Differential Revision: https://reviews.llvm.org/D30180
llvm-svn: 296137
We've been having issues with using libcxxabi and libunwind for baremetal
targets because fprintf depends on I/O functions. This patch disables calls
to fprintf when building for baremetal in release mode.
Differential Revision: https://reviews.llvm.org/D30339
llvm-svn: 296136
We've been having issues with using libcxxabi and libunwind for baremetal
targets because fprintf depends on I/O functions. This patch disables calls
to fprintf when building for baremetal in release mode.
Differential Revision: https://reviews.llvm.org/D30340
llvm-svn: 296135
This time with the missing files.
Similar to PR/25526, fast-regalloc introduces spills at the end of basic
blocks. When this occurs in between an ll and sc, the store can cause the
atomic sequence to fail.
This patch fixes the issue by introducing more pseudos to represent atomic
operations and moving their lowering to after the expansion of postRA
pseudos.
This resolves PR/32020.
Thanks to James Cowgill for reporting the issue!
Reviewers: slthakur
Differential Revision: https://reviews.llvm.org/D30257
llvm-svn: 296134
Similar to PR/25526, fast-regalloc introduces spills at the end of basic
blocks. When this occurs in between an ll and sc, the store can cause the
atomic sequence to fail.
This patch fixes the issue by introducing more pseudos to represent atomic
operations and moving their lowering to after the expansion of postRA
pseudos.
This resolves PR/32020.
Thanks to James Cowgill for reporting the issue!
Reviewers: slthakur
Differential Revision: https://reviews.llvm.org/D30257
llvm-svn: 296132
Summary:
This isn't testable for AArch64 by itself so this patch also adds
support for constant immediates in the pattern and physical
register uses in the result.
The new IntOperandMatcher matches the constant in patterns such as
'(set $rd:GPR32, (G_XOR $rs:GPR32, -1))'. It's always safe to fold
immediates into an instruction, so this is the first rule that will match
across multiple BBs.
The Renderer hierarchy is responsible for adding operands to the result
instruction. Renderers can copy operands (CopyRenderer) or add physical
registers (in particular %wzr and %xzr) to the result instruction
in any order (OperandMatchers now import the operand names from
SelectionDAG to allow renderers to access any operand). This allows us to
emit the result instruction for:
%1 = G_XOR %0, -1 --> %1 = ORNWrr %wzr, %0
%1 = G_XOR -1, %0 --> %1 = ORNWrr %wzr, %0
although the latter is untested since the matcher/importer has not been
taught about commutativity yet.
Added BuildMIAction which can build new instructions and mutate them where
possible. W.r.t. the mutation aspect, MatchActions are now told the name of
an instruction they can recycle and BuildMIAction will emit mutation code
when the renderers are appropriate. They are appropriate when all operands
are rendered using CopyRenderer and the indices are the same as the matcher.
This currently assumes that all operands have at least one matcher.
Finally, this change also fixes a crash in
AArch64InstructionSelector::select() caused by an immediate operand
passing isImm() rather than isCImm(). This was uncovered by the other
changes and was detected by existing tests.
Depends on D29711
Reviewers: t.p.northover, ab, qcolombet, rovka, aditya_nandakumar, javed.absar
Reviewed By: rovka
Subscribers: aemerson, dberris, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D29712
llvm-svn: 296131
Noticed while profiling PR32037: the target shuffle ops were being stored in SmallVector<*,8> types, but the combiner could store as many as 16 ops at maximum depth (2 per depth).
llvm-svn: 296130
Compared to D30270, it's more obvious that this one can't make improvements, because an extension
always needs all of the incoming bits. There's one specific transform in SimplifyDemandedInstructionBits of converting
a sext to a zext when the sign-bit is known zero, but that is handled explicitly in visitSext() with
ComputeSignBit().
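For reference, that transform relies on sign- and zero-extension agreeing whenever the sign bit of the narrow value is known to be zero (illustrative only):

  #include <cassert>
  #include <cstdint>

  int main() {
    int8_t x = 0x55;                            // sign bit of the 8-bit value is clear
    assert(int64_t(x) == int64_t(uint8_t(x)));  // sext(x) == zext(x) here
    return 0;
  }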
Like D30270, there are no IR differences (other than instruction names) for the case in PR32037:
https://bugs.llvm.org//show_bug.cgi?id=32037
...and no regression test differences.
Zext/sext are a smaller part of the profile, but this still appears to shave off another 0.5% or so from
'opt -O2'.
Differential Revision: https://reviews.llvm.org/D30280
llvm-svn: 296129
The 'Kind' member used in RTTI for InstructionPredicateMatcher was not
initialized, but this went undetected since I always ended up with the correct value.
llvm-svn: 296126
Previously, LLVM was assuming 32-bit signed immediates; as a result, an 'and' with
a bitmask that has bit 31 set would incorrectly include bits 63-32 in the result.
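A small illustration of the sign-extension problem (plain C++, not the MIPS assembler expansion; the values are made up):

  #include <cassert>
  #include <cstdint>

  int main() {
    // A bitmask with bit 31 set, written as a 32-bit signed immediate.
    int32_t simm = INT32_MIN;                  // 0x80000000
    uint64_t operand = 0xFFFFFFFFFFFFFFFFull;
    // Sign-extending the immediate to 64 bits makes the AND incorrectly
    // include bits 63-32 in the result.
    uint64_t wrong = operand & uint64_t(int64_t(simm));
    // Zero-extending keeps only bit 31, as intended.
    uint64_t right = operand & uint64_t(uint32_t(simm));
    assert(wrong == 0xFFFFFFFF80000000ull);
    assert(right == 0x0000000080000000ull);
    return 0;
  }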
After applying this patch I can now compile all of the FreeBSD mips assembly
code with clang.
This issue also affects the nor, slt and sltu macros and I will fix those in a
separate review.
Patch By: Alexander Richardson
Commit message reformatted by sdardis.
Reviewers: atanasyan, theraven, sdardis
Differential Revision: https://reviews.llvm.org/D30298
llvm-svn: 296125
Summary:
This makes more important rules have priority over less important rules.
For example, '%a = G_ADD $b:s64, $c:s64' has priority over
'%a = G_ADD $b:s32, $c:s32'. Previously these rules were emitted in the
correct order by chance.
NFC in this patch but it is required to make the next patch work correctly.
Depends on D29710
Reviewers: t.p.northover, ab, qcolombet, aditya_nandakumar, rovka
Reviewed By: ab, rovka
Subscribers: javed.absar, dberris, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D29711
llvm-svn: 296121