The current internal option -static-func-full-module-prefix keeps the full
directory path in the profile counter names for static functions. The default
of this option is false, which strips the directory names from the source
filename. This is problematic:
(1) it creates linker errors for profile-generation compilation, exposed in
our internal benchmarks. We are seeing messages like
"warning: relocation refers to discarded section".
This is due to name conflicts introduced by the stripping.
(2) the stripping only applies to getPGOFuncName.
The current ThinLTO module importing for indirect calls assumes that
the source directory name is not stripped, so the current default value
for this option can prevent some inter-module
indirect-call promotions.
This patch changes the default value of -static-func-full-module-prefix to true.
The second part of the patch adds an alternative implementation under the
internal option -static-func-strip-dirname-prefix=<value>.
This option specifies the number of directory levels to be stripped from the
source filename. Using a large value as the parameter has the same effect as
-static-func-full-module-prefix.
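To illustrate (paths assumed; getPGOFuncName uses a "<filename>:<function>"
scheme for static functions, and the strip-dirname example assumes one
leading directory level is removed):

  full module prefix (new default):     "lib/util/foo.c:bar"
  fully stripped (old default):         "foo.c:bar"
  -static-func-strip-dirname-prefix=1:  "util/foo.c:bar"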
Differential Revision: http://reviews.llvm.org/D29512
llvm-svn: 296206
With the "wasm32-unknown-unknown-wasm" triple, this allows writing out
simple wasm object files, and is another step in a larger series toward
migrating from ELF to general wasm object support. Note that this code
and the binary format itself are still experimental.
llvm-svn: 296190
Summary:
A number of tools and common workflows include putting a build directory inside the source checkout under the folder "build". Adding this to .gitignore seems useful.
As an example, the CMake Tools plugin for VSCode does this.
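For reference, the ignore entry amounts to a single line (the exact form in
the patch may differ):

  build/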
Reviewers: chandlerc, echristo, zturner
Reviewed By: zturner
Subscribers: MatzeB, mehdi_amini, llvm-commits, jgosnell
Differential Revision: https://reviews.llvm.org/D30346
llvm-svn: 296188
When using -rtlib=libgcc, the fallback implementation of __atomic_*
builtins is provided via libatomic (included in GCC). However, neither
GCC itself nor clang link libatomic implicitly, and it seems that GCC
upstream expects projects to link it explicitly as necessary.
Since compiler-rt provides __atomic_* builtins directly in the main
library, check if they are provided by the default libraries first.
If they are not, check if -latomic is available to provide them
and add explicit -latomic for tests in this case.
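As a rough sketch, the probe amounts to compiling and linking something like
the following (an assumed test source, not quoted from the patch), first
against the default libraries and then, on failure, with -latomic:

  #include <stdint.h>
  int main() {
    int64_t X = 0; // 64-bit atomics commonly need libatomic on i386
    __atomic_store_n(&X, (int64_t)1, __ATOMIC_SEQ_CST);
    return (int)__atomic_load_n(&X, __ATOMIC_SEQ_CST);
  }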
This fixes unresolved __atomic_load() references when running openmp
tests on i386 with the libgcc backend.
Differential Revision: https://reviews.llvm.org/D30083
llvm-svn: 296183
This reverts commit r296009. It broke one out-of-tree target and also
does not account for all partial lines added or removed when calculating
PressureDiff.
llvm-svn: 296182
If there's some reason not to do this, feel free to revert and/or fix, but
for the cases I'm looking at, the script appears to do fine for these targets.
llvm-svn: 296181
All G_CONSTANTs created by the MachineIRBuilder have an operand of type CImm
(i.e. a ConstantInt), so that's what the selector needs to look for.
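A minimal sketch of the corresponding check inside the selector's
select(MachineInstr &I) (illustrative, not the actual patch):

  if (I.getOpcode() == TargetOpcode::G_CONSTANT) {
    const MachineOperand &Op = I.getOperand(1);
    if (Op.isCImm()) {                       // a ConstantInt operand
      const ConstantInt *CI = Op.getCImm();
      uint64_t Imm = CI->getZExtValue();
      // ... materialize Imm into the destination register ...
    }
  }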
llvm-svn: 296176
Looks like %T isn't per-test but per-test-directory, and
the rm was deleting temp files written by other tests in
test/Format. Limit the rm's scope a bit.
llvm-svn: 296171
When we construct addressing modes, we use isNoopAddrSpaceCast to ignore
addrspacecast instructions. Make sure we insert the correct addrspacecast
when we reconstruct the addressing mode.
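A hedged sketch of the shape of the fix (ExpectedPtrTy and MemoryInst are
illustrative names, not the actual code): when the sunk base pointer's type
no longer matches the address space the memory access expects, re-insert
the cast explicitly:

  Value *Base = AddrMode.BaseReg;
  if (Base->getType() != ExpectedPtrTy) {
    IRBuilder<> Builder(MemoryInst);
    Base = Builder.CreateAddrSpaceCast(Base, ExpectedPtrTy, "sunkaddr");
  }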
Differential Revision: https://reviews.llvm.org/D30114
llvm-svn: 296167
This optimisation was crashing when there was a chain of more than one bitcast
instruction to replace, as a result of the changes in D27283.
Patch by James Price.
Differential Revision: https://reviews.llvm.org/D30347
llvm-svn: 296163
r289428 added a separate language kind for Objective-C, but kept many
"Language == LK_Cpp" checks untouched. This patch introduces an "IsCpp()"
method that returns true for both C++ and Objective-C++, and replaces
all comparisons of Language with LK_Cpp with calls to this new method.
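Roughly, the new helper amounts to the following (a sketch based on this
description; in clang-format, LK_ObjC covers both Objective-C and
Objective-C++):

  bool IsCpp() const { return Language == LK_Cpp || Language == LK_ObjC; }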
Also add a lot more test coverage for formatting things in LK_ObjC mode,
by having FormatTest's verifyFormat() test for LK_ObjC everything that's
being tested for LK_Cpp at the moment.
Fixes PR32060 and many other things.
llvm-svn: 296160
After a series of patches on the LLVM side to get the mmapping
code up to compatibility with LLDB's needs, it is now ready
to go, which means LLDB's custom mmapping code is redundant.
So this patch deletes it all and uses LLVM's code instead.
In the future, we could take this one step further and delete
even the lldb DataBuffer base class and rely entirely on
LLVM's facilities, but this is a job for another day.
Differential Revision: https://reviews.llvm.org/D30054
llvm-svn: 296159
Splitting critical edges when one of the source edges is an indirectbr
is hard in general (because it requires changing the memory the indirectbr
reads). But if a block only has a single indirectbr predecessor (which is
the common case), we can simulate splitting that edge by splitting
the destination block, and retargeting the *direct* branches.
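A rough sketch of the block-splitting idea, assuming Target has exactly one
indirectbr predecessor (the helper below is illustrative, not the patch):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/CFG.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  static void splitAroundIndirectBrPred(BasicBlock *Target) {
    // Peel the body off into a new block; Target keeps its predecessors
    // and now ends in an unconditional branch to Body.
    BasicBlock *Body =
        Target->splitBasicBlock(Target->getFirstNonPHI(), "ibr.split");
    // Retarget every *direct* branch at Body, leaving Target as a
    // dedicated landing block for the indirectbr edge.
    SmallVector<BasicBlock *, 8> Preds(pred_begin(Target),
                                       pred_end(Target));
    for (BasicBlock *Pred : Preds) {
      auto *BI = dyn_cast<BranchInst>(Pred->getTerminator());
      if (!BI)
        continue;
      for (unsigned I = 0, E = BI->getNumSuccessors(); I != E; ++I)
        if (BI->getSuccessor(I) == Target)
          BI->setSuccessor(I, Body);
    }
    // (A complete implementation must also move/rewrite PHI nodes.)
  }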
This is motivated by the use of computed gotos in python 2.7: PyEval_EvalFrame()
ends up using an indirect branch with ~100 successors, and passing a constant to
each of those. Since MachineSink can't break indirect critical edges on demand
(and doing this in MIR doesn't look feasible), this causes us to emit ~100
defs of registers containing constants in the predecessor block, even though
only one of those constants is used in each successor. So, at each computed
goto, we needlessly spill about 100 constants to the stack. The end result is that a
clang-compiled python interpreter can be about ~2.5x slower on a simple python
reduction loop than a gcc-compiled interpreter.
Differential Revision: https://reviews.llvm.org/D29916
llvm-svn: 296149
The current pattern for extracting a range of bits is typically:
  Mask.lshr(BitOffset).trunc(SubSizeInBits);
which can be particularly slow for large APInts (MaskSizeInBits > 64), as it requires allocating memory for the temporary variable.
This is another of the compile time issues identified in PR32037 (see also D30265).
This patch adds the APInt::extractBits() helper method which avoids the temporary memory allocation.
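Illustrative before/after (extractBits takes the width first, then the bit
offset):

  #include "llvm/ADT/APInt.h"
  using llvm::APInt;

  APInt oldWay(const APInt &Mask, unsigned Off, unsigned Width) {
    return Mask.lshr(Off).trunc(Width);  // allocates a temporary
  }
  APInt newWay(const APInt &Mask, unsigned Off, unsigned Width) {
    return Mask.extractBits(Width, Off); // no temporary allocation
  }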
Differential Revision: https://reviews.llvm.org/D30336
llvm-svn: 296147
I made a mistake in the condition: because LIBCXXABI_BAREMETAL is always
defined, I should have been checking its contents to see if it's enabled.
Differential Revision: https://reviews.llvm.org/D30343
llvm-svn: 296146
This patch merges the existing floating-point induction variable widening code
into the integer induction variable widening code, creating a single set of
functions for both kinds of inductions. The primary motivation for doing this
is to enable vector phi node creation for floating-point induction variables.
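The kind of loop this is aimed at (an illustrative example): X is a
floating-point induction variable with a loop-invariant step, and
vectorizing it requires a vector phi for X:

  void fillRamp(float *A, int N) {
    float X = 0.0f;
    for (int I = 0; I < N; ++I) {
      A[I] = X;
      X += 0.5f;
    }
  }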
Differential Revision: https://reviews.llvm.org/D30211
llvm-svn: 296145
Provide a 64-bit pattern to use SUBFIC for subtracting from a 16-bit immediate.
The corresponding pattern already exists for 32-bit integers.
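An illustrative 64-bit case that can now use SUBFIC (the immediate must fit
in SUBFIC's 16-bit signed field):

  long subFromImm(long X) { return 100 - X; }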
Committing on behalf of Hiroshi Inoue.
Differential Revision: https://reviews.llvm.org/D29387
llvm-svn: 296144
Emit clrrdi (extended mnemonic for rldicr) for AND-ing with masks that
clear bits from the right-hand side.
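An illustrative case (AND-ing with ~3 clears the low two bits, which clrrdi
can express directly):

  long clearLow2(long X) { return X & ~3L; }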
Committing on behalf of Hiroshi Inoue.
Differential Revision: https://reviews.llvm.org/D29388
llvm-svn: 296143
The current pattern for extracting a range of bits is typically:
  Mask.lshr(BitOffset).trunc(SubSizeInBits);
which can be particularly slow for large APInts (MaskSizeInBits > 64), as it requires allocating memory for the temporary variable.
This is another of the compile time issues identified in PR32037 (see also D30265).
This patch adds the APInt::extractBits() helper method which avoids the temporary memory allocation.
Differential Revision: https://reviews.llvm.org/D30336
llvm-svn: 296141
Fix token caching for _Pragma in macro argument pre-expansion mode when
skipping a function body.
This commit fixes a token caching problem that currently occurs when clang is
skipping a function body (e.g. when looking for a code completion token) and at
the same time caching the tokens for _Pragma when lexing it in macro argument
pre-expansion mode.
When _Pragma is being lexed in macro argument pre-expansion mode, it caches the
tokens so that it can avoid interpreting the pragma immediately (as the macro
argument may not be used in the macro body), and then either backtracks over or
commits these tokens. The problem is that, when we're backtracking/committing in
such a scenario, there's already a previous backtracking position stored in
BacktrackPositions (as we're skipping the function body), and this leads to a
situation where the cached tokens from the pragma (like '(' 'string_literal'
and ')') will remain in the cached tokens array incorrectly even after they're
consumed (in the case of backtracking) or just ignored (in the case when they're
committed). Furthermore, what makes it even worse is that, because of the
previous backtracking position, the logic that decides when we should call
ExitCachingLexMode in CachingLex no longer works in this situation, and
more tokens in the macro argument get cached, to the point where the EOF token
that corresponds to the macro argument EOF is cached. This problem leads to all
sorts of issues in code completion mode, where incorrect errors get presented
and code completion completely fails to produce completion results.
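An assumed example of the kind of input that exercises this path: _Pragma
appears in a function-like macro's argument (so it is lexed in argument
pre-expansion mode) inside a function body that clang skips while looking
for a code completion point:

  #define ID(x) x
  void skipped() {
    ID(_Pragma("pack(push, 1)"));
    // ... code completion point somewhere in this body ...
  }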
rdar://28523863
Differential Revision: https://reviews.llvm.org/D28772
llvm-svn: 296140
An extra const on the StringRef argument meant that MSVC complained about it not correctly overriding OperandPredicateMatcher::emitCxxPredicateExpr (which didn't have the const).
llvm-svn: 296138
The motivation for filling out these select-of-constants cases goes back to D24480,
where we discussed removing an IR fold from add(zext) --> select. And that goes back to:
https://reviews.llvm.org/rL75531
https://reviews.llvm.org/rL159230
The idea is that we should always canonicalize patterns like this to a select-of-constants
in IR because that's the smallest IR and the best for value tracking. Note that we currently
do the opposite in some cases (like the cases in *this* patch). I.e., the proposed folds in
this patch already exist in InstCombine today:
https://github.com/llvm-mirror/llvm/blob/master/lib/Transforms/InstCombine/InstCombineSelect.cpp#L1151
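An illustrative pair of equivalent forms (an assumed example, not from the
patch): InstCombine canonicalizes the ext/add form into the
select-of-constants form, and this patch teaches the DAG to go back the
other way:

  int viaSelect(bool B) { return B ? 4 : 3; }  // select of constants
  int viaExtAdd(bool B) { return (int)B + 3; } // zext + add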
As this patch shows, most targets generate better machine code for simple ext/add/not ops
rather than a select of constants. So the follow-up steps to make this less of a patchwork
of special-case folds and missing IR canonicalization:
1. Have DAGCombiner convert any select of constants into ext/add/not ops.
2. Have InstCombine canonicalize in the other direction (create more selects).
Differential Revision: https://reviews.llvm.org/D30180
llvm-svn: 296137
We've been having issues with using libcxxabi and libunwind for baremetal
targets because fprintf depends on I/O functions. This patch disables calls
to fprintf when building for baremetal in release mode.
Differential Revision: https://reviews.llvm.org/D30339
llvm-svn: 296136
We've been having issues with using libcxxabi and libunwind for baremetal
targets because fprintf depends on I/O functions. This patch disables calls
to fprintf when building for baremetal in release mode.
Differential Revision: https://reviews.llvm.org/D30340
llvm-svn: 296135
This time with the missing files.
Similar to PR/25526, fast-regalloc introduces spills at the end of basic
blocks. When this occurs between an ll and its paired sc, the store can
cause the atomic sequence to fail.
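For illustration, an atomic RMW of the sort that expands to an ll/sc retry
loop on MIPS (an assumed example):

  #include <atomic>
  int bump(std::atomic<int> &A) { return A.fetch_add(1); }

A spill *store* placed between the ll and the sc clears the reservation, so
the sc can never succeed.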
This patch fixes the issue by introducing more pseudos to represent atomic
operations and moving their lowering to after the expansion of postRA
pseudos.
This resolves PR/32020.
Thanks to James Cowgill for reporting the issue!
Reviewers: slthakur
Differential Revision: https://reviews.llvm.org/D30257
llvm-svn: 296134