Currently memcpyopt optimizes cases like
memset(a, byte, N);
memcpy(b, a, M);
to
memset(a, byte, N);
memset(b, byte, M);
if M <= N. Often this allows further simplifications down the line,
which drop the first memset entirely.
This patch extends this optimization for the case where M > N, but we
know that the bytes a[N..M] are undef due to alloca/lifetime.start.
This situation arises relatively often for Rust code, because Rust does
not initialize trailing structure padding and loves to insert redundant
memcpys. This also fixes https://bugs.llvm.org/show_bug.cgi?id=39844.
For the implementation, I'm reusing a bit of code for a similar existing
optimization (direct memcpy of undef). I've also added memset support to
MemDepAnalysis GetLocation; getPointerDependencyFrom could be used instead,
but it seems to make more sense to add this to GetLocation and thus make the
computation cacheable.
Differential Revision: https://reviews.llvm.org/D55120
llvm-svn: 348645
The change in r337953 violated the contract for `CXTranslationUnit_KeepGoing`:
> Do not stop processing when fatal errors are encountered.
Use a different approach to fix long processing times with multiple inclusion
cycles. Instead of stopping preprocessing on fatal errors, stop after reaching
the maximum allowed include depth, and only for files that were already
processed. It is likely, but not guaranteed, that those files cause a cycle.
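A minimal sketch of the new cut-off condition (illustrative only; the helper
name and parameters are hypothetical, not the actual libclang/preprocessor
API):
  // Stop processing an include only once we are past the maximum allowed
  // include depth and the file was already processed before, which likely
  // (but not necessarily) indicates an inclusion cycle.
  bool shouldStopProcessing(unsigned IncludeDepth, unsigned MaxIncludeDepth,
                            bool AlreadyProcessed) {
    return IncludeDepth > MaxIncludeDepth && AlreadyProcessed;
  }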
rdar://problem/46108547
Reviewers: erik.pilkington, arphaman
Reviewed By: erik.pilkington
Subscribers: jkorous, dexonsmith, ilya-biryukov, Dmitry.Kozhevnikov
Differential Revision: https://reviews.llvm.org/D55095
llvm-svn: 348641
The splitting pass uses its 'unlikelyExecuted' predicate to statically
decide which blocks are cold.
- Do not treat noreturn calls as if they are cold unless they are actually
marked cold. This is motivated by functions like exit() and longjmp(), which
are not beneficial to outline.
- Do not treat inline asm as an outlining barrier. In practice asm("") is
frequently used to inhibit basic block merging; enabling outlining in this case
results in substantial memory savings (see the sketch after this list).
- Treat invokes of cold functions as cold.
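For reference, a sketch of the asm("") pattern mentioned above (hypothetical
code, not from the patch):
  if (unlikely_error) {
    asm("");             // commonly inserted to keep this block from being
                         // merged with other error-handling blocks
    report_and_abort();  // hypothetical cold call
  }
Previously the empty asm statement blocked outlining of such paths; now they
can still be split out.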
As a drive-by, remove the 'exceptionHandlingFunctions' predicate, because it's
no longer needed. The pass can identify & outline blocks dominated by EH pads,
so there's no need to special-case __cxa_begin_catch etc.
Differential Revision: https://reviews.llvm.org/D54244
llvm-svn: 348640
Algorithm: Identify maximal cold regions and put them in a worklist. If
a candidate region overlaps with another, discard it. While the worklist
is not empty, remove a single-entry sub-region from the worklist and attempt
to outline it. By the non-overlap property, this should not invalidate
parts of the domtree pertaining to other outlining regions.
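A minimal sketch of that loop (illustrative C++ pseudocode; 'Region' and the
helpers are hypothetical, not the pass's actual data structures):
  SmallVector<Region, 8> Worklist = collectNonOverlappingColdRegions(F);
  while (!Worklist.empty()) {
    Region R = Worklist.pop_back_val();
    // Extract a single-entry sub-region and try to outline it. Because the
    // candidate regions do not overlap, outlining one does not invalidate
    // the domtree information needed by the others.
    tryOutlineSingleEntrySubRegion(R);
  }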
Testing: LNT results on X86 are clean. With test-suite + externals, llvm
outlines 134KB pre-patch, and 352KB post-patch (+ ~2.6x). The file
483.xalancbmk/src/Constants.cpp stands out as an extreme case where llvm
outlines over 100 times in some functions (mostly EH paths). There was
not a significant performance impact pre vs. post-patch.
Differential Revision: https://reviews.llvm.org/D53887
llvm-svn: 348639
Allow enabling and disabling tracking of ObjC/CF objects
separately from tracking of OS objects.
Differential Revision: https://reviews.llvm.org/D55400
llvm-svn: 348638
The option has no tests, is not used anywhere, and is actually broken:
it prints the line number without any reference to a file, which can be
outright misleading.
Differential Revision: https://reviews.llvm.org/D55385
llvm-svn: 348637
Patch by Alex Strelnikov.
Reviewed as D53830
Introduce a new check to upgrade user code based on upcoming API breaking changes to absl::Duration.
The check finds calls to arithmetic operators and factory functions for absl::Duration that rely on
an implicit user-defined conversion to int64_t. These cases will no longer compile after the proposed
changes are released. Suggested fixes explicitly cast the argument to int64_t.
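An illustrative example of the kind of code the check flags (the class and the
exact fix-it shown here are assumptions, not taken from the check's tests):
  struct Ticks { operator int64_t() const; };
  Ticks t;
  absl::Duration before = absl::Seconds(t);                        // flagged: implicit user-defined conversion
  absl::Duration after  = absl::Seconds(static_cast<int64_t>(t));  // suggested fix: explicit cast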
llvm-svn: 348633
Previously we would create an lldb::Function object for each function
parsed, but we would not add these to the clang AST. This is a first
step towards getting local variable support working, as we first need an
AST decl so that when we create local variable entries, they have the
proper DeclContext.
Differential Revision: https://reviews.llvm.org/D55384
llvm-svn: 348631
Unlike GNU tar and libarchive bsdtar, NetBSD 'tar -t' output does not
use C-style escapes and instead outputs paths literally. Fix the test
to account for both escaped and literal backslash output.
Differential Revision: https://reviews.llvm.org/D55441
llvm-svn: 348628
We were overcounting the number of arithmetic operations needed at each level
before we reach a legal type. We were using the full vector type for that
level, but we are going to split the input vector at that level in half, so the
effective arithmetic operation cost at that level should be computed with half
the width.
For example, take a v8i32 reduction on an SSE target. We were calculating the
cost of a v8i32 op, which is likely 2 for basic integer ops. Then after the
loop we count 2 more v4i32 ops, for a total arithmetic cost of 4. But if you
look at the assembly, there would only be 3 arithmetic ops.
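For reference, a rough count of the corrected v8i32 case (assuming each v4i32
add costs 1):
  level 1: split v8i32 into two v4i32 halves, 1 v4i32 add
  level 2: shuffle + 1 v4i32 add
  level 3: shuffle + 1 v4i32 add
  total:   3 arithmetic ops, matching the assembly, instead of the 4 counted
           before this patch.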
There are still more bugs in this code that I'm going to work on next. The
non-pairwise code shouldn't count extract subvectors in the loop; there are no
extracts, the types are just split across registers. For pairwise reductions we
need to use two two-source permute shuffles.
Differential Revision: https://reviews.llvm.org/D55397
llvm-svn: 348621
Make X86CondBrFoldingPass runnable with the --run-pass option. This allows
testing a wrong assertion in analyzeCompare for SUB32ri when its operand is
not an immediate.
Patch by Jianping Chen
Differential Revision: https://reviews.llvm.org/D55412
llvm-svn: 348620
This patch attempts to improve pfm perf counter coverage for all the x86 CPUs that libpfm4 supports.
Intel/AMD CPU families tend to share names for cycle/uops counters so even if they don't have a scheduler model yet they can at least use the default values (checked against the libpfm4 source code).
For the remaining CPUs where the port/pipe resource counters are known, I've
tried to add them to the existing model mappings.
These are untested but don't represent a regression to current llvm-exegesis behaviour for these CPUs.
Differential Revision: https://reviews.llvm.org/D55432
llvm-svn: 348617
Summary:
We introduce a strict policy for C++ CTU. It can work across TUs only if
the C++ dialects are the same. We do not allow C vs C++ CTU either. We do this
because the same constructs might be represented with different properties in
the corresponding AST nodes, or the nodes might even be completely different (a
struct will be a RecordDecl in C, but a CXXRecordDecl in C++, which may cause
certain assertions during cast operations).
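As a small illustration (not part of the patch):
  struct S { int x; };
  // In C this declaration is modeled as a RecordDecl; in C++ the same
  // declaration becomes a CXXRecordDecl (which can carry methods, bases,
  // access specifiers, etc.), so importing nodes across dialects can trip
  // assertions in cast operations.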
Reviewers: xazax.hun, a_sidorin
Subscribers: rnkovacs, dkrupp, Szelethus, gamesh411, cfe-commits
Differential Revision: https://reviews.llvm.org/D55134
llvm-svn: 348610
Summary:
Introduced a special noinline function, log, that allows saving some
registers in optimized builds with logging enabled. It also increases the
stability of optimized builds with an inlined runtime.
Reviewers: gtbercea, kkwli0
Reviewed By: gtbercea
Subscribers: caomhin, guansong, openmp-commits
Differential Revision: https://reviews.llvm.org/D55436
llvm-svn: 348606
Summary:
Adding some more CTU list tests, e.g. to check whether a construct is unsupported.
We also slightly modify the handling of the return value of the `Import`
function from ASTImporter.
Reviewers: xazax.hun, balazske, a_sidorin
Subscribers: rnkovacs, dkrupp, Szelethus, gamesh411, cfe-commits
Differential Revision: https://reviews.llvm.org/D55131
llvm-svn: 348605
As discussed in the post-commit thread of r347917, this
transform is fighting with an existing transform causing
an infinite loop or out-of-memory, so this is effectively
reverting r347917 and its follow-up r348195 while we
investigate the bug.
llvm-svn: 348604
DemandedBits and BDCE currently only support scalar integers. This
patch extends them to also handle vector integer operations. In this
case bits are not tracked for individual vector elements, instead a
bit is demanded if it is demanded for any of the elements. This matches
the behavior of computeKnownBits in ValueTracking and
SimplifyDemandedBits in InstCombine.
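A small illustration of the per-vector semantics (assumed example, not from
the patch):
  // For a <2 x i8> value where one use demands only the low nibble of
  // element 0 and another use demands only the high nibble of element 1,
  // the reported mask is the union over all elements at the element width:
  APInt Elt0(8, 0x0F);
  APInt Elt1(8, 0xF0);
  APInt Demanded = Elt0 | Elt1;  // 0xFF: every bit is demanded for every element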
Unlike the previous iteration of this patch, getDemandedBits() can now
again be called on arbitrary (sized) instructions, even if they don't
have integer or vector of integer type. (For vector types the size of the
returned mask will now be the scalar size in bits though.)
The added LoopVectorize test case shows a case which triggered an
assertion failure with the previous attempt, because getDemandedBits()
was called on a pointer-typed instruction.
Differential Revision: https://reviews.llvm.org/D55297
llvm-svn: 348602
This change attempts to shrink scalar AND, OR and XOR instructions which take an immediate that isn't inlineable.
It performs:
AND s0, s0, ~(1 << n) -> BITSET0 s0, n
OR s0, s0, (1 << n) -> BITSET1 s0, n
AND s0, s1, x -> ANDN2 s0, s1, ~x
OR s0, s1, x -> ORN2 s0, s1, ~x
XOR s0, s1, x -> XNOR s0, s1, ~x
In particular, this catches setting and clearing the sign bit for fabs
(and x, 0x7fffffff -> bitset0 x, 31, and or x, 0x80000000 -> bitset1 x, 31).
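Another illustrative case (constants chosen for the example): the mask
0xfffffff0 is not an inline immediate, but its complement 0xf (15) is, so
  AND s0, s1, 0xfffffff0 -> ANDN2 s0, s1, 15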
llvm-svn: 348601
Inline cpu_specific versions referenced before the cpu_dispatch function
weren't properly emitted, since they hadn't yet been referred to. This
patch ensures that during resolver generation all appropriate versions
are emitted.
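A rough sketch of the pattern this addresses (hypothetical function; attribute
spelling follows Clang's cpu_specific/cpu_dispatch multiversioning, exact
details may differ):
  __attribute__((cpu_specific(atom)))    inline int foo(void) { return 1; }
  __attribute__((cpu_specific(generic))) inline int foo(void) { return 0; }
  int use(void) { return foo(); }  // reference appears before the dispatcher
  __attribute__((cpu_dispatch(atom, generic))) int foo(void);  // resolver generated here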
Change-Id: I94c3766aaf9c75ca07a0ad8258efdbb834654ff8
llvm-svn: 348600
This reverts commit 65df29f9318ac13a633c0ce13b2b0bccf06e79ca.
As suggested by @rsmith here: https://reviews.llvm.org/rL345839
I'm reverting this and solving the initial problem in a different way.
llvm-svn: 348595