Summary:
Found when testing stage-2 build with D38101.
```
In file included from /build/llvm/lib/Support/Path.cpp:1045:
/build/llvm/lib/Support/Unix/Path.inc:648:14: error: comparison 'uint64_t' (aka 'unsigned long') > 18446744073709551615 is always false [-Werror,-Wtautological-constant-compare]
if (length > std::numeric_limits<size_t>::max()) {
~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
Apparently `size_t` is `uint64_t` here, thus any `uint64_t` value
always fits into `size_t`.
The initial patch used some preprocessor logic to skip the check
when the size is known to fit at compile time,
but Zachary Turner suggested this approach instead.
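For illustration, a minimal sketch of the rejected preprocessor-based variant mentioned above (the function name and boolean result are made up for this sketch); it only shows how the check could be compiled out on LP64 targets, not the approach that actually landed:
```
#include <cstddef>
#include <cstdint>
#include <limits>

// Hypothetical sketch only: compile the range check solely when size_t is
// narrower than uint64_t, so no tautological comparison exists on targets
// where size_t and uint64_t are both 64 bits wide.
inline bool fitsInSizeT(uint64_t Length) {
#if SIZE_MAX < UINT64_MAX
  if (Length > std::numeric_limits<std::size_t>::max())
    return false;
#endif
  return true;
}
```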
Reviewers: Bigcheese, rafael, zturner, mehdi_amini
Reviewed by (via email): zturner
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38132
llvm-svn: 314312
Summary:
The current implementation of the allocator returning freed memory
back to the OS (controlled by the allocator_release_to_os_interval_ms flag)
requires sorting of the free chunks list, which has two major issues:
first, when the free list grows to millions of chunks, sorting, even the
fastest one, is just too slow; and second, sorting chunks in place
is unacceptable for the Scudo allocator, as it makes allocations more
predictable and less secure.
The proposed approach is linear in complexity (although it requires quite
a bit more temporary memory). The idea is to count the number of free
chunks on each memory page and release only the pages that contain
nothing but free chunks. It requires one iteration over the free list of
chunks and one iteration over the array of page counters. The obvious
disadvantage is the allocation of the counter array, but even in the worst
case we support (4T allocator space, 64 buckets, 16-byte bucket size,
full free list, which leads to 2 bytes per page counter and ~17M page
counters), it requires only about 34MB for the intermediate buffer (compared
to ~64GB of actually allocated chunks); usually it stays under 100KB and
is released after each use. Releasing memory back to the OS is expected to
be a relatively rare event, so keeping the buffer between those runs and
the added bookkeeping complexity seem unnecessary here (it can
always be improved later, though; never say never).
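A minimal sketch of the counting pass described above, assuming the number of chunks per page is already known (how to obtain it is discussed next); the container types and the releasePage callback are illustrative stand-ins, not the sanitizer's real interfaces:
```
#include <cstdint>
#include <cstdio>
#include <vector>

// Sketch only: one pass over the free list bumps a per-page counter, then one
// pass over the counters releases pages that are covered entirely by free
// chunks.
void releaseFreePages(const std::vector<uint64_t> &FreeChunkOffsets,
                      uint64_t ChunkSize, uint64_t PageSize, uint64_t NumPages,
                      uint64_t ChunksPerPage,
                      void (*releasePage)(uint64_t PageIndex)) {
  std::vector<uint16_t> Counters(NumPages, 0); // ~2 bytes per page counter
  for (uint64_t Off : FreeChunkOffsets) {
    // Count a free chunk against every page it touches.
    uint64_t First = Off / PageSize;
    uint64_t Last = (Off + ChunkSize - 1) / PageSize;
    for (uint64_t P = First; P <= Last && P < NumPages; ++P)
      ++Counters[P];
  }
  for (uint64_t P = 0; P != NumPages; ++P)
    if (Counters[P] >= ChunksPerPage) // the page holds only free chunks
      releasePage(P);
}

int main() {
  auto Release = [](uint64_t P) {
    std::printf("release page %llu\n", (unsigned long long)P);
  };
  // Page size 32, chunk size 16: chunks at 0 and 16 make page 0 fully free.
  releaseFreePages({0, 16, 32}, 16, 32, /*NumPages=*/2, /*ChunksPerPage=*/2,
                   Release);
}
```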
The most interesting problem here is how to calculate the number of chunks
falling into each memory page in a bucket. Skipping all the details,
there are three cases when the number of chunks per page is constant:
1) P >= C, P % C == 0 --> N = P / C
2) C > P , C % P == 0 --> N = 1
3) C <= P, P % C != 0 && C % (P % C) == 0 --> N = P / C + 1
where P is the page size, C is the chunk size and N is the number of chunks
per page. In the remaining cases, the number of chunks per page is
calculated on the fly, during the iteration over the page counter array.
Among the rest, there are still cases where N can be deduced from the
page index, but they require almost as many calculations per page as the
current "brute force" way, and 2/3 of the buckets fall into
the first three categories anyway, so, for the sake of simplicity,
it was decided to stick to those two variations. It can always be
refined and improved later, should we see that the brute force way slows
us down unacceptably.
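The three constant cases above can be expressed directly in code; a small sketch (the helper name is made up, and a zero return stands for "not constant, compute on the fly"):
```
#include <cstdint>
#include <cstdio>

// Sketch of the three constant chunks-per-page cases listed above.
// P is the page size, C is the chunk size; returns 0 when N is not constant.
uint64_t constantChunksPerPage(uint64_t P, uint64_t C) {
  if (P >= C && P % C == 0)
    return P / C;       // case 1: chunks tile the page exactly
  if (C > P && C % P == 0)
    return 1;           // case 2: each page belongs to exactly one chunk
  if (C <= P && P % C != 0 && C % (P % C) == 0)
    return P / C + 1;   // case 3: one extra chunk straddles the page boundary
  return 0;             // otherwise: compute per page during the iteration
}

int main() {
  std::printf("%llu\n",
              (unsigned long long)constantChunksPerPage(4096, 16)); // 256
}
```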
Reviewers: eugenis, cryptoad, dvyukov
Subscribers: kubamracek, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D38245
llvm-svn: 314311
Before this change, using any of the -name*= command line options with an output
directory resulted in a single file (functions.txt/functions.html)
containing the coverage for those specific functions. Now you get the same
directory structure as when not using any -name*= options.
Differential Revision: https://reviews.llvm.org/D38280
llvm-svn: 314310
Summary:
The root Uri is the workspace location and will be useful in the context of
indexing. We could also add more things to InitializeParams in order to
configure Clangd for C/C++-specific extensions.
Reviewers: ilya-biryukov, bkramer, krasimir, Nebiroth
Reviewed By: ilya-biryukov
Subscribers: ilya-biryukov
Tags: #clang-tools-extra
Differential Revision: https://reviews.llvm.org/D38093
llvm-svn: 314309
This was intended to be no-functional-change, but it's not: there's a test diff.
So I thought I should stop here and post it as-is to see if this looks like what was expected
based on the discussion in PR34603:
https://bugs.llvm.org/show_bug.cgi?id=34603
Notes:
1. The test improvement occurs because the existing 'LateSimplifyCFG' marker is not carried
through the recursive calls to 'SimplifyCFG()->SimplifyCFGOpt().run()->SimplifyCFG()'.
The parameter isn't passed down, so we pick up the default value from the function signature
after the first level. I assumed that was a bug, so I've passed 'Options' down in all of the
'SimplifyCFG' calls.
2. I split 'LateSimplifyCFG' into 2 bits: ConvertSwitchToLookupTable and KeepCanonicalLoops.
This would theoretically allow us to differentiate the transforms controlled by those params
independently (see the sketch of such an options struct after these notes).
3. We could stash the optional AssumptionCache pointer and 'LoopHeaders' pointer in the struct too.
I just stopped here to minimize the diffs.
4. Similarly, I stopped short of messing with the pass manager layer. I have another question that
could wait for the follow-up: why is the new pass manager creating the pass with LateSimplifyCFG
set to true no matter where in the pipeline it's creating SimplifyCFG passes?
  // Create an early function pass manager to cleanup the output of the
  // frontend.
  EarlyFPM.addPass(SimplifyCFGPass());
-->
/// \brief Construct a pass with the default thresholds
/// and switch optimizations.
SimplifyCFGPass::SimplifyCFGPass()
    : BonusInstThreshold(UserBonusInstThreshold),
      LateSimplifyCFG(true) {} <-- switches get converted to lookup tables
                                   and loops may not be in canonical form
If this is unintended, then it's possible that the current behavior of dropping the 'LateSimplifyCFG'
setting via recursion was masking this bug.
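As referenced in note 2, here is a hypothetical sketch of the kind of options struct described there; the field names follow the note, but the defaults and the real SimplifyCFGOptions interface may differ:
```
// Hypothetical sketch only; not the exact LLVM definition.
struct SimplifyCFGOptions {
  int BonusInstThreshold = 1;
  bool ConvertSwitchToLookupTable = false; // the "late" switch-to-table change
  bool KeepCanonicalLoops = true;          // preserve canonical loop form
};
```
A pass created for the early pipeline would presumably want ConvertSwitchToLookupTable off and KeepCanonicalLoops on, which relates to the question in note 4 about the constructor above.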
Differential Revision: https://reviews.llvm.org/D38138
llvm-svn: 314308
InlineCost can understand Select IR now. This patch finds free Select IRs and
continues the propagation of SimplifiedValues, ConstantOffsetPtrs, and
SROAArgValues.
Differential Revision: https://reviews.llvm.org/D37198
llvm-svn: 314307
NFC.
Updated 8 regression tests to use -mattr instead of -mcpu flag as follows:
-mcpu=knl --> -mattr=+avx512f
-mcpu=skx --> -mattr=+avx512f,+avx512bw,+avx512vl,+avx512dq
The updates are part of the preparation for a large commit to add all instruction scheduling information for the SKX target.
Reviewers: delena, zvi, RKSimon
Differential Revision: https://reviews.llvm.org/D38222
Change-Id: I2381c9b5bb75ecacfca017243c22d054f6eddd14
llvm-svn: 314306
Added a missing addrspacecast case in the alignment computation
logic of pointer type emission in IR generation.
Differential Revision: https://reviews.llvm.org/D37804
llvm-svn: 314304
Summary: Test for checking if the mapping is performed correctly. This is a test initially included in Patch https://reviews.llvm.org/D29905
Reviewers: Hahnfeld, carlo.bertolli, caomhin, ABataev
Reviewed By: Hahnfeld
Subscribers: tra, cfe-commits
Differential Revision: https://reviews.llvm.org/D38040
llvm-svn: 314303
Allow proper recognition of enum values and global variables inside MS inline-asm memory/immediate expressions, as they require some additional handling and are treated incorrectly if not recognized early.
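A hypothetical example of the kind of expression affected (requires MS-style inline asm, e.g. clang with -fasm-blocks on x86; the names are made up for the illustration):
```
// An enum value used as an immediate inside an MS-style inline-asm block.
enum : int { kValue = 16 };

int scaled(int X) {
  __asm {
    mov eax, X
    add eax, kValue   // enum constant recognized as an immediate
    mov X, eax
  }
  return X;
}
```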
Supersedes D33277, D35775.
Corresponds with D37412, D37413.
llvm-svn: 314300
This patch makes analyzeBranch eliminate an unconditional branch to the next instruction.
After basic blocks are re-organized by optimizers, such as machine block placement, a BB may end with an unconditional branch to the next (fallthrough) BB. This patch removes such redundant branch instruction.
Differential Revision: https://reviews.llvm.org/D37730
llvm-svn: 314297
As commented on D37849 and rL313547, AVX1 targets were missing a chance to use vmovmskpd for v4f64/v4i64 results for bool vector bitcasts.
llvm-svn: 314293
This function can now track a null pointer through simple pointer arithmetic,
such as '*&*(p + 2)' => 'p' and so on, displaying intermediate diagnostic pieces
that help the user understand where the null pointer is coming from.
Differential Revision: https://reviews.llvm.org/D37025
llvm-svn: 314290
This API is used by checkers (and other entities) to track where a value
originates from, by jumping from an expression whose value is equal
to that value to the expression from which this value has "appeared". For
example, it may be an lvalue from which the rvalue was loaded, or a function
call from which the dereferenced pointer was returned.
The function now avoids incorrectly unwrapping implicit lvalue-to-rvalue casts,
which caused crashes and incorrect intermediate diagnostic pieces. It also no
longer relies on how the expression is written when guessing what it means.
Fixes pr34373 and pr34731.
rdar://problem/33594502
Differential Revision: https://reviews.llvm.org/D37023
llvm-svn: 314287
Summary:
The MSR instruction in Thumb2 does not support an immediate operand.
Fix this by moving the condition for V7-M to Thumb2, since V7-M supports
Thumb2 only. With this change, the aeabi_cfcmp.S and aeabi_cdcmp.S files can
be assembled in Thumb2 mode. (This is split out from the review D38227.)
Reviewers: compnerd, peter.smith, srhines, weimingz, rengolin, kristof.beyls
Reviewed By: compnerd
Subscribers: aemerson, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D38268
llvm-svn: 314284
This is "Bug 34688 - lld much slower than bfd when linking the linux kernel"
Inside copyRelocations() we have O(N*M) algorithm, where N - amount of
relocations and M - amount of symbols in symbol table. It isincredibly slow
for linking linux kernel.
Patch creates local search tables to speedup.
With this fix link time goes for me from 12.95s to 0.55s what is almost 23x
faster. (used release LLD).
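A minimal sketch of the idea, assuming simplified types (std::string symbols and a plain index table); LLD's real data structures differ:
```
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

// Sketch only: instead of scanning the whole symbol table for every
// relocation (O(N*M)), build a lookup table once and answer each query in
// amortized O(1).
std::vector<uint32_t>
resolveSymbolIndices(const std::vector<std::string> &SymTab,
                     const std::vector<std::string> &RelocTargets) {
  std::unordered_map<std::string, uint32_t> IndexOf;
  IndexOf.reserve(SymTab.size());
  for (uint32_t I = 0; I != SymTab.size(); ++I)
    IndexOf.emplace(SymTab[I], I);          // one pass over the symbol table

  std::vector<uint32_t> Out;
  Out.reserve(RelocTargets.size());
  for (const std::string &S : RelocTargets) // one pass over the relocations
    Out.push_back(IndexOf.at(S));
  return Out;
}

int main() {
  auto Idx = resolveSymbolIndices({"a", "b", "c"}, {"c", "a"});
  std::printf("%u %u\n", Idx[0], Idx[1]); // prints "2 0"
}
```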
Differential revision: https://reviews.llvm.org/D38129
llvm-svn: 314282
I implemented isTruncateFree in rL313533; this patch fixes the logic
to match my comment, as the previous logic was too general. Now the
only truncates that are free are i64 -> i32.
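A minimal sketch of the narrowed rule, using plain bit widths rather than the real TargetLowering hook's Type/EVT arguments:
```
// Sketch only: after this change, only an i64 -> i32 truncate counts as free.
static bool isTruncateFreeSketch(unsigned SrcBits, unsigned DstBits) {
  return SrcBits == 64 && DstBits == 32;
}
```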
Differential Revision: https://reviews.llvm.org/D38234
llvm-svn: 314280
Summary:
NamespaceEndCommentsFixer did not fix namespace end comments when the brace opening the namespace was not on the same line as the "namespace" keyword.
This occurs in the Allman, GNU and Linux styles, and whenever BraceWrapping.AfterNamespace is true.
Before:
```lang=cpp
namespace a
{
void f();
void g();
}
```
After:
```lang=cpp
namespace a
{
void f();
void g();
} // namespace a
```
Reviewers: krasimir
Reviewed By: krasimir
Subscribers: klimek, cfe-commits
Differential Revision: https://reviews.llvm.org/D37904
llvm-svn: 314279
This is necessary, but not sufficient, for having working SJLJ exception
handling on x86_64.
Differential Revision: https://reviews.llvm.org/D38254
llvm-svn: 314277
The callsite value is already stored indexed from 0 in
the _Unwind_Context struct. When accessed via the functions
_Unwind_GetIP and _Unwind_SetIP, the value is indexed from 1,
but those functions handle the offsetting. When reading directly
from the struct here, we shouldn't subtract 1.
This matches the code generated by the ARM target, where SJLJ
exception handling is used by default on iOS.
This makes clang-built object files for 32 bit x86 mingw work when
linked with libgcc/libstdc++.
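A purely illustrative model of the indexing mismatch described above (this is not libunwind's real _Unwind_Context layout; the names are made up):
```
#include <cstdint>

// The struct stores the callsite index from 0, while the GetIP/SetIP-style
// accessors expose it indexed from 1 and handle the offsetting themselves.
struct FakeSjLjContext {
  uint32_t CallSite; // stored 0-based
};

uintptr_t fakeGetIP(const FakeSjLjContext &C) { return C.CallSite + 1; }
void fakeSetIP(FakeSjLjContext &C, uintptr_t IP) { C.CallSite = IP - 1; }

// Code that reads C.CallSite directly must not subtract 1 again.
```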
Differential Revision: https://reviews.llvm.org/D38251
llvm-svn: 314276
This matches the types of the struct members defined in
lib/CodeGen/SjLjEHPrepare.cpp, and the definition of this struct in libgcc.
Differential Revision: https://reviews.llvm.org/D38248
llvm-svn: 314275
Summary:
A new FDR metadata record will support logging a function call argument;
appending multiple metadata records will represent a sequence of arguments,
meaning that "holes" are not representable by the buffer format. Each
call argument is currently a 64-bit value (useful for "this" pointers and
synchronization objects).
If present, we attach this argument to the function call "entry" record it
belongs to, and alter its type to notify the user of its presence.
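A purely illustrative sketch of the idea, not XRay's actual FDR wire format:
```
#include <cstdint>

// One metadata record carries one 64-bit call argument; a run of such records
// following a function-entry record represents that call's leading arguments.
struct CallArgumentRecordSketch {
  uint8_t RecordKind;  // tag identifying "call argument" metadata
  uint64_t Argument;   // e.g. a "this" pointer or a synchronization object
};
```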
Reviewers: dberris
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32840
llvm-svn: 314269
This patch tries to transform cases like:
  for (unsigned i = 0; i < N; i += 2) {
    bool c0 = (i & 0x1) == 0;
    bool c1 = ((i + 1) & 0x1) == 1;
  }
to:
  for (unsigned i = 0; i < N; i += 2) {
    bool c0 = true;
    bool c1 = true;
  }
This commit also updates test/Transforms/IndVarSimplify/replace-srem-by-urem.ll to prevent constant folding.
Differential Revision: https://reviews.llvm.org/D38272
llvm-svn: 314266
stack pointer for Apple's armv7 ABI. When in a frameless function
or in a prologue/epilogue where sp wasn't properly aligned, we could
try to make function calls with an unaligned sp; the expression
would crash.
llvm-svn: 314265
Keep space before or after the &/&& tokens, but not both. For example,
auto [x,y] = a;
auto &[xr, yr] = a; // LLVM style
auto& [xr, yr] = a; // google style
Differential Revision: https://reviews.llvm.org/D35743
llvm-svn: 314264