Removes semicolons after if {} blocks, function definitions, etc.
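For illustration, the kind of redundant semicolon involved (a minimal sketch, not code from the patch):
```
// Illustrative only: both semicolons below are redundant and are the kind
// this cleanup removes.
void foo(bool cond) {
  if (cond) {
  };  // stray semicolon after the if {} block
};    // stray semicolon after the function definition
```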
I was able to apply the large OMPT patch cleanly on top of this one
with no conflicts.
llvm-svn: 314340
ExternalASTMerger has hitherto relied on being able to look up
any Decl through its named DeclContext chain. This works for
many cases, but causes problems for function-local structs,
which cannot be looked up in their containing FunctionDecl. An
example case is
void f() {
  { struct S { int a; }; }
  { struct S { bool b; }; }
}
It is not possible to look up either of the two Ses individually
(or even to provide enough information to disambiguate them) after
parsing is over; and there is typically no need to, since they
are invisible to the outside world.
However, ExternalASTMerger needs to be able to complete either
S on demand. This led to an XFAIL on test/Import/local-struct,
which this patch removes. The way the patch works is:
- It defines a new data structure, ExternalASTMerger::OriginMap (sketched
  below), which clients are expected to maintain (default-constructing
  if the origin does not have an ExternalASTMerger servicing it).
- As DeclContexts are imported, if they cannot be looked up by name they
  are placed in the OriginMap. This allows ExternalASTMerger to complete
  them later if necessary.
- As DeclContexts are imported from an origin that already has its own
  OriginMap, the origins are forwarded – but only for those DeclContexts
  that are actually used. This keeps the amount of stored data minimal.
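A rough sketch of what such an origin map could look like (the names and
exact layout here are assumptions, not necessarily the committed interface):
```
// Hypothetical sketch; the real ExternalASTMerger::OriginMap may differ.
#include <map>

namespace clang {
class ASTContext;
class DeclContext;
} // namespace clang

// Where an imported DeclContext originally came from, so that it can be
// completed on demand even when name lookup cannot find it.
struct DCOrigin {
  clang::DeclContext *DC;
  clang::ASTContext *AST;
};

using OriginMap = std::map<const clang::DeclContext *, DCOrigin>;
```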
The patch also applies several improvements from review:
- Thoroughly documents the interface to ExternalASTMerger;
- Adds optional logging to help track what's going on; and
- Cleans up a bunch of braces and dangling elses.
Differential Revision: https://reviews.llvm.org/D38208
llvm-svn: 314336
Summary:
According to https://gcc.gnu.org/wiki/SplitStacks, the linker expects a zero-sized .note.GNU-split-stack section if split-stack is used (and also a .note.GNU-no-split-stack section if the object also contains non-split-stack functions), so that it can handle the cases where a split-stack function calls a non-split-stack function.
This change adds the sections if needed.
Fixes PR #34670.
Reviewers: thanm, rnk, luqmana
Reviewed By: rnk
Subscribers: llvm-commits
Patch by Cherry Zhang <cherryyz@google.com>
Differential Revision: https://reviews.llvm.org/D38051
llvm-svn: 314335
We already have the zeroable bits in an APInt. We might as well use that instead of checking for an all-zeros BUILD_VECTOR.
Differential Revision: https://reviews.llvm.org/D37950
llvm-svn: 314332
In some cases the psadbw result is narrower than the type of the add that started the match. Currently, in these cases, we use a smaller add and insert the result.
If we instead combine the psadbw with zeros and use the full-size add, we can take advantage of the implicit zeroing we get when we emit a narrower move before the add.
In a future patch, I want to make isel aware that the psadbw itself already zeroed the upper bits and remove the move entirely.
Differential Revision: https://reviews.llvm.org/D37453
llvm-svn: 314331
ToolChain::TranslateArgs() returns nullptr if no changes are performed.
This would currently mean that the OpenMPArgs are lost. This patch fixes that
by falling back to simply using the OpenMPArgs in that case.
Differential Revision: https://reviews.llvm.org/D38259
llvm-svn: 314330
AuxTriple is not set if the host and device share a toolchain. Also,
removing an argument modifies the DAL, which needs to be returned
for future use.
(Move tests back to offload-openmp.c as they are not related to GPUs.)
Differential Revision: https://reviews.llvm.org/D38258
llvm-svn: 314329
Parsing the argument after -Xopenmp-target allocates memory that needs
to be freed. Associate it with the final DerivedArgList after we know
which one will be used.
Differential Revision: https://reviews.llvm.org/D38257
llvm-svn: 314328
https://reviews.llvm.org/rL299952 merged '>>>' tokens into a single
JavaRightLogicalShift token. This broke formatting of generics nested more than
two deep, e.g. Foo<Bar<Baz<Qux>>>, because the '>>>' was no longer three '>'
tokens for parseAngle().
Luckily, just deleting JavaRightLogicalShift fixes things without breaking the
test added in r299952, so do that.
https://reviews.llvm.org/D38291
llvm-svn: 314325
reductions.
If both operands of the newly created SelectInst are Undefs, the
resulting operation is also an Undef, not a SelectInst. This may cause
crashes when trying to propagate IR flags, because the function expects
exactly a SelectInst instruction and nothing else.
llvm-svn: 314323
Summary:
__builtin___clear_cache maps to the clear_cache function. On Linux,
the clear_cache function makes a syscall and aborts if the syscall fails.
Replace the abort with an assert so that non-debug builds do not abort
if the syscall fails.
Fixes PR34588.
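For reference, a minimal usage sketch of the builtin mentioned above
(illustrative only; the patch itself only changes compiler-rt's clear_cache):
```
// Copy new code into a buffer and flush the instruction cache for it.
#include <cstring>

unsigned char code[64];

void patch_and_flush(const unsigned char *bytes, unsigned len) {
  if (len > sizeof(code))
    len = sizeof(code);
  std::memcpy(code, bytes, len);
  __builtin___clear_cache(reinterpret_cast<char *>(code),
                          reinterpret_cast<char *>(code) + sizeof(code));
}
```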
Reviewers: rengolin, compnerd, srhines, peter.smith, joerg
Reviewed By: rengolin
Subscribers: aemerson, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D37788
llvm-svn: 314322
These changes facilitate positive behavior for arithmetic-based select
expressions that match the translation criteria, keeping code size gated to
neutral or improved scenarios.
Patch by Michael Berg <michael_c_berg@apple.com>!
Differential Revision: https://reviews.llvm.org/D38263
llvm-svn: 314320
Summary:
Found when testing stage-2 build with D38101.
```
In file included from /build/llvm/lib/Support/Path.cpp:1045:
/build/llvm/lib/Support/Unix/Path.inc:648:14: error: comparison 'uint64_t' (aka 'unsigned long') > 18446744073709551615 is always false [-Werror,-Wtautological-constant-compare]
if (length > std::numeric_limits<size_t>::max()) {
~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
`size_t` is `uint64_t` here, apparently, thus any `uint64_t` value
always fits into `size_t`.
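A minimal standalone repro of the diagnostic, independent of Path.inc
(assumes a 64-bit size_t):
```
// With a 64-bit size_t this comparison can never be true, so Clang emits
// -Wtautological-constant-compare for it.
#include <cstddef>
#include <cstdint>
#include <limits>

bool exceedsSizeT(std::uint64_t length) {
  return length > std::numeric_limits<std::size_t>::max();
}
```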
The initial patch used some preprocessor logic to skip the check when
the size is known to fit at compile time, but Zachary Turner suggested
using this approach instead.
Reviewers: Bigcheese, rafael, zturner, mehdi_amini
Reviewed by (via email): zturner
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38132
llvm-svn: 314312
Summary:
The current implementation of the allocator returning freed memory
back to the OS (controlled by the allocator_release_to_os_interval_ms
flag) requires sorting of the free chunks list, which has two major
issues: first, when the free list grows to millions of chunks, sorting,
even the fastest one, is just too slow; and second, sorting chunks in
place is unacceptable for the Scudo allocator, as it makes allocations
more predictable and less secure.
The proposed approach is linear in complexity (although it requires
quite a bit more temporary memory). The idea is to count the number of
free chunks on each memory page and release the pages that contain only
free chunks. It requires one iteration over the free list of chunks and
one iteration over the array of page counters. The obvious disadvantage
is the allocation of the array of counters, but even in the worst case
we support (4T allocator space, 64 buckets, 16-byte bucket size, full
free list, which leads to 2 bytes per page counter and ~17M page
counters), it requires just about 34MB of intermediate buffer (compared
to ~64GB of actually allocated chunks), and usually it stays under 100K
and is released after each use. Releasing memory back to the OS is
expected to be a relatively rare event, so keeping the buffer between
those runs and the added bookkeeping complexity seem unnecessary here
(it can always be improved later, though, never say never).
The most interesting problem here is how to calculate the number of
chunks falling into each memory page of a bucket. Skipping the details,
there are three cases when the number of chunks per page is constant:
1) P >= C, P % C == 0 --> N = P / C
2) C > P, C % P == 0 --> N = 1
3) C <= P, P % C != 0 && C % (P % C) == 0 --> N = P / C + 1
where P is the page size, C is the chunk size and N is the number of
chunks per page. In the remaining cases, the number of chunks per page
is calculated on the fly during the iteration over the page counter
array.
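The constant cases above translate into roughly the following check
(a simplified sketch, not the sanitizer_common implementation):
```
#include <cstdint>

// Returns the constant number of chunks per page for the three cases above,
// or 0 when the count is not constant and has to be computed on the fly
// during the iteration over the page counters.
std::uint64_t constantChunksPerPage(std::uint64_t P /* page size */,
                                    std::uint64_t C /* chunk size */) {
  if (P >= C && P % C == 0)
    return P / C;                               // case 1
  if (C > P && C % P == 0)
    return 1;                                   // case 2
  if (C <= P && P % C != 0 && C % (P % C) == 0)
    return P / C + 1;                           // case 3
  return 0;                                     // not constant
}
```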
Among the remaining cases, there are still ones where N can be deduced
from the page index, but they require not much less calculation per
page than the current "brute force" way, and 2/3 of the buckets fall
into the first three categories anyway; so, for the sake of simplicity,
it was decided to stick to those two variations. It can always be
refined and improved later, should we find that the brute-force way
slows us down unacceptably.
Reviewers: eugenis, cryptoad, dvyukov
Subscribers: kubamracek, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D38245
llvm-svn: 314311
Before this change using any of the -name*= command line options with an output
directory would result in a single file (functions.txt/functions.html)
containing the coverage for those specific functions. Now you get the same
directory structure as when not using any -name*= options.
Differential Revision: https://reviews.llvm.org/D38280
llvm-svn: 314310
Summary:
The root Uri is the workspace location and will be useful in the context of
indexing. We could also add more things to InitializeParams in order to
configure Clangd for C/C++-specific extensions.
Reviewers: ilya-biryukov, bkramer, krasimir, Nebiroth
Reviewed By: ilya-biryukov
Subscribers: ilya-biryukov
Tags: #clang-tools-extra
Differential Revision: https://reviews.llvm.org/D38093
llvm-svn: 314309
This was intended to be no-functional-change, but it's not - there's a test diff.
So I thought I should stop here and post it as-is to see if this looks like what was expected
based on the discussion in PR34603:
https://bugs.llvm.org/show_bug.cgi?id=34603
Notes:
1. The test improvement occurs because the existing 'LateSimplifyCFG' marker is not carried
through the recursive calls to 'SimplifyCFG()->SimplifyCFGOpt().run()->SimplifyCFG()'.
The parameter isn't passed down, so we pick up the default value from the function signature
after the first level. I assumed that was a bug, so I've passed 'Options' down in all of the
'SimplifyCFG' calls.
2. I split 'LateSimplifyCFG' into 2 bits: ConvertSwitchToLookupTable and KeepCanonicalLoops.
This would theoretically allow us to differentiate the transforms controlled by those params
independently (a rough sketch of such an options struct follows the code excerpt below).
3. We could stash the optional AssumptionCache pointer and 'LoopHeaders' pointer in the struct too.
I just stopped here to minimize the diffs.
4. Similarly, I stopped short of messing with the pass manager layer. I have another question that
could wait for the follow-up: why is the new pass manager creating the pass with LateSimplifyCFG
set to true no matter where in the pipeline it's creating SimplifyCFG passes?
// Create an early function pass manager to cleanup the output of the
// frontend.
EarlyFPM.addPass(SimplifyCFGPass());
-->
/// \brief Construct a pass with the default thresholds
/// and switch optimizations.
SimplifyCFGPass::SimplifyCFGPass()
    : BonusInstThreshold(UserBonusInstThreshold),
      LateSimplifyCFG(true) {} <-- switches get converted to lookup tables and loops may not be in canonical form
If this is unintended, then it's possible that the current behavior of dropping the 'LateSimplifyCFG'
setting via recursion was masking this bug.
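Regarding item 2, a rough sketch of the kind of options struct this splits
things into (field names and defaults are assumptions, not the committed
definition):
```
// Hypothetical sketch; the real SimplifyCFGOptions in LLVM may differ.
struct SimplifyCFGOptions {
  int BonusInstThreshold = 1;
  bool ConvertSwitchToLookupTable = false; // one half of the old LateSimplifyCFG
  bool KeepCanonicalLoops = true;          // the other half
  // Item 3 would also stash the optional AssumptionCache and 'LoopHeaders'
  // pointers here.
};
```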
Differential Revision: https://reviews.llvm.org/D38138
llvm-svn: 314308
InlineCost can understand Select IR now. This patch finds free Select IRs and
continues the propagation of SimplifiedValues, ConstantOffsetPtrs, and
SROAArgValues.
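A source-level illustration (a sketch, assuming the conditional lowers to a
select; not taken from the patch's tests):
```
// With a constant 'cond' at the call site, the select in 'pick' folds away,
// so the inline cost model can treat it as free and keep propagating
// SimplifiedValues / ConstantOffsetPtrs / SROAArgValues through it.
static int pick(bool cond, int *a, int *b) {
  int *p = cond ? a : b; // typically becomes a select of the two pointers
  return p[1];           // constant offset from the selected pointer
}

int caller(int *a, int *b) {
  return pick(true, a, b);
}
```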
Differential Revision: https://reviews.llvm.org/D37198
llvm-svn: 314307
NFC.
Updated 8 regression tests to use -mattr instead of -mcpu flag as follows:
-mcpu=knl --> -mattr=+avx512f
-mcpu=skx --> -mattr=+avx512f,+avx512bw,+avx512vl,+avx512dq
The updates are part of the preparation for a large commit to add all instruction scheduling for the SKX target.
Reviewers: delena, zvi, RKSimon
Differential Revision: https://reviews.llvm.org/D38222
Change-Id: I2381c9b5bb75ecacfca017243c22d054f6eddd14
llvm-svn: 314306
Added a missing addrspacecast case to the alignment computation
logic for pointer type emission in IR generation.
Differential Revision: https://reviews.llvm.org/D37804
llvm-svn: 314304
Summary: Test for checking if the mapping is performed correctly. This is a test initially included in Patch https://reviews.llvm.org/D29905
Reviewers: Hahnfeld, carlo.bertolli, caomhin, ABataev
Reviewed By: Hahnfeld
Subscribers: tra, cfe-commits
Differential Revision: https://reviews.llvm.org/D38040
llvm-svn: 314303
Allow the proper recognition of Enum values and global variables inside MS
inline-asm memory / immediate expressions, as they require some additional
overhead and are treated incorrectly if not recognized early.
Supersedes D33277, D35775.
Corresponds with D37412, D37413.
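A hypothetical illustration of the kind of expression involved (assumes an
x86 target with MS inline asm enabled, e.g. -fasm-blocks; not taken from the
patch's tests):
```
// Enum value and global variable used inside an MS inline-asm memory operand.
enum { kOffset = 4 };
int gArray[4];

void load() {
  // Global variable plus enum value inside a memory expression.
  __asm mov eax, [gArray + kOffset]
}
```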
llvm-svn: 314300
This patch makes analyzeBranch eliminate unconditional branch to the next instruction.
After basic blocks are re-organized by optimizers, such as machine block placement, a BB may end with an unconditional branch to the next (fallthrough) BB. This patch removes such redundant branch instructions.
Differential Revision: https://reviews.llvm.org/D37730
llvm-svn: 314297
As commented on D37849 and rL313547, AVX1 targets were missing a chance to use vmovmskpd for v4f64/v4i64 results for bool vector bitcasts
llvm-svn: 314293
This function can now track a null pointer through simple pointer arithmetic,
such as '*&*(p + 2)' => 'p' and so on, displaying intermediate diagnostic
pieces so that the user can understand where the null pointer is coming from.
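A hypothetical example of the kind of code this helps explain (not from the
patch's tests):
```
// The dereference goes through '&*(p + 2)', and the analyzer can now walk
// the expression back to 'p', emitting notes that show why 'p' is null here.
void use(int *p) {
  if (p)
    return;
  int value = *&*(p + 2); // null-pointer dereference traced back to 'p'
  (void)value;
}
```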
Differential Revision: https://reviews.llvm.org/D37025
llvm-svn: 314290