Summary:
Currently the CUDA plugin supports loading a single image, though an
executable may contain several images, e.g. when it has target regions
inside a dynamically loaded library. This patch allows loading multiple
images.
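A rough sketch of the idea (the type and member names below are illustrative
assumptions, not the plugin's actual code): instead of keeping one image per
device, the plugin keeps a list and appends every binary it is asked to load.
```
#include <vector>

// Hypothetical stand-ins for the plugin's internal types.
struct DeviceImage { const void *Start; const void *End; };

struct Device {
  // Before: a single DeviceImage member per device.
  // After: each load_binary call appends another image, so target regions
  // living in a dlopen'ed library can be offloaded as well.
  std::vector<DeviceImage> Images;

  void loadBinary(const DeviceImage &Img) { Images.push_back(Img); }
};
```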
Reviewers: grokos
Subscribers: guansong, openmp-commits, kkwli0
Differential Revision: https://reviews.llvm.org/D49036
llvm-svn: 336569
This patch reorganizes the loop scheduling code in order to allow hierarchical
scheduling to use it more effectively. In particular, the goal of this patch
is to separate the algorithmic parts of the scheduling from the thread
logistics code.
Moves declarations & structures to kmp_dispatch.h for easier access in
other files. Extracts the algorithmic part of __kmp_dispatch_init() and
__kmp_dispatch_next() into __kmp_dispatch_init_algorithm() and
__kmp_dispatch_next_algorithm(). The thread bookkeeping logic is still kept in
__kmp_dispatch_init() and __kmp_dispatch_next(). This is done because the
hierarchical scheduler needs to access the scheduling logic without the
bookkeeping logic. To prepare for a new pointer in dispatch_private_info_t, a
new flags variable is created that stores the ordered and nomerge flags
instead of keeping them in two separate variables. This keeps the
dispatch_private_info_t structure the same size.
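For illustration, a hedged sketch of the flags packing (the field and type
names here are invented for this sketch, not the runtime's exact definitions):
two standalone flag words are folded into one bit-field member so that the new
pointer can be added without growing the structure.
```
#include <cstdint>

// Illustrative sketch only; the real dispatch_private_info_t is in kmp_dispatch.h.
struct dispatch_flags_sketch {
  uint32_t ordered : 1; // previously its own member
  uint32_t nomerge : 1; // previously its own member
  uint32_t unused : 30;
};

struct dispatch_private_info_sketch {
  dispatch_flags_sketch flags; // replaces the two separate flag variables
  void *hier;                  // room for the new pointer, total size unchanged
};
```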
Differential Revision: https://reviews.llvm.org/D47961
llvm-svn: 336568
In generic data-sharing mode we are allowed to not globalize local
variables that escape their declaration context iff they are declared
inside the parallel region. We can do this because L2 parallel
regions are executed sequentially and, thus, we do not need to put
shared local variables in global memory.
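For illustration, a hedged example of the case in question (not taken from
this patch's tests): a local variable declared inside an inner (L2) parallel
region escapes via a pointer, but since L2 parallel regions run sequentially
on the device it does not need to be globalized.
```
// Compile with an offloading compiler, e.g. clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda.
void use(int *p);

void foo() {
#pragma omp target
#pragma omp parallel // L1 parallel region
  {
#pragma omp parallel // L2 parallel region, executed sequentially on the device
    {
      int local = 0; // escapes via use(&local) but can stay in local memory
      use(&local);
    }
  }
}
```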
llvm-svn: 336567
Summary:
This patch fixes a problem with retrieving a function symbol by an address in a nested block. In the current implementation of the ResolveSymbolContext function, it retrieves a symbol with PDB_SymType::None and then checks whether the found symbol's tag equals PDB_SymType::Function. So, if a nested block's symbol is found, ResolveSymbolContext does not resolve a function.
It is very simple to reproduce this. For example, in the next program
```
int main() {
  auto r = 0;
  for (auto i = 1; i <= 10; i++) {
    r += i & 1 + (i - 1) & 1 - 1;
  }
  return r;
}
```
if we stop inside the loop and do a backtrace, the top element will be broken. But how can we test this? I thought about adding an option to lldb-test to allow searching for a function by address, but the address may change when the compiler changes.
Patch by: Aleksandr Urakov
Reviewers: asmith, labath, zturner
Reviewed By: asmith, labath
Subscribers: stella.stamenova, llvm-commits
Differential Revision: https://reviews.llvm.org/D47939
llvm-svn: 336564
These are preliminary changes that attempt to use C++11 Atomics in the runtime.
We are expecting better portability with this change across architectures/OSes.
Here is the summary of the changes.
Most variables that need synchronization operations were converted to generic
atomic variables (std::atomic<T>). Variables that are updated with a combined CAS
are packed into a single atomic variable, and partial reads/writes are done
through unpacking/packing.
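As a hedged illustration of the packing idea (the names below are invented for
this sketch, not taken from the runtime): two 32-bit counters that must be
updated together with one CAS are stored in a single 64-bit std::atomic, and
callers pack/unpack on access.
```
#include <atomic>
#include <cstdint>

// Sketch of packing two 32-bit fields into one atomic word.
struct PackedPair {
  std::atomic<uint64_t> Word{0};

  static uint64_t pack(uint32_t Lo, uint32_t Hi) {
    return (uint64_t)Hi << 32 | Lo;
  }

  // Update both halves with a single compare-and-swap.
  void bumpBoth() {
    uint64_t Old = Word.load(std::memory_order_relaxed);
    uint64_t Desired;
    do {
      uint32_t Lo = (uint32_t)Old;
      uint32_t Hi = (uint32_t)(Old >> 32);
      Desired = pack(Lo + 1, Hi + 1);
    } while (!Word.compare_exchange_weak(Old, Desired, std::memory_order_acq_rel,
                                         std::memory_order_relaxed));
  }
};
```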
Patch by Hansang Bae
Differential Revision: https://reviews.llvm.org/D47903
llvm-svn: 336563
As discussed in D49047 / D48987, shift-by-undef produces poison,
so we can't use undef vector elements in that case.
Note that we need to extend this for poison-generating flags,
and there's a proposal to create poison from FMF in D47963.
llvm-svn: 336562
When implementing the DWARF accelerator tables in dsymutil I ran into an
assertion in the assembler. Debugging these kind of issues is a lot
easier when looking at the assembly instead of debugging the assembler
itself. Since it's only a matter of creating an AsmStreamer instead of a
MCObjectStreamer it made sense to turn this into a (hidden) dsymutil
feature.
Differential revision: https://reviews.llvm.org/D49079
llvm-svn: 336561
Summary:
To allow bitcode built by an old compiler to pass the current verifier,
BitcodeReader needs to auto-infer the correct runtime preemption from
linkage and visibility for GlobalValues.
Since llvm-6.0 bitcode already contains the new field, but it can be
incorrect in some cases, the attribute needs to be recomputed every
time in BitcodeReader. This makes all GVs read from bitcode have
dso_local marked correctly, and it should still allow the verifier
to catch mistakes in optimization passes.
This should fix PR38009.
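A hedged sketch of the inference rule, using simplified standalone types
rather than the real LLVM classes: a GlobalValue with local linkage or
non-default visibility cannot be preempted at runtime, so the reader can mark
it dso_local regardless of what the serialized field says.
```
// Simplified stand-ins for llvm::GlobalValue properties; not the BitcodeReader code.
enum class Linkage { External, ExternalWeak, Internal, Private };
enum class Visibility { Default, Hidden, Protected };

struct GVInfo {
  Linkage Link;
  Visibility Vis;
  bool DSOLocal = false;
};

// Recompute runtime preemption instead of trusting the serialized bit.
void inferDSOLocal(GVInfo &GV) {
  bool LocalLinkage = GV.Link == Linkage::Internal || GV.Link == Linkage::Private;
  if (LocalLinkage || GV.Vis != Visibility::Default)
    GV.DSOLocal = true;
}
```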
Reviewers: sfertile, vsk
Reviewed By: vsk
Subscribers: dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D49039
llvm-svn: 336560
This patch adds the target call back relaxTlsLdToLe to support TLS relaxation
from local dynamic to local exec model.
Differential Revision: https://reviews.llvm.org/D48293
llvm-svn: 336559
This is almost NFC, but there could be some cases where the original
code had undefs in the constants (rather than just the shuffle mask),
and we'll use safe constants rather than undefs now.
The FIXME noted in foldShuffledBinop() is already visible in existing
tests, so correcting that is the next step.
llvm-svn: 336558
These patterns mapped (v2f64 (X86vzmovl (v2f64 (scalar_to_vector FR64:$src)))) to a MOVSD and a zeroing XOR. But the complexity of the pattern for (v2f64 (X86vzmovl (v2f64))) that selects MOVQ is artificially high and hides this MOVSD pattern.
Weirder still, the SSE version of the pattern was explicitly blocked on SSE41, yet we had copied it to AVX and AVX512.
llvm-svn: 336556
This patch introduces a VPValue in VPBlockBase to represent the condition
bit that is used as the successor selector when a block has multiple successors.
This information wasn't necessary until now, but it is needed as we are about
to introduce outer loop vectorization support in VPlan code gen.
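A hedged, much-simplified sketch of the idea (the class names approximate the
VPlan classes, but this is not the patch's code): a block records a VPValue
that selects among its successors during code generation.
```
#include <vector>

// Illustrative stand-ins; the real classes are VPValue and VPBlockBase in VPlan.h.
class VPValueSketch {};

class VPBlockBaseSketch {
  std::vector<VPBlockBaseSketch *> Successors;
  // Condition bit used to pick a successor when there is more than one;
  // null for blocks with a single (unconditional) successor.
  VPValueSketch *CondBit = nullptr;

public:
  void setCondBit(VPValueSketch *V) { CondBit = V; }
  VPValueSketch *getCondBit() const { return CondBit; }
  void appendSuccessor(VPBlockBaseSketch *B) { Successors.push_back(B); }
};
```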
Reviewers: fhahn, rengolin, mkuper, hfinkel, mssimpso
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D48814
llvm-svn: 336554
Summary: This is not enabled in the global-symbol-builder or dynamic index yet.
Reviewers: sammccall
Reviewed By: sammccall
Subscribers: ilya-biryukov, MaskRay, jkorous, cfe-commits
Differential Revision: https://reviews.llvm.org/D49028
llvm-svn: 336553
This patch adds support for the following instructions:
- CNTB, CNTH, CNTW, CNTD - determine the number of active elements implied by
  the named predicate constant, multiplied by an immediate, e.g.
    cnth x0, vl8, #16
- CNTP - count active predicate elements, e.g.
    cntp x0, p0, p1.b
  counts the number of active elements in p1, predicated by p0, and stores
  the result in x0.
llvm-svn: 336552
Summary:
The library has graduated from clangd to llvm/Support.
This is a mechanical change to move to the new API and remove the old one.
Main API changes:
- namespace clang::clangd::json --> llvm::json
- json::Expr --> json::Value
- Expr::asString() etc --> Value::getAsString() etc
- unsigned longs need a cast (due to r336541 adding lossless integer support); see the sketch below
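A minimal usage sketch of the renamed API, assuming the llvm/Support/JSON.h
interface from r336541 (the specific accessors shown here are for illustration):
```
#include "llvm/Support/JSON.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

void demo() {
  // was clang::clangd::json::Expr; now llvm::json::Value
  json::Value V = json::Object{{"name", "clangd"},
                               {"port", int64_t(8080)}}; // unsigned longs need a cast
  if (const json::Object *O = V.getAsObject())
    if (const json::Value *Name = O->get("name"))
      if (auto S = Name->getAsString()) // was Expr::asString()
        outs() << *S << "\n";
}
```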
Reviewers: ilya-biryukov
Subscribers: mgorny, ioeric, MaskRay, jkorous, omtcyfz, cfe-commits
Differential Revision: https://reviews.llvm.org/D49077
llvm-svn: 336549
This patch completes support for shifts, which include:
- LSL - Logical Shift Left
- LSLR - Logical Shift Left, Reversed form
- LSR - Logical Shift Right
- LSRR - Logical Shift Right, Reversed form
- ASR - Arithmetic Shift Right
- ASRR - Arithmetic Shift Right, Reversed form
- ASRD - Arithmetic Shift Right for Divide
In the following variants:
- Predicated shift by immediate - ASR, LSL, LSR, ASRD
e.g.
asr z0.h, p0/m, z0.h, #1
(active lanes of z0 shifted by #1)
- Unpredicated shift by immediate - ASR, LSL*, LSR*
e.g.
asr z0.h, z1.h, #1
(all lanes of z1 shifted by #1, stored in z0)
- Predicated shift by vector - ASR, LSL*, LSR*
e.g.
asr z0.h, p0/m, z0.h, z1.h
(active lanes of z0 shifted by z1, stored in z0)
- Predicated shift by vector, reversed form - ASRR, LSLR, LSRR
e.g.
lslr z0.h, p0/m, z0.h, z1.h
(active lanes of z1 shifted by z0, stored in z0)
- Predicated shift left/right by wide vector - ASR, LSL, LSR
e.g.
lsl z0.h, p0/m, z0.h, z1.d
(active lanes of z0 shifted by wide elements of vector z1)
- Unpredicated shift left/right by wide vector - ASR, LSL, LSR
e.g.
lsl z0.h, z1.h, z2.d
(all lanes of z1 shifted by wide elements of z2, stored in z0)
*Variants added in previous patches.
llvm-svn: 336547
As noted in D48987, there are many different ways for this transform to go wrong.
In particular, the poison potential for shifts means we have to be more careful with those ops.
I added tests to make that behavior visible for all of the different cases that I could find.
This is a partial fix. To make this review easier, I did not make changes for the single binop
pattern (handled in foldSelectShuffleWith1Binop()). I also left out some potential optimizations
noted with TODO comments. I'll follow up once we're confident that things are correct here.
The goal is to correct all marked FIXME tests to either avoid the shuffle transform or do it safely.
Note that distinguishing when the shuffle mask contains undefs and using getBinOpIdentity() allows
for some improvements to div/rem patterns, so there are wins along with the missed opportunities
and fixes.
Differential Revision: https://reviews.llvm.org/D49047
llvm-svn: 336546
Support for SVE's TBL instruction for programmable table
lookup/permute using vector of element indices, e.g.
tbl z0.d, { z1.d }, z2.d
stores elements from z1, indexed by elements from z2, into z0.
llvm-svn: 336544
This is a short-term fix for PR38093.
For now, we call llvm::report_fatal_error if the instruction builder finds an
unsupported instruction in the instruction stream.
We need to revisit this fix once we start addressing PR38101.
Essentially, we need a better framework for error handling.
llvm-svn: 336543
Summary:
This patch adds a new "integer" ValueType, and renames Number -> Double.
This allows us to preserve the full precision of int64_t when parsing integers
from the wire, or constructing from an integer.
The API is unchanged, other than giving asInteger() a clearer contract.
In addition, always output doubles with enough precision that parsing will
reconstruct the same double.
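As an illustration of the round-trip guarantee (a generic sketch, not the
library's printer code): writing a double with max_digits10 significant digits
ensures that parsing the string reproduces the identical value.
```
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <limits>

int main() {
  double D = 0.1; // not exactly representable in binary64
  char Buf[64];
  // 17 significant digits (max_digits10) round-trips IEEE-754 doubles exactly.
  std::snprintf(Buf, sizeof(Buf), "%.*g",
                std::numeric_limits<double>::max_digits10, D);
  assert(std::strtod(Buf, nullptr) == D);
  return 0;
}
```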
Reviewers: simon_tatham
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D46209
llvm-svn: 336541
Summary:
To avoid wasting time deserializing them on code completion and
further reparses.
We do not use the comments anyway, because we cannot rely on the file
contents staying the same for reparses that reuse the prebuilt
preamble PCH.
Reviewers: sammccall
Reviewed By: sammccall
Subscribers: ioeric, MaskRay, jkorous, cfe-commits
Differential Revision: https://reviews.llvm.org/D48943
llvm-svn: 336540
Summary:
Will be used in clangd, see the follow-up change.
Clangd does not use comments read from PCH to avoid crashes due to
changed contents of the file. However, reading them considerably slows
down code completion on files with large preambles.
Reviewers: sammccall
Reviewed By: sammccall
Subscribers: ioeric, cfe-commits
Differential Revision: https://reviews.llvm.org/D48942
llvm-svn: 336539
Summary:
To avoid the extra work of processing headers in the preamble
multiple times in parallel.
Reviewers: sammccall
Reviewed By: sammccall
Subscribers: javed.absar, ioeric, MaskRay, jkorous, cfe-commits
Differential Revision: https://reviews.llvm.org/D48940
llvm-svn: 336538
This fixes bugs introduced in r335553 with the non-trivial unswitching of switches.
The code correctly updated most aspects of the CFG and analyses, but
missed some crucial aspects:
1) When multiple cases have the same successor, we unswitch that
a single time and replace the switch with a direct branch. The CFG
here is correct, but the target of this direct branch may have had
a PHI node with multiple entries in it.
2) When we still have to clone a successor of the switch into an
unswitched copy of the loop, we'll delete potentially multiple edges
entering this successor, not just one.
3) We also have to delete multiple edges entering the successors in the
original loop when they have to be retained.
4) When the "retained successor" *also* occurs as a case successor, we
just assert failed everywhere. This doesn't happen very easily
because it's always valid to simply drop the case -- the retained
successor for switches is always the default successor. However, it
is likely possible through some contrivance of different loop passes,
unrolling, and simplifying for this to occur in practice and
certainly there is nothing "invalid" about the IR so this pass needs
to handle it.
5) In the case of #4, we also will replace these multiple edges with
a direct branch much like in #1 and need to collapse the entries in
any PHI nodes to a single entry.
All of this stems from the delightful fact that the same successor can
show up in multiple parts of the switch terminator, and each of these
are considered a distinct edge for the purpose of PHI nodes (and
iterating the successors and predecessors) but not for unswitching
itself, the dominator tree, or many other things. For the record,
I intensely dislike this "feature" of the IR in large part because of
the complexity it causes in passes like this. We already have a ton of
logic building sets and handling duplicates, and we just had to add
a bunch more.
I've added a complex test case that covers all five of the above failure
modes. I've also added a variation on it where #4 and #5 occur in loop
exit, adding fun where we have an LCSSA PHI node with "multiple entries"
despite having dedicated exits. There were no additional issues found by
this, but it seems a useful corner case to cover with testing.
One thing that working on all of this code has made painfully clear for
me as well is how amazingly inefficient our PHI node representation is
(in terms of the in-memory data structures and the APIs used to update
them). This code has truly marvelous complexity bounds because every
time we remove an entry from a PHI node we do a linear scan to find it
and then a linear update to the data structure to remove it. We could in
theory batch all of the PHI node updates into a single linear walk of
the operands making this much more efficient, but the APIs fight hard
against this and the fact that we have to handle duplicates in the
peculiar manner we do (removing all but one in some cases) makes even
implementing that very tedious and annoying. Anyways, none of this is
new here or specific to loop unswitching. All code in LLVM that updates
PHI node operands suffers from these problems.
llvm-svn: 336536
Summary:
This consists of four main parts:
- a type json::Expr representing JSON values of dynamic kind, which can be
composed, inspected, and modified
- a JSON parser from string -> json::Expr
- a JSON printer from json::Expr -> string, with optional pretty-printing
- a convention for mapping json::Expr <=> native types (fromJSON/toJSON)
Mapping functions are provided for primitives (e.g. int, vector) and the
ObjectMapper helper helps implement fromJSON for struct/object types.
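A hedged sketch of the fromJSON convention using the ObjectMapper helper (the
exact signatures at this revision may differ slightly; the struct and field
names are illustrative):
```
#include "llvm/Support/JSON.h"
using namespace llvm;

struct Position {
  int Line = 0;
  int Character = 0;
};

// fromJSON convention: return false if the value doesn't have the expected
// shape, otherwise fill in the native struct.
bool fromJSON(const json::Expr &E, Position &Out) {
  json::ObjectMapper O(E);
  return O && O.map("line", Out.Line) && O.map("character", Out.Character);
}
```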
Based on clangd's usage, a couple of places I'd appreciate review attention:
- fromJSON returns only bool. A richer error-signaling mechanism may be useful
to provide useful messages, or let recursive fromJSONs (containers/structs)
do careful error recovery.
- should json::obj be always explicitly written (like json::ary)
- there's no streaming parse API. I suspect there are some simple wins like
a callback API where the document is a long array, and each element is small.
But this can probably be bolted on easily when we see the need.
Reviewers: bkramer, labath
Subscribers: mgorny, ilya-biryukov, ioeric, MaskRay, llvm-commits
Differential Revision: https://reviews.llvm.org/D45753
llvm-svn: 336534
This patch adds support for:
UZP1 Concatenate even elements from two vectors
UZP2 Concatenate odd elements from two vectors
TRN1 Interleave even elements from two vectors
TRN2 Interleave odd elements from two vectors
With variants for both data and predicate vectors, e.g.
uzp1 z0.b, z1.b, z2.b
trn2 p0.s, p1.s, p2.s
llvm-svn: 336531
When emitting the DWARF accelerator tables from dsymutil, we don't have
a DwarfDebug instance and we use a custom class to represent Dwarf
compile units. This patch adds an interface AccelTableWriterInfo to
abstract these from the Dwarf5AccelTableWriter, so we can have a custom
implementation for this in dsymutil.
Differential revision: https://reviews.llvm.org/D49031
llvm-svn: 336529
Summary:
PrecompiledPreamble did not previously check whether system dependencies had
changed. This resulted in an invalid preamble not being rebuilt if the headers
that changed were found in -isystem include paths.
This pattern is sometimes used to avoid showing warnings in third
party code, so we want to correctly handle those cases.
Tested in clangd, see the follow-up patch.
Reviewers: sammccall, ioeric
Reviewed By: sammccall
Subscribers: omtcyfz, cfe-commits
Differential Revision: https://reviews.llvm.org/D48946
llvm-svn: 336528
Summary: The end source location was not imported for constructors created via an overload that does not take it. Fixes the test from D47698 / rC336269.
Reviewers: martong, a.sidorin, balazske, xazax.hun, a_sidorin
Reviewed By: martong, a_sidorin
Subscribers: a_sidorin, rnkovacs, cfe-commits
Differential Revision: https://reviews.llvm.org/D48941
llvm-svn: 336523
Summary:
PGOMemOPSize only modifies the CFG in a couple of places; thus we can preserve the DominatorTree with little effort.
When optimizing SQLite with -O3, this patch decreases the number of nodes traversed by DFS by 3.8% and the number of times DominatorTreeBase::recalculate is called by 5.7%.
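As a hedged sketch (not the patch itself; the pass and helper names below are
placeholders), this is how a new-PM function pass can report that it kept the
DominatorTree valid so the pass manager does not recalculate it:
```
#include "llvm/IR/Dominators.h"
#include "llvm/IR/PassManager.h"
using namespace llvm;

// Hypothetical helper standing in for the real transformation; it would keep
// DT updated for the few CFG edits it makes and return true if it changed IR.
static bool optimizeMemOpSizes(Function &, DominatorTree &) { return false; }

struct MemOPSizeOptSketch : PassInfoMixin<MemOPSizeOptSketch> {
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM) {
    DominatorTree &DT = AM.getResult<DominatorTreeAnalysis>(F);
    if (!optimizeMemOpSizes(F, DT))
      return PreservedAnalyses::all();
    PreservedAnalyses PA;
    PA.preserve<DominatorTreeAnalysis>(); // DT stays valid, no recalculation needed
    return PA;
  }
};
```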
Reviewers: kuhar, davide, dmgreen
Reviewed By: dmgreen
Subscribers: mzolotukhin, vsk, llvm-commits
Differential Revision: https://reviews.llvm.org/D48914
llvm-svn: 336522
Reapply D47195:
Currently BreakBeforeParameter is set to true every time a message receiver spans multiple lines, e.g.:
```
[[object block:^{
return 42;
}] aa:42 bb:42];
```
will be formatted:
```
[[object block:^{
return 42;
}] aa:42
bb:42];
```
even though arguments could fit into one line. This change fixes this behavior.
llvm-svn: 336521
Reduce penalty for aligning ObjC method arguments using the colon alignment as
this is the canonical way.
Trying to fit a whole expression into one line should not force other line
breaks (e.g. when an ObjC method expression is part of another expression).
llvm-svn: 336520