We lost copy semantics in r263730 because they only worked for a few very
specific cases. Move semantics don't have this issue. Sadly the
implementation is a bit messy, but I don't know how to clean it up
without losing support for MSVC 2013 :/
llvm-svn: 263785
Summary:
Reworked the test case after a buildbot failure on Windows.
This patch adds base support for codegen of the target directive on the NVPTX device.
Reviewers: ABataev
Differential Revision: http://reviews.llvm.org/D17877
llvm-svn: 263783
On OS X, we have pthread_cond_timedwait_relative_np. TSan needs to intercept this API to avoid false positives when using condition variables.
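For reference, a minimal sketch of the usage pattern involved (illustrative user
code, not part of TSan; it assumes the usual relative-timeout signature on OS X).
The wait releases and reacquires the mutex internally, so an uninstrumented call
hides that synchronization from TSan:

  #include <pthread.h>

  static pthread_mutex_t Mu = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t Cv = PTHREAD_COND_INITIALIZER;
  static int Ready = 0;
  static int Data = 0;

  static void *producer(void *) {
    pthread_mutex_lock(&Mu);
    Data = 42;                       // published under the mutex
    Ready = 1;
    pthread_cond_signal(&Cv);
    pthread_mutex_unlock(&Mu);
    return nullptr;
  }

  static void *consumer(void *) {
    pthread_mutex_lock(&Mu);
    while (!Ready) {
      struct timespec Rel = {1, 0};  // up to 1s, relative to "now"
      pthread_cond_timedwait_relative_np(&Cv, &Mu, &Rel);
    }
    // Without the interceptor, TSan misses the reacquire inside the wait and
    // may report a false race on Data here.
    int V = Data;
    (void)V;
    pthread_mutex_unlock(&Mu);
    return nullptr;
  }

  int main() {
    pthread_t P, C;
    pthread_create(&C, nullptr, consumer, nullptr);
    pthread_create(&P, nullptr, producer, nullptr);
    pthread_join(P, nullptr);
    pthread_join(C, nullptr);
    return 0;
  }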
Differential Revision: http://reviews.llvm.org/D18184
llvm-svn: 263782
Now that the resolved path cache stores the StringRefs, it's
best to just always cache the results, even when realpath isn't
used. This way we'll still avoid the StringMap hashing and lookup.
This also conveniently reorganises this code in a way I need for
a future patch.
llvm-svn: 263777
ResolvedPaths was storing std::strings as a cache. We would then take those strings and look them up in the internString pool to get a unique StringRef for each path.
This patch changes ResolvedPaths to store the StringRef pointing into the internString pool itself. This way, when getResolvedPath returns a string, we know we have the StringRef we would find in the pool anyway. We avoid the duplicate memory of the std::strings, and also the time spent on the lookup.
Unfortunately my profiles show no runtime change here, but it should still save memory allocations which is nice.
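As a rough illustration of the change (hypothetical names, using StringMap as a
stand-in for the intern pool; this is not the actual dsymutil code):

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/ADT/StringMap.h"
  #include "llvm/ADT/StringRef.h"

  // Cache the interned StringRef directly instead of a std::string that would
  // have to be looked up in the pool again later.
  class ResolvedPathCache {
    llvm::StringMap<char> InternPool;                   // owns the bytes
    llvm::DenseMap<unsigned, llvm::StringRef> Resolved; // FileIndex -> interned path

  public:
    llvm::StringRef intern(llvm::StringRef Path) {
      return InternPool.insert({Path, 0}).first->getKey();
    }

    llvm::StringRef getResolvedPath(unsigned FileIndex, llvm::StringRef RawPath) {
      auto It = Resolved.find(FileIndex);
      if (It != Resolved.end())
        return It->second;
      llvm::StringRef Interned = intern(RawPath);
      Resolved[FileIndex] = Interned;
      return Interned;
    }
  };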
Reviewed by Frederic Riss.
Differential Revision: http://reviews.llvm.org/D18259
llvm-svn: 263774
Summary:
It can hurt performance to prefetch ahead too much. Be conservative for
now and don't prefetch ahead more than 3 iterations on Cyclone.
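A self-contained sketch of the heuristic this tunes (names and structure are
illustrative, not the actual pass/TTI code): the pass prefetches for the
iteration that is roughly PrefetchDistance / LoopSize iterations ahead, and
skips a loop if that is further ahead than the target allows.

  #include <cstdio>

  constexpr unsigned MaxPrefetchItersAhead = 3; // conservative cap for Cyclone

  // Return true if it is worth inserting software prefetches for this loop.
  bool shouldPrefetchLoop(unsigned PrefetchDistance, unsigned LoopSize) {
    if (LoopSize == 0)
      return false;
    unsigned ItersAhead = PrefetchDistance / LoopSize;
    return ItersAhead <= MaxPrefetchItersAhead;
  }

  int main() {
    // A small loop body would put the prefetch too many iterations ahead.
    std::printf("%d %d\n", shouldPrefetchLoop(280, 100), shouldPrefetchLoop(280, 10));
    return 0;
  }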
Reviewers: hfinkel
Subscribers: llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D17949
llvm-svn: 263772
Summary:
And use this TTI for Cyclone. As it was explained in the original RFC
(http://thread.gmane.org/gmane.comp.compilers.llvm.devel/92758), the HW
prefetcher works for strides of up to 2KB.
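Conceptually, the new hook lets the pass skip accesses the hardware prefetcher
already handles. A hypothetical, self-contained sketch of that check (threshold
and names are illustrative only):

  #include <cstdio>

  constexpr unsigned HWPrefetcherMaxStride = 2048; // per the RFC: up to 2KB

  // Software prefetching only pays off for strides the HW prefetcher cannot
  // follow, i.e. larger than the threshold above.
  bool wantsSoftwarePrefetch(unsigned StrideBytes) {
    return StrideBytes > HWPrefetcherMaxStride;
  }

  int main() {
    // A 4KB stride gets software prefetches; a 64-byte stride is left to HW.
    std::printf("%d %d\n", wantsSoftwarePrefetch(4096), wantsSoftwarePrefetch(64));
    return 0;
  }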
I am also adding tests for this and the previous change (D17943):
* Cyclone prefetching accesses with a large stride
* Cyclone not prefetching accesses with a small stride
* Generic AArch64 subtarget not prefetching either
Reviewers: hfinkel
Subscribers: aemerson, rengolin, llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D17945
llvm-svn: 263771
Summary:
This wires up the pass for Cyclone but keeps it off for now because we
need a few more TTIs.
The getPrefetchMinStride value is not very well tuned right now, but it
works well with CFP2006/433.milc, which motivated this.
Tests will be added as part of the upcoming large-stride prefetching
patch.
Reviewers: t.p.northover
Subscribers: llvm-commits, aemerson, hfinkel, rengolin
Differential Revision: http://reviews.llvm.org/D17943
llvm-svn: 263770
Summary: LLVM_PREFIX could be undefined if CMAKE_INSTALL_PREFIX is set to an empty value.
Reviewers: kparzysz, bkramer, chandlerc
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D17784
llvm-svn: 263766
A virtual index of -1u indicates that the subprogram's virtual index is
unrepresentable (for example, when using the relative vtable ABI), so do
not emit a DW_AT_vtable_elem_location attribute for it.
Differential Revision: http://reviews.llvm.org/D18236
llvm-svn: 263765
MSVC as usual:
C:\Buildbot\Slave\llvm-clang-lld-x86_64-scei-ps4-windows10pro-fast\llvm.src\include\llvm/ADT/STLExtras.h(120):
error C2100: illegal indirection
C:\Buildbot\Slave\llvm-clang-lld-x86_64-scei-ps4-windows10pro-fast\llvm.src\include\llvm/IR/Instructions.h(3966):
note: see reference to class template instantiation
'llvm::mapped_iterator<llvm::User::op_iterator,llvm::CatchSwitchInst::DerefFnTy>'
being compiled
This reverts commit e091dd63f1f34e043748e28ad160d3bc17731168.
llvm-svn: 263760
For fcmp, the major concern for the following 6 cases is the NaN result. The
comparison result consists of 4 bits, indicating lt, eq, gt and un (unordered),
only one of which will be set. The result is generated by the fcmpu
instruction. However, the bc instruction only inspects one of the first 3
bits, so when un is set, the bc instruction may jump to an undesired
place.
More specifically, if we expect an unordered comparison and un is set, we
expect to always go to the true branch; in such a case UEQ, UGT and ULT still
give false, which is undesired; but UNE, UGE and ULE happen to give true,
since they are tested by inspecting !eq, !lt and !gt, respectively.
Similarly, for an ordered comparison, when un is set, we always expect the
result to be false. In such a case OGT, OLT and OEQ are fine, since they
actually test gt, lt and eq respectively, which are false; but OGE, OLE
and ONE are tested through !lt, !gt and !eq, and those come out true.
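For reference, the ordered/unordered split on a NaN operand in plain C++ (this
only illustrates the predicate semantics the branches must preserve; it is not
the codegen change itself):

  #include <cmath>
  #include <cstdio>

  int main() {
    double NaN = std::nan("");
    double Y = 1.0;
    // Ordinary comparisons are "ordered": false when an operand is NaN,
    // matching OEQ/OGT/OLT (and OGE/OLE) being false.
    std::printf("NaN >  Y : %d\n", NaN > Y);  // 0
    std::printf("NaN == Y : %d\n", NaN == Y); // 0
    // operator!= is "unordered or unequal": true when an operand is NaN,
    // matching UNE (and every other U* predicate) being true when un is set.
    std::printf("NaN != Y : %d\n", NaN != Y); // 1
    return 0;
  }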
llvm-svn: 263753
Add an ExitOnError utility for tool code that follows the exit-on-error idiom.
Most LLVM tool code exits immediately when an error is encountered and prints an
error message to stderr. The ExitOnError class supports this by providing two
call operators - one for Errors, and one for Expected<T>s. Calls to code that
can return Errors (or Expected<T>s) can use these calls to bail out on error,
and otherwise continue as if the operation had succeeded. E.g.
  #include "llvm/Support/Error.h" // Error, Expected, ExitOnError
  using namespace llvm;

  Error foo();
  Expected<int> bar();

  int main(int argc, char *argv[]) {
    ExitOnError ExitOnErr;
    ExitOnErr.setBanner(std::string("Error in ") + argv[0] + ":");

    // Exit if foo returns an error. No need to manually check the error return.
    ExitOnErr(foo());

    // Exit if bar returns an error, otherwise unwrap the contained int and
    // continue.
    int X = ExitOnErr(bar());

    // ...

    return 0;
  }
llvm-svn: 263749
This reapplies r261552.
The VFS overlay mapping between virtual paths and real paths is done through
the 'external-contents' entries in YAML files, which contains hardcoded paths
to the real files.
When a module compilation crashes, headers are dumped into the <name>.cache/vfs
directory and are mapped via <name>.cache/vfs/vfs.yaml. The script generated
for reproduction uses -ivfsoverlay pointing at this file to gather the mapping
between virtual paths and the files inside <name>.cache/vfs. Currently, we are
only capable of reproducing such crashes on the same machine where they
happened, because of the hardcoded paths in 'external-contents'.
To be able to reproduce a crash on another machine, this patch introduces a new
option in the VFS YAML file called 'overlay-relative'. When it is set to
'true', it means that the path to the YAML file provided through the
-ivfsoverlay option should also be used to prefix the final path of every
'external-contents' entry.
For example, given the invocation snippet "... -ivfsoverlay
<name>.cache/vfs/vfs.yaml" and the following entry in the YAML file:
"overlay-relative": "true",
"roots": [
...
"type": "directory",
"name": "/usr/include",
"contents": [
{
"type": "file",
"name": "stdio.h",
"external-contents": "/usr/include/stdio.h"
},
...
Here, a file manager request for the virtual path "/usr/include/stdio.h" will
map to the real path "/<absolute_path_to>/<name>.cache/vfs/usr/include/stdio.h".
This is a useful feature for debugging module crashes on machines other than
the one where the error happened.
Differential Revision: http://reviews.llvm.org/D17457
rdar://problem/24499339
llvm-svn: 263748
unordered_set::emplace and unordered_map::emplace construct a node, then
try to insert it. If insertion fails, the node gets deleted.
To avoid this unnecessary malloc traffic, check to see if the argument
to emplace has the appropriate key_type. If so, we can use that key
directly and delay the malloc until we're sure we're inserting something
new.
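From a user's point of view, the effect is simply that re-emplacing an existing
key no longer allocates (a minimal sketch; the actual fast path lives in the
container internals):

  #include <string>
  #include <unordered_set>

  int main() {
    std::unordered_set<std::string> S;
    std::string Key = "hello";

    S.emplace(Key); // new key: allocates one node, as before

    // Previously this constructed a node, failed the insert, and freed the
    // node again. Since the argument already has the key_type, we can now
    // look the key up first and skip the allocation for the duplicate.
    S.emplace(Key);

    return S.size() == 1 ? 0 : 1;
  }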
Test updates by Eric Fiselier, who rewrote the old allocation tests to
include the new cases.
There are two orthogonal future directions:
1. Apply the same optimization to set and map.
2. Extend the optimization to cases where the argument is not key_type, but can
be converted to it without side effects. Ideally, we could do this
whenever key_type is trivially destructible and the argument is
trivially convertible to key_type, but in practice the relevant type
traits "blow up sometimes". At least, we should catch a few simple
cases (such as when both are primitive types).
llvm-svn: 263746
Summary:
Use the new LoopVersioning facility (D16712) to add noalias metadata in
the vector loop if we versioned with memchecks. This can enable some
optimization opportunities further down the pipeline (see the included
test or the benchmark improvement quoted in D16712).
The test also covers the bug I had in the initial version in D16712.
The vectorizer did not previously use LoopVersioning. The reason is
that the vectorizer performs its transformations in a single shot. It
creates an empty single-block vector loop that it then populates with
the widened, if-converted instructions. Thus creating an intermediate
versioned scalar loop seems wasteful.
So this patch (rather than bringing in LoopVersioning fully) adds a
special interface to LoopVersioning to allow the vectorizer to add
no-alias annotation while still performing its own versioning.
As the vectorizer propagates metadata from the instructions in the
original loop to the vector instructions, we also check the pointer in
the original instruction and see if LoopVersioning can add no-alias
metadata based on the issued memchecks.
Reviewers: hfinkel, nadav, mzolotukhin
Subscribers: mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17191
llvm-svn: 263744
Summary:
If we decide to version a loop to benefit a transformation, it makes
sense to record the now non-aliasing accesses in the newly versioned
loop. This allows non-aliasing information to be used by subsequent
passes.
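As a sketch of the kind of loop this targets (illustrative only): after
versioning with a runtime overlap check, the accesses in the "no overlap" copy
are known not to alias, and the scoped noalias metadata records exactly that
for later passes.

  // A and B may alias as far as the compiler can prove statically.
  void update(int *A, const int *B, int N) {
    for (int i = 0; i < N; ++i)
      A[i] = B[i] + 1;
  }

  int main() {
    int A[8] = {0};
    int B[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    update(A, B, 8);
    return A[0] == 2 ? 0 : 1;
  }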
One example is 456.hmmer in SPECint2006 where after loop distribution,
we vectorize one of the newly distributed loops. To vectorize we
version this loop to fully disambiguate may-aliasing accesses. If we
add the noalias markers, we can use the same information in a later DSE
pass to eliminate some dead stores which amounts to ~25% of the
instructions of this hot memory-pipeline-bound loop. The overall
performance improves by 18% on our ARM64.
The scoped noalias annotation is added in LoopVersioning. The patch
then enables this for loop distribution. A follow-on patch will enable
it for the vectorizer. Eventually this should be run by default when
versioning the loop but first I'd like to get some feedback whether my
understanding and application of scoped noalias metadata is correct.
Essentially my approach was to have a separate alias domain for each
versioning of the loop. For example, if we first version in loop
distribution and then in vectorization of the distributed loops, we have
a different set of memchecks for each versioning. By keeping the scopes
in different domains they can conveniently be defined independently
since different alias domains don't affect each other.
As written, I also have a separate domain for each loop. This is not
necessary and we could save some metadata here by using the same domain
across the different loops. I don't think it's a big deal either way.
Probably the best is to review the tests first to see if I mapped this
problem correctly to scoped noalias markers. I have plenty of comments
in the tests.
Note that the interface is prepared for the vectorizer which needs the
annotateInstWithNoAlias API. The vectorizer does not use LoopVersioning
so we need a way to pass in the versioned instructions. This is also
why the maps have to become part of the object state.
Also, currently we only have an AA-aware DSE after the vectorizer if we
also run the LTO pipeline. Depending on how widely this triggers, we may
want to schedule a DSE toward the end of the regular pass pipeline.
Reviewers: hfinkel, nadav, ashutosh.nema
Subscribers: mssimpso, aemerson, llvm-commits, mcrosier
Differential Revision: http://reviews.llvm.org/D16712
llvm-svn: 263743
I hit a crash in the bitcode reader on some corrupt input where an
MDString had somehow been attached to an instruction instead of an
MDNode. This input is pretty bogus, but we shouldn't be crashing on bad
input here.
This change adds error handling in all of the places where we
currently have unchecked casts from Metadata to MDNode, which means
we'll error out instead of crashing for that sort of input.
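A hypothetical sketch of the hardening pattern (the helper name and error
message are illustrative, not the actual BitcodeReader code):

  #include "llvm/IR/Metadata.h"
  #include "llvm/Support/Casting.h"
  #include "llvm/Support/Error.h"

  // Instead of an unchecked cast<MDNode>(MD), which crashes on malformed
  // input, check the kind and return an error the reader can propagate.
  static llvm::Expected<llvm::MDNode *> expectMDNode(llvm::Metadata *MD) {
    if (auto *N = llvm::dyn_cast_or_null<llvm::MDNode>(MD))
      return N;
    return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                   "invalid metadata: expected MDNode");
  }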
Unfortunately, I don't have tests. Hitting this requires flipping bits
in the input bitcode, and committing corrupt binary files to catch
these cases is a bit too opaque and unmaintainable.
llvm-svn: 263742
Summary: ...as that is apparently what MSVC does
Reviewers: rnk
Patch by Stephan Bergmann
Differential Revision: http://reviews.llvm.org/D15267
llvm-svn: 263738
Summary:
The gdb-remote async thread cannot modify thread state while the main thread
holds a lock on the state. Don't use locking thread iteration for bt all.
Specifically, the deadlock manifests when lldb attempts to JIT code to
symbolicate Objective-C while backtracing. As part of this code path,
SetPrivateState() is called on an async thread. This async thread will
block waiting for the thread_list lock held by the main thread in
CommandObjectIterateOverThreads. The main thread will also block on the
async thread during DoResume (although with a timeout), leading to a
deadlock. Due to the timeout, the deadlock is not immediately apparent,
but the inferior will be left in an invalid state after the bt all completes,
and Objective-C symbols will not be successfully resolved in the backtrace.
Reviewers: andrew.w.kaylor, jingham, clayborg
Subscribers: sas, lldb-commits
Differential Revision: http://reviews.llvm.org/D18075
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 263735