The x86 "sitofp i64 to double" dag combine, in 32-bit mode, lowers sitofp
directly to X86ISD::FILD (or FILD_FLAG). This should not be done in soft-float mode.
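For reference, a conversion like the following (a hypothetical example, not from the patch) is what produces the "sitofp i64 to double" node in question:

  // On 32-bit x86 the i64-to-double conversion below becomes a
  // "sitofp i64 to double" node; with soft-float it must be lowered to a
  // runtime library call rather than to the x87 FILD instruction.
  double convert(long long x) { return static_cast<double>(x); }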
llvm-svn: 252042
Summary: "std::unique_ptr<int>" is not the same type as "std::unique_ptr<int, std::default_delete<int>>", unless we insert a "hasCanonicalType" in the middle. Probably it also happens in other cases related to default template argument.
Reviewers: klimek
Subscribers: alexfh, cfe-commits
Differential Revision: http://reviews.llvm.org/D14291
llvm-svn: 252041
This was breaking the modules build and is being reverted while we reach consensus on the right way to solve this layering problem. This reverts commit r251785.
llvm-svn: 252040
Summary: On Windows we have to take UTF-16-encoded environment variables and convert them to UTF-8. This patch fixes the CopyEnvironment helper function used by the process unit tests.
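A minimal sketch of the kind of conversion involved (not the actual patch; it calls the Win32 API directly and uses a hypothetical helper name):

  #include <string>
  #include <windows.h>

  // Convert one UTF-16 environment string to UTF-8 before handing it to
  // code that expects narrow strings.
  static std::string ToUTF8(const std::wstring &W) {
    if (W.empty())
      return std::string();
    int Size = WideCharToMultiByte(CP_UTF8, 0, W.data(), (int)W.size(),
                                   nullptr, 0, nullptr, nullptr);
    if (Size <= 0)
      return std::string();
    std::string Out((size_t)Size, '\0');
    WideCharToMultiByte(CP_UTF8, 0, W.data(), (int)W.size(),
                        &Out[0], Size, nullptr, nullptr);
    return Out;
  }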
Reviewers: yaron.keren
Subscribers: yaron.keren, llvm-commits
Differential Revision: http://reviews.llvm.org/D14278
llvm-svn: 252039
Intended to make later changes simpler. Exposes
`getBundleOperandsStartIndex` and `getBundleOperandsEndIndex`, and uses
them for the computation in `getNumTotalBundleOperands`.
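A sketch of the relationship these accessors are meant to capture (names taken from the description above; the exact signatures in the patch may differ):

  // Bundle operands occupy the half-open index range
  // [getBundleOperandsStartIndex(), getBundleOperandsEndIndex()) of the
  // call's operand list, so the total count is the difference of the two.
  unsigned getNumTotalBundleOperands() const {
    return getBundleOperandsEndIndex() - getBundleOperandsStartIndex();
  }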
llvm-svn: 252037
This new builtin template allows for incredibly fast instantiations of
templates like std::integer_sequence.
Performance numbers follow:
My workstation has 64 GB of RAM and 20 Xeon cores at 2.8 GHz.
__make_integer_seq<std::integer_sequence, int, 90000> takes 0.25
seconds.
std::make_integer_sequence<int, 90000> takes unbounded time; it is still
running, and Clang is consuming gigabytes of memory.
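A sketch of how a library could forward to the new builtin (using a simplified stand-in for the library type; requires a Clang that provides this builtin):

  // Minimal stand-in for std::integer_sequence, to keep the sketch self-contained.
  template <class T, T... Is> struct integer_sequence {};

  // The builtin takes the sequence template, the element type, and the length,
  // and materializes integer_sequence<T, 0, 1, ..., N-1> directly, with no
  // recursive template instantiation.
  template <class T, T N>
  using make_integer_sequence = __make_integer_seq<integer_sequence, T, N>;

  // e.g. make_integer_sequence<int, 4> is integer_sequence<int, 0, 1, 2, 3>.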
Differential Revision: http://reviews.llvm.org/D13786
llvm-svn: 252036
In my previous change to CVP (251606), I made CVP much more aggressive about trying to constant fold comparisons. This patch is a reversal in direction. Rather than being aggressive about every compare, we restore the non-block-local restriction for most, and then try hard for compares feeding returns.
The motivation for this is twofold:
* The more I thought about it, the less comfortable I got with the possible compile time impact of the other approach. There have been no reported issues, but after talking to a couple of folks, I've come to the conclusion that the time probably isn't justified.
* It turns out we need to know the context to leverage the full power of LVI. In particular, asking about something at the end of its block (the use of a compare in a return) will frequently get more precise results than something in the middle of a block. This is an implementation detail, but it's also hard to get around since mid-block queries have to reason about possibly throwing instructions and don't get to use most of LVI's block-focused infrastructure. This will become particularly important when combined with http://reviews.llvm.org/D14263.
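A hypothetical example of the pattern this targets (not from the patch): the compare feeds a return, and querying LVI at the end of that block lets it use the dominating condition:

  // On the path that reaches the first return, x > 10 is known, so the
  // compare feeding that return can be folded to true.
  bool f(int x) {
    if (x > 10)
      return x > 0;
    return false;
  }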
Differential Revision: http://reviews.llvm.org/D14271
llvm-svn: 252032
This reverts commit e59c95ca936f5a0a8abb987b8605fd8bf82b03b6.
This was a mistake on my part. The real problem was with my
environment. I was using a release interpreter to try to load
my debug extension module. I noticed this after I finally managed
to get into my extension module's init method, and then it segfaulted
with heap errors due to mismatched CRT (debug vs. release).
llvm-svn: 252030
There is no point in having invoke safepoints handled differently from
call safepoints. All relevant decisions can be made by looking at whether
or not gc.result and gc.relocate lie in the same basic block. This change
allows lowering call safepoints whose relocates and results are in different
basic blocks. See the test case for an example.
Differential Revision: http://reviews.llvm.org/D14158
llvm-svn: 252028
In Python 2, a debug extension module required an _d suffix, so
for example the extension module `_lldb` would be backed by the file
`_lldb_d.pyd` if built in debug mode, and `_lldb.pyd` if built in
release mode. In Python 3, although undocumented, this no longer seems
to be the case: even for a debug extension module, the
interpreter will only look for the `_lldb.pyd` name.
llvm-svn: 252026
By default in Python 3, check_output() returns a program's output as
an encoded byte sequence. This means it returns a Py3 `bytes` object,
which cannot be compared to a string since it's a different fundamental
type.
Although it might not be correct from a purist standpoint, from a
practical one we can assume that all output is encoded in the default
locale, in which case using universal_newlines=True will decode it
according to the current locale. Anyway, universal_newlines also
has the nice behavior of converting \r\n to \n on Windows platforms,
which makes parsing easier should we need that. So it seems
like a win-win.
llvm-svn: 252025
Summary:
The goal of this pass is to perform store-to-load forwarding across the
backedge of a loop. E.g.:
  for (i)
    A[i + 1] = A[i] + B[i]

  =>

  T = A[0]
  for (i)
    T = T + B[i]
    A[i + 1] = T
The pass relies on loop dependence analysis via LoopAccessAnalysis to
find opportunities for loop-carried dependences with a distance of one
between a store and a load. Since it's using LoopAccessAnalysis, it was
easy to also add support for versioning away may-aliasing intervening
stores that would otherwise prevent this transformation.
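A hypothetical illustration of that versioning case (not from the patch): with an intervening store through a possibly aliasing pointer, forwarding is only valid in a runtime-checked, no-alias version of the loop:

  // The A[i+1] -> A[i] forwarding candidate has distance one, but the store
  // through C may clobber A between iterations; a runtime no-alias check lets
  // the pass version the loop and forward only in the checked copy.
  void f(int *A, int *B, int *C, int N) {
    for (int i = 0; i < N; ++i) {
      A[i + 1] = A[i] + B[i];
      C[i] = 0;
    }
  }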
This optimization is also performed by Load-PRE in GVN without the
option of multi-versioning. As was discussed with Daniel Berlin in
http://reviews.llvm.org/D9548, this is inferior to a more loop-aware
solution applied here. Hopefully, we will be able to remove some
complexity from GVN/MemorySSA as a consequence.
In the long run, we may want to extend this pass (or create a new one if
there is little overlap) to also eliminate loop-independent redundant
loads and stores that *require* versioning due to may-aliasing
intervening stores/loads. I have some motivating cases for store
elimination. My plan right now is to wait for MemorySSA to come online
first rather than using memdep for this.
The main motivation for this pass is the 456.hmmer loop in SPECint2006
where after distributing the original loop and vectorizing the top part,
we are left with the critical path exposed in the bottom loop. Being
able to promote the memory dependence into a register dependence (even
though the HW does perform store-to-load forwarding as well) results in a
major gain (~20%). This gain also transfers over to x86: it's
around 8-10%.
Right now the pass is off by default and can be enabled
with -enable-loop-load-elim. On the LNT testsuite, there are two
performance changes (negative number -> improvement):
1. -28% in Polybench/linear-algebra/solvers/dynprog: the length of the
critical paths is reduced
2. +2% in Polybench/stencils/adi: Unfortunately, I couldn't reproduce this
outside of LNT
The pass is scheduled after the loop vectorizer (which is after loop
distribution). The rationale is to try to reuse LAA state, rather than
recomputing it. The order between LV and LLE is not critical because
normally LV does not touch the scalar st->ld forwarding cases where
vectorizing would prevent the CPU's st->ld forwarding from kicking in.
LoopLoadElimination requires LAA to provide the full set of dependences
(including forward dependences). LAA is known to omit loop-independent
dependences in certain situations. The big comment before
removeDependencesFromMultipleStores explains why this should not occur
for the cases that we're interested in.
Reviewers: dberlin, hfinkel
Subscribers: junbuml, dberlin, mssimpso, rengolin, sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D13259
llvm-svn: 252017
Summary: Will be used by the LoopLoadElimination pass.
Reviewers: hfinkel
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D13258
llvm-svn: 252016
Summary:
The functions use LAI and MemoryDepChecker classes so they need to be
defined after those definitions outside of the Dependence class.
Will be used by the LoopLoadElimination pass.
Reviewers: hfinkel
Subscribers: rengolin, llvm-commits
Differential Revision: http://reviews.llvm.org/D13257
llvm-svn: 252015
A profile of an LTO link of Chrome revealed that we were spending
~30-50% of execution time in the function Constant::getRelocationInfo(),
which is called from TargetLoweringObjectFile::getKindForGlobal() and in turn
from TargetMachine::getNameWithPrefix().
It turns out that we only need the result of getKindForGlobal() when
targeting Mach-O, so this change moves the relevant part of the logic to
TargetLoweringObjectFileMachO.
NFCI.
Differential Revision: http://reviews.llvm.org/D14168
llvm-svn: 252014
I am not adding a test case for this since I don't know how portable the __fp16 type is between compilers and I don't want to break the test suite.
<rdar://problem/22375079>
llvm-svn: 252012
The printed name and the parsed assembler name weren't the same.
I'm not sure which name SC prints these as, but I think it's this one.
llvm-svn: 252010
If the requested SGPR was not actually aligned, it was
accepted and rounded down instead of rejected.
Also fix an assert if the range is an invalid size.
llvm-svn: 252009
Summary:
Add support for wasm's select operator, and lower LLVM's select DAG node
to it.
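As a rough illustration (hypothetical, not from the patch), a side-effect-free value selection like the one below is the kind of pattern that can map onto a single wasm select:

  // Both arms are plain values with no side effects, so instruction selection
  // can emit wasm's select instead of a branch.
  int pick(int a, int b, bool c) { return c ? a : b; }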
Reviewers: sunfish
Subscribers: dschuff, llvm-commits, jfb
Differential Revision: http://reviews.llvm.org/D14295
llvm-svn: 252002
This does not support TPOFF32 relocations to local symbols as the address calculations are separate. Support for this will be a separate patch.
llvm-svn: 251998