This reapplies r347767 (originally reviewed at: https://reviews.llvm.org/D55000)
with a fix for the missing std::move of the Error returned by the call to
Pipeline::runCycle().
Below is the original commit message from r347767.
If a user only cares about the overall latency, then the quickest way to
obtain it is to have method Pipeline::run() return the total number of cycles
to the caller.
When the simulation pipeline is run, method Pipeline::run() returns the number
of cycles (or an error).
The advantage is that no hardware event listener is needed to compute that
latency, so the whole process should be faster (and simpler, at least for that
particular use case).
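As an illustration, this is roughly what a caller looks like under the new
interface (the include path and namespace are assumptions; the error
propagation is the standard llvm::Expected idiom):
```
#include "llvm/MCA/Pipeline.h"
#include "llvm/Support/Error.h"

// Run the simulation and hand back the total latency in cycles; no
// hardware event listener is involved.
llvm::Expected<unsigned> getTotalCycles(llvm::mca::Pipeline &P) {
  llvm::Expected<unsigned> Cycles = P.run();
  if (!Cycles)
    return Cycles.takeError(); // propagate the simulation error
  return *Cycles;              // total number of simulated cycles
}
```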
llvm-svn: 347795
Fixed emission of target regions found in virtual functions.
Previously, we could end up in a situation where those regions were
skipped.
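A minimal hypothetical reproducer of the affected pattern (not taken from the
patch): a target region inside a virtual member function, whose device-side
emission could previously be skipped:
```
struct Base {
  virtual void work(int *A, int N) {
#pragma omp target map(tofrom : A[0:N])
    for (int I = 0; I < N; ++I)
      A[I] += 1; // offloaded work that must be emitted for the device
  }
};
```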
llvm-svn: 347793
Summary:
Starting in Clang 8.0 and GCC 8.0, `alignof` and `__alignof` return different values in some cases. Specifically, `alignof` and `_Alignof` return the minimum alignment for a type, whereas `__alignof` returns the preferred alignment. libc++ currently uses `__alignof` but means to use `alignof`. See llvm.org/PR39713.
This patch introduces the macro `_LIBCPP_ALIGNOF` so we can control which spelling gets used.
This patch does not introduce any ABI guard to provide the old behavior with newer compilers. However, if we decide that is needed, this patch makes it trivial to implement.
I think we should commit this change immediately, and decide what we want to do about the ABI afterwards.
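For illustration, the divergence and the macro look roughly like this (the
double-on-i386 numbers are the well-known case from llvm.org/PR39713; the
actual definition lives in libc++'s __config):
```
// With Clang 8 / GCC 8 targeting 32-bit x86:
//   alignof(double)   == 4  // minimum (ABI-required) alignment
//   __alignof(double) == 8  // preferred alignment
// The macro pins libc++ to the spelling it actually means:
#define _LIBCPP_ALIGNOF(_Tp) alignof(_Tp)
```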
Reviewers: ldionne, EricWF
Reviewed By: EricWF
Subscribers: christof, libcxx-commits
Differential Revision: https://reviews.llvm.org/D54814
llvm-svn: 347787
Unlike most cost model functions, this code makes a lot of table lookups without using the results from getTypeLegalizationCost. This means 512-bit vectors can be looked up even when the type isn't legal.
This patch adds a check around the two tables that contain 512-bit types to make sure that neither of the types would be split by type legalization, which would mean the 512-bit types are illegal. I wanted to write this in a somewhat generic way that uses type legalization query hooks, but if preferred, I can switch to just using is512BitVector and the subtarget feature.
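A sketch of the guard written against the legalization query hooks (the exact
condition in the committed patch may differ):
```
#include "llvm/CodeGen/TargetLowering.h"
using namespace llvm;

// True when a 512-bit vector type would survive type legalization
// without being split, i.e. when consulting the AVX-512 tables is valid.
static bool is512BitUsable(const TargetLowering &TLI, LLVMContext &Ctx,
                           const DataLayout &DL, Type *Ty) {
  EVT VT = TLI.getValueType(DL, Ty);
  return VT.isSimple() &&
         TLI.getTypeAction(Ctx, VT) != TargetLowering::TypeSplitVector;
}
```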
Differential Revision: https://reviews.llvm.org/D54984
llvm-svn: 347786
This fixes some of the scalarization costs reported for sext/zext using avx512bw. It does not fix all of the scalarization costs being reported, just the worst ones.
I've restricted this only to combinations of types that are legal with avx512bw like v32i1/v64i1/v32i16/v64i8 and conversions between vXi1 and vXi8/vXi16 with legal vXi8/vXi16 result types.
Differential Revision: https://reviews.llvm.org/D54979
llvm-svn: 347785
Expansion of SIGN_EXTEND_INREG can create a VSRAI instruction. If there is already a VSRAI after it, we should combine them into a larger VSRAI.
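The shift-amount arithmetic behind the fold, as a standalone sketch: two
stacked arithmetic right shifts are equivalent to a single shift by the
clamped sum of the amounts.
```
#include <algorithm>

// (X >>a C1) >>a C2  ==  X >>a min(C1 + C2, Bits - 1); an arithmetic
// shift by Bits - 1 or more already yields all copies of the sign bit.
unsigned combinedSraAmount(unsigned C1, unsigned C2, unsigned Bits) {
  return std::min(C1 + C2, Bits - 1);
}
```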
Differential Revision: https://reviews.llvm.org/D54959
llvm-svn: 347784
Summary:
std::dynarray had been proposed for C++14, but it was pulled from C++14
and there are no plans to standardize it anymore.
Reviewers: mclow.lists, EricWF
Subscribers: mgorny, christof, jkorous, dexonsmith, arphaman, libcxx-commits
Differential Revision: https://reviews.llvm.org/D54801
llvm-svn: 347783
In PR39807 we incorrectly handle circumstances where calls are commoned
from conditional blocks into the parent BB. Calls that can be inlined
must always have DebugLocs; however, we strip them during commoning, which
the IR verifier asserts on.
Fix this by using applyMergedLocation: it will perform the same DebugLoc
stripping of conditional Locs, but will also generate an unknown location
DebugLoc that satisfies the requirement for inlinable calls to always have
locations.
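In instruction form, the fix amounts to the following sketch, assuming CI is
the surviving commoned call and Other is the call it replaced:
```
#include "llvm/IR/Instructions.h"
using namespace llvm;

// applyMergedLocation keeps a common location when the two agree and
// otherwise produces a line-0 ("unknown") location, so the call is never
// left without a DebugLoc.
static void mergeCommonedCallLocation(CallInst *CI, CallInst *Other) {
  CI->applyMergedLocation(CI->getDebugLoc(), Other->getDebugLoc());
}
```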
Some of the prior logic for selecting a DebugLoc is now likely redundant;
I'll generate a follow-up to remove it (involves editing more regression
tests).
Differential Revision: https://reviews.llvm.org/D54997
llvm-svn: 347782
This patch also makes getPltEntryOffset a non-member function, because
it doesn't depend on any private members of the TargetInfo class.
I tried a few different ideas, and this change seems to fit best.
Differential Revision: https://reviews.llvm.org/D54981
llvm-svn: 347781
Summary: Left only the constructors that are actually required, and marked the move constructors as deleted. They are no longer used, and we were never sure they actually worked correctly.
Reviewers: george.karpenkov, NoQ
Reviewed By: george.karpenkov
Subscribers: xazax.hun, baloghadamsoftware, szepet, a.sidorin, Szelethus, donat.nagy, dkrupp
Differential Revision: https://reviews.llvm.org/D54974
llvm-svn: 347777
The previous version of this patch caused failures because it failed to
correctly handle cases where we hoist a phi, then hoist a use of that phi,
then have to rehoist that use. We need to make sure that we rehoist the use
to _after_ the hoisted phi, which we do by always rehoisting to the immediate
dominator instead of just rehoisting everything to the original preheader.
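A heavily simplified sketch of the placement rule (assuming DT is the
DominatorTree and I is the instruction being rehoisted; the real patch tracks
hoist points more carefully):
```
#include "llvm/IR/Dominators.h"
using namespace llvm;

// Rehoist I into the immediate dominator of its current block rather
// than all the way back to the original preheader, so it ends up after
// the phi that was hoisted for it.
static void rehoistToIDom(Instruction *I, DominatorTree &DT) {
  BasicBlock *IDom = DT.getNode(I->getParent())->getIDom()->getBlock();
  I->moveBefore(IDom->getTerminator());
}
```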
An option is also added to control whether control flow is hoisted, which is
off in this commit but will be turned on in a subsequent commit.
Differential Revision: https://reviews.llvm.org/D52827
llvm-svn: 347776
This adds support to the RISCVAsmParser for storing Subtarget feature bits on a stack, so that they can be pushed and popped to enable or disable multiple features at once.
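A self-contained sketch of the mechanism (names are local to this sketch; the
real patch stores the parser's subtarget feature bits):
```
#include <cstdint>
#include <vector>

// Feature bits saved and restored as a stack, so ".option push" /
// ".option pop" can toggle several extensions at once and then undo it.
class FeatureBitStack {
  std::vector<uint64_t> Stack;
  uint64_t Current = 0;

public:
  void push() { Stack.push_back(Current); } // ".option push"
  bool pop() {                              // ".option pop"
    if (Stack.empty())
      return false;                         // unmatched pop: parse error
    Current = Stack.back();
    Stack.pop_back();
    return true;
  }
  void enable(uint64_t Bits) { Current |= Bits; }
  void disable(uint64_t Bits) { Current &= ~Bits; }
  uint64_t bits() const { return Current; }
};
```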
Differential Revision: https://reviews.llvm.org/D46424
Patch by Lewis Revill.
llvm-svn: 347774
Combine
sat(sat(X + C1) + C2) -> sat(X + (C1+C2))
and
sat(sat(X - C1) - C2) -> sat(X - (C1+C2))
if the signs of C1 and C2 match.
In the unsigned case we can compute C1+C2 with saturating arithmetic,
and InstSimplify will reduce this just to the saturation value. For
the signed case, we cannot perform the simplification if the result
of the addition overflows.
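The unsigned case's constant math, sketched with APInt's saturating helper:
```
#include "llvm/ADT/APInt.h"
using namespace llvm;

// sat(sat(X + C1) + C2) -> sat(X + C) with C = C1 +sat C2. If the
// saturating add of the constants itself saturates, the whole expression
// is just the saturation value, which InstSimplify folds away.
static APInt foldedAddConstant(const APInt &C1, const APInt &C2) {
  return C1.uadd_sat(C2);
}
```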
This change is part of https://reviews.llvm.org/D54534.
llvm-svn: 347773
Canonicalize ssub.sat(X, C) to sadd.sat(X, -C) if C is constant and
not the signed minimum (negating the signed minimum would overflow).
This will help further optimizations to apply.
This change is part of https://reviews.llvm.org/D54534.
llvm-svn: 347772
Always-overflow was already determined for unsigned addition, but
not for subtraction. This patch establishes parity.
This allows us to perform some additional simplifications for
signed saturating subtractions.
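In KnownBits terms, the new subtraction check is essentially the following
sketch (names local to this sketch):
```
#include "llvm/Support/KnownBits.h"
using namespace llvm;

// x - y wraps for every possible pair of values iff the largest possible
// x is still smaller than the smallest possible y.
static bool usubAlwaysOverflows(const KnownBits &X, const KnownBits &Y) {
  return X.getMaxValue().ult(Y.getMinValue());
}
```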
This change is part of https://reviews.llvm.org/D54534.
llvm-svn: 347771
If ValueTracking can determine that the add/sub can never overflow,
replace it with the corresponding nuw/nsw add/sub.
Additionally, for the unsigned case, if ValueTracking determines
that the add/sub always overflows, replace the result with the
saturation value.
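Sketched with IRBuilder, the unsigned add rewrite looks like this (the signed
cases use CreateNSWAdd/CreateNSWSub analogously):
```
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Replace uadd.sat(X, Y) with "add nuw X, Y" once ValueTracking proves
// the addition can never wrap: the saturation can then never trigger.
static Value *rewriteNeverOverflowingUAddSat(IRBuilder<> &B, Value *X,
                                             Value *Y) {
  return B.CreateNUWAdd(X, Y);
}
```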
This change is part of https://reviews.llvm.org/D54534.
llvm-svn: 347770
If a saturating add intrinsic has one constant argument, make sure
it is on the RHS. This will simplify further transformations.
This change is part of https://reviews.llvm.org/D54534.
llvm-svn: 347769
If a user only cares about the overall latency, then the quickest way to
obtain it is to have method Pipeline::run() return the total number of cycles
to the caller.
When the simulation pipeline is run, method Pipeline::run() returns the number
of cycles (or an error).
The advantage is that no hardware event listener is needed to compute that
latency, so the whole process should be faster (and simpler, at least for that
particular use case).
llvm-svn: 347767
On python3, use bytes for reading and applying the patch file, rather
than str. This fixes encoding issues when applying patches with
python3.X (reported by zturner).
Also, simplify and speed up "svn update" via svn's "--parents"
argument, instead of manually computing and supplying the list of
parent directories to update.
llvm-svn: 347766
extract_symbols.py (introduced in D18826) expects all of its library arguments
to be in the same directory - typically <config>/lib. DynamicLibraryLib.lib is
instead found in lib/<config>.
This patch makes DynamicLibraryLib.lib get created in <config>/lib
alongside most of the other libraries.
I previously tried passing absolute paths to extract_symbols.py, but this
generated command lines that were too long for Visual Studio 2015: D54587
Differential Revision: https://reviews.llvm.org/D54701
llvm-svn: 347764
Summary:
This is an NFC change, as we do not import non-ODR vague linkage when
computing the import list for a module.
Reviewers: tejohnson, pcc
Subscribers: inglorion, dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D54928
llvm-svn: 347763
This check is about preventing exceptions from being thrown before main() executes, and assigning a lambda (rather than calling it) to a global object cannot throw any exceptions.
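For example (a hypothetical illustration of the distinction the check now
makes):
```
#include <stdexcept>

// Not flagged: the lambda is only assigned to the global; its body runs
// (and can throw) only after main() begins, if it is ever called.
auto Handler = [] { throw std::runtime_error("boom"); };

// Still flagged: immediately invoking the lambda would throw during
// static initialization, before main().
// int Value = [] { throw std::runtime_error("boom"); return 0; }();
```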
llvm-svn: 347761
Summary:
If the original reduction root instruction was vectorized, it might be
removed from the tree. This means that the insertion point may become
invalidated, and the whole vectorization of the reduction then produces an
incorrect result.
The ReductionRoot instruction must be marked as externally used so that it
cannot be removed. Otherwise, it might cause an inconsistency with the
cost model, and we may end up with an overly optimistic optimization.
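In code, the shape of the fix is roughly the following fragment (identifiers
follow the SLP vectorizer's naming, reconstructed from the description rather
than quoted from the patch):
```
// Record the reduction root as having an extra, external user before
// building the tree, so the vectorizer can neither erase it nor assume
// its value dies inside the tree.
BoUpSLP::ExtraValueToDebugLocsMap ExternallyUsedValues;
ExternallyUsedValues[ReductionRoot];
V.buildTree(ReducedVals, ExternallyUsedValues, IgnoreList);
```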
Reviewers: RKSimon, spatel, hfinkel, mkuper
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D54955
llvm-svn: 347759
This was reverted in r347656 because I thought it had caused a miscompile of
Chromium. It turns out the Chromium code itself was broken.
llvm-svn: 347756
Summary:
Include IndexBenchmark in check-clangd to make sure we won't forget to update
it when making breaking changes; also fix an out-of-date test input.
Reviewers: ilya-biryukov
Subscribers: mgorny, ioeric, MaskRay, jkorous, arphaman, kadircet, cfe-commits
Differential Revision: https://reviews.llvm.org/D54998
llvm-svn: 347753
Before this patch, the following stores in `merge_fail` would fail to be
merged, while they would get merged in `merge_ok`:
```
void use(unsigned long long *);

void merge_fail(unsigned key, unsigned index)
{
  unsigned long long args[8];
  args[0] = key;
  args[1] = index;
  use(args);
}

void merge_ok(unsigned long long *dst, unsigned a, unsigned b)
{
  dst[0] = a;
  dst[1] = b;
}
```
The reason is that `getMemOpBaseImmOfs` would return false for FI base
operands. This patch adds support for them.
Differential Revision: https://reviews.llvm.org/D54847
llvm-svn: 347747
Currently, instructions doing memory accesses through a base operand that is
not a register cannot be analyzed using `TII::getMemOpBaseRegImmOfs`.
This means that functions such as `TII::shouldClusterMemOps` will bail
out on instructions using an FI as a base instead of a register.
The goal of this patch is to refactor all this to return a base
operand instead of a base register.
Then in a separate patch, I will add FI support to the mem op clustering
in the MachineScheduler.
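A sketch of a caller under the new interface (the replacement hook name is an
assumption based on the review; the signature is simplified):
```
#include "llvm/CodeGen/TargetInstrInfo.h"
using namespace llvm;

// With a base *operand* instead of a base *register*, register and
// frame-index bases are handled uniformly, e.g. when checking whether
// two memory operations share a base.
static bool sameBase(const TargetInstrInfo &TII, MachineInstr &A,
                     MachineInstr &B, const TargetRegisterInfo *TRI) {
  MachineOperand *BaseA, *BaseB;
  int64_t OffA, OffB;
  if (!TII.getMemOperandWithOffset(A, BaseA, OffA, TRI) ||
      !TII.getMemOperandWithOffset(B, BaseB, OffB, TRI))
    return false;
  return BaseA->isIdenticalTo(*BaseB);
}
```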
Differential Revision: https://reviews.llvm.org/D54846
llvm-svn: 347746