This is a patch to implement pr30640.
When a 64-bit constant has identical high and low 32-bit words, we can use rldimi to copy the low word into the high word of the same register.
This optimization caused a failure in the bperm.ll test case because of a suboptimal heuristic in SelectAndParts64. That function chooses between the AND and ROTATE methods to extract bit groups from a register and OR them together. Lowering the cost of loading the 64-bit constant mask used by the AND method changes the generated code sequence, but the ROTATE method is actually better in this test case: the final OR can be avoided because rldimi inserts the rotated bits directly into the target register. So this patch also enhances SelectAndParts64 to prefer the ROTATE method when the two methods have the same cost and multiple bit groups need to be ORed together.
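For illustration, a constant of this shape (the instruction sequence in the comment is an assumption based on the description above, not taken from the patch):
  // A 64-bit constant whose high and low 32-bit words are identical. Per the
  // description above, the backend can materialize the low word once and use a
  // single rldimi to copy it into the high word, roughly:
  //   lis 3, 0x1234
  //   ori 3, 3, 0xabcd
  //   rldimi 3, 3, 32, 0
  unsigned long long sameHiLoWords() {
    return 0x1234ABCD1234ABCDULL;
  }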
Differential Revision: https://reviews.llvm.org/D25521
llvm-svn: 284276
layout for PIE binaries, ask the OS how much stack space is already in use to
avoid stack overflow if we are run with more than 512K of combined command line
arguments + environment variables.
llvm-svn: 284271
Eli noted this potential bug in the post-commit thread for:
https://reviews.llvm.org/rL284239
...but I'm not sure how to trigger it, so there's no test case yet.
llvm-svn: 284268
Summary:
We are using this helper for our 24-bit arithmetic combines, so we are now able to eliminate multi-use operations that mask the high bits of 24-bit inputs (e.g. and x, 0xffffff).
Reviewers: arsenm, nhaehnle
Subscribers: tony-tye, arsenm, kzhuravl, wdng, nhaehnle, llvm-commits, yaxunl
Differential Revision: https://reviews.llvm.org/D24672
llvm-svn: 284267
Summary:
The main purpose of this new helper is to enable simplifying operations that
have multiple uses. SimplifyDemandedBits does not handle multiple uses
currently, and this new function makes it possible to optimize:
  and v1, v0, 0xffffff
  mul24 v2, v1, v1 ; multiply ignoring the high 8 bits
To:
  mul24 v2, v0, v0
Previously this would not be optimized, because v1 has multiple uses.
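As a self-contained sketch of why the mask is redundant (mul24 below is a plain C++ emulation used only for illustration, not the target intrinsic):
  #include <cstdint>

  // Emulation of a 24-bit multiply: only the low 24 bits of each operand matter.
  static uint32_t mul24(uint32_t a, uint32_t b) {
    return (a & 0xffffffu) * (b & 0xffffffu);
  }

  // Masking an input first changes nothing, even when the masked value (v1) has
  // other uses, so the multiply can operate on v0 directly.
  uint32_t withMask(uint32_t v0)    { uint32_t v1 = v0 & 0xffffffu; return mul24(v1, v1); }
  uint32_t withoutMask(uint32_t v0) { return mul24(v0, v0); } // same result for every v0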
Reviewers: bogner, arsenm
Subscribers: nhaehnle, wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D24964
llvm-svn: 284266
This commit combines a couple of redundant functions that perform availability
attribute context checking into a single, simpler, and more correct one.
Differential revision: https://reviews.llvm.org/D25283
llvm-svn: 284265
X86. The pass optimizes as a unit the entire wide load + shuffles pattern
produced by interleaved vectorization. This initial patch optimizes one pattern
(64-bit elements interleaved by a factor of 4). Future patches will generalize
to additional patterns.
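A hedged example of the kind of source loop that produces this pattern (whether this exact loop is handled by the initial patch is an assumption):
  #include <cstdint>

  // Four 64-bit fields interleaved in memory. Vectorizing a loop over such data
  // yields one wide load plus shuffles that de-interleave the members, which is
  // the pattern the new pass rewrites into a cheaper sequence.
  struct Elem { int64_t a, b, c, d; };

  int64_t sumA(const Elem *v, int n) {
    int64_t s = 0;
    for (int i = 0; i < n; ++i)
      s += v[i].a; // 64-bit elements, interleave factor 4
    return s;
  }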
Patch by Farhana Aleen
Differential revision: http://reviews.llvm.org/D24681
llvm-svn: 284260
Use PackedRegisterRef to store the register information in the graph nodes.
This commit also removes support for virtual registers. It has never been
tested or used. It will be possible to add it back if there is a need.
llvm-svn: 284255
Summary: We need `__stosb` to be an intrinsic because the SecureZeroMemory function uses it without including intrin.h. Implementing it as a volatile memset is not consistent with the MSDN specification, but it gives us target-independent IR while keeping the most important properties of `__stosb`.
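A sketch of the intended use (the wrapper function here is illustrative, not part of the change):
  #include <intrin.h>   // for __stosb in MSVC mode; with this change Clang treats it as a builtin
  #include <cstddef>

  // SecureZeroMemory-style scrubbing: __stosb writes Count bytes of Data and is
  // not expected to be optimized away like an ordinary memset of a dying buffer.
  void scrubSecret(void *p, std::size_t n) {
    __stosb(static_cast<unsigned char *>(p), 0, n);
  }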
Reviewers: rnk, hans, thakis, majnemer
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D25334
llvm-svn: 284253
Add support for loading multiple coverage readers into a single
CoverageMapping instance. This should make it easier to prepare a
unified coverage report for multiple binaries.
Differential Revision: https://reviews.llvm.org/D25535
llvm-svn: 284251
Summary:
This patch adds support for installing public headers in LLDB.framework, and symlinking the headers into the build directory.
While writing the patch I discovered a bug in CMake that prevents applying POST_BUILD steps to framework targets (https://gitlab.kitware.com/cmake/cmake/issues/16363).
I've implemented the support using POST_BUILD steps wrapped under a CMake version check with a TODO so that we can track the fix.
Reviewers: tfiala, zturner, spyffe
Subscribers: lldb-commits, mgorny
Differential Revision: https://reviews.llvm.org/D25570
llvm-svn: 284250
This is an improvement when compiling with clang/LLVM: LLVM doesn't inline the
call to insert, so the alignment computation is always executed and shows up in
the profile.
With gcc the call to insert is inlined and the alignment computation is moved
and done only if needed.
With this patch we explicitly compute it only when it is needed.
In the two tests with debug info, the speedup was:
scylla
  master 3.008959365
  patch  2.932080942  (1.02621974786x faster)
firefox
  master 6.709823604
  patch  6.592387227  (1.01781393795x faster)
In all others the difference was in the noise.
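A generic sketch of the pattern being described (names and data structures are illustrative, not the actual code that was changed):
  #include <cstdint>
  #include <string>
  #include <unordered_map>

  static uint64_t alignTo(uint64_t v, uint64_t a) { return (v + a - 1) & ~(a - 1); }

  // Compute the aligned offset only when the key is actually new; on a hit the
  // alignment math is skipped entirely instead of being done unconditionally.
  uint64_t getOrAssignOffset(std::unordered_map<std::string, uint64_t> &m,
                             const std::string &key, uint64_t &size, uint64_t align) {
    auto it = m.find(key);
    if (it != m.end())
      return it->second;                  // already present: no alignment needed
    uint64_t off = alignTo(size, align);  // computed only if needed
    size = off + key.size();
    m.emplace(key, off);
    return off;
  }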
llvm-svn: 284249
This change adds transformations such as:
  zext(or(setcc(eq, (cmp x, 0)), setcc(eq, (cmp y, 0))))
To:
  srl(or(ctlz(x), ctlz(y)), log2(bitsize(x)))
This optimisation is beneficial only on the Jaguar architecture, where lzcnt has good reciprocal throughput.
Other architectures, such as Intel's Haswell/Broadwell or AMD's Bulldozer/Piledriver, do not benefit from it.
For this reason the change also adds a "HasFastLZCNT" feature, which is enabled for Jaguar.
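At the source level the pattern looks roughly like this (a sketch; lzcnt of a 32-bit value is 32 only when the value is zero, so shifting the OR of the two counts right by 5 recovers the boolean):
  #include <cstdint>

  // (x == 0) | (y == 0) can become (lzcnt(x) | lzcnt(y)) >> 5 on targets where
  // lzcnt is cheap (e.g. Jaguar), avoiding two separate compare/set sequences.
  int anyZero(uint32_t x, uint32_t y) {
    return (x == 0) | (y == 0);
  }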
Differential Revision: https://reviews.llvm.org/D23446
llvm-svn: 284248
Summary:
* Describe new (3.3) parameter attribute group encoding, leaving old encoding there with a note about legacy
* Bring TYPE_BLOCK docs up to date
* Remove docs about obsolete (pre 3.0) TYPE_SYMTAB_BLOCK, TST_CODE_ENTRY
* Fix a couple of incorrect comments and remove one unused enum definition along the way
This addresses https://llvm.org/bugs/show_bug.cgi?id=28941.
Patch by: Ismail Badawi <ibadawi@cisco.com>
Differential Revision: https://reviews.llvm.org/D25623
llvm-svn: 284246
This test was apparently checking for 2 independent folds, but we have
plenty of tests for those individual folds already. We are lacking
vector tests, however, because we don't have the shift folds for vectors.
llvm-svn: 284243
Prefer add/zext because they are better supported in terms of value-tracking.
Note that the backend should be prepared for this IR canonicalization
(including vector types) after:
https://reviews.llvm.org/rL284015
Differential Revision: https://reviews.llvm.org/D25135
llvm-svn: 284241
Summary:
This adds a diagnostic to the misc-use-after-move check that is output when the
use happens on a later loop iteration than the move, for example:
  A a;
  for (int i = 0; i < 10; ++i) {
    a.foo();
    std::move(a);
  }
This situation can be confusing to users because, in terms of source code
location, the use is above the move. This can make it look as if the warning
is a false positive, particularly if the loop is long but the use and move are
close together.
In cases like these, misc-use-after-move will now output an additional
diagnostic:
a.cpp:393:7: note: the use happens in a later loop iteration than the move
Reviewers: hokein
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D25612
llvm-svn: 284235
This is PR30646.
PT_OPENBSD_RANDOMIZE
The array element specifies the location and size of a part of the memory image of the program that must be filled with random data before any code in the object is executed. The memory region specified by a segment of this type may overlap the region specified by a PT_GNU_RELRO segment, in which case the intersection will be filled with random data before being marked read-only.
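For reference, the value used for this segment type (mirrored in LLVM's ELF.h; shown here as an illustrative C++ snippet):
  enum : unsigned {
    PT_OPENBSD_RANDOMIZE = 0x65a3dbe6 // segment to be filled with random data at load time
  };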
Reference links:
http://man.openbsd.org/OpenBSD-current/man5/elf.5
Differential revision: https://reviews.llvm.org/D25469
llvm-svn: 284234
Summary:
The header guard generated by clang-move doesn't always follow a perfect
style; it just prevents the header from being included multiple times during
compilation.
Also, the llvm-header-guard clang-tidy check can be used to correct the guard
automatically.
Reviewers: ioeric
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D25610
llvm-svn: 284233
This fixes a small omission where, even when __external_threading is provided,
we still attempted to declare a pthread-based threading API. Instead, we should
leave all of that for the __external_threading header to take care of.
The __threading_support header provides a proof-of-concept externally threaded
libc++ variant when _LIBCPP_HAS_THREAD_API_EXTERNAL is defined. But if the
__external_threading header is present, we should exclude all of that POC stuff.
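Roughly the guard structure this implies for <__threading_support> (a sketch assuming the macro and header names from this message; the real layout may differ):
  #if defined(_LIBCPP_HAS_THREAD_API_EXTERNAL)
  # include <__external_threading>  // the external header supplies the entire threading API
  #else
    // pthread-based declarations live only here, and are skipped entirely above
  #endif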
Reviewers: EricWF
Differential revision: https://reviews.llvm.org/D25468
llvm-svn: 284232