Building clang with -fno-pie generates slightly faster code. In my not-very-rigorous testing I saw about a 4% speedup using the clang test-suite sources.
llvm-svn: 253959
Summary:
This is a helper to perform cross-module import for ThinLTO. Right now
it naively imports every function that could possibly be called.
Reviewers: tejohnson
Subscribers: dexonsmith, llvm-commits
Differential Revision: http://reviews.llvm.org/D14914
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 253954
Summary:
This allows querying for a function in the map without creating an
entry, which makes it possible to use a const FunctionInfoIndex.
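For illustration (generic C++, not the actual FunctionInfoIndex code): a
map's operator[] inserts a default-constructed entry when the key is
missing and so cannot be called on a const map, while find() only
performs a lookup:

  #include <map>
  #include <string>

  // Hypothetical lookup helper: find() never creates an entry, so it
  // works on a const map; operator[] would insert one and therefore
  // requires a non-const map.
  int lookupInfo(const std::map<std::string, int> &Index,
                 const std::string &Name) {
    auto It = Index.find(Name);  // lookup only, no insertion
    return It == Index.end() ? 0 : It->second;
  }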
Reviewers: tejohnson
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14912
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 253953
This patch detects the AVG pattern in vectorized code, which is simply
c = (a + b + 1) / 2, where a, b, and c are vectors with the same element
type, either unsigned i8 or unsigned i16. In the IR, i8/i16 values are
promoted to i32 before any arithmetic operations. The following IR shows
such an example:
%1 = zext <N x i8> %a to <N x i32>
%2 = zext <N x i8> %b to <N x i32>
%3 = add nuw nsw <N x i32> %1, <i32 1 x N>
%4 = add nuw nsw <N x i32> %3, %2
%5 = lshr <N x i32> %4, <i32 1 x N>
%6 = trunc <N x i32> %5 to <N x i8>
and with this patch it will be converted to an X86ISD::AVG instruction.
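For reference, a scalar loop of roughly this shape (an illustrative
example, not taken from the tests) is the kind of source that produces
the pattern above once vectorized:

  // Rounding average of two unsigned 8-bit arrays; after vectorization
  // and this patch, the inner computation can become pavgb/vpavgb.
  void avg(unsigned char *c, const unsigned char *a,
           const unsigned char *b, int n) {
    for (int i = 0; i < n; ++i)
      c[i] = (a[i] + b[i] + 1) / 2;  // i8 promoted to i32, then truncated
  }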
The pattern recognition is done when combining instructions just before type
legalization during instruction selection. We do it here because after type
legalization, it is much more difficult to recognize the pattern across
the many instructions that perform type conversions. Therefore, for
target-specific instructions (like X86ISD::AVG), we need to take care of type
legalization by ourselves. However, as X86ISD::AVG behaves similarly to
ISD::ADD, I am wondering if there is a way to legalize operands and result
types of X86ISD::AVG together with ISD::ADD. It seems that the current design
doesn't support this idea.
Tests are added for SSE2, AVX2, and AVX512BW, covering both i8 and i16
element types at various vector sizes.
Differential revision: http://reviews.llvm.org/D14761
llvm-svn: 253952
Caller-saved registers differ between SysV and Win64. Use the set of registers available for tail calls to scavenge from.
Refactor the register info to add a new helper that returns the tail-call GPRs. Add a new test case for Windows, and fix up a number of X64 tests, since RCX is now preferred over RDX on SysV.
Differential Revision: http://reviews.llvm.org/D14878
llvm-svn: 253927
With the '=' suffix now indicating which operands are output operands, it's
no longer as important to distinguish between a call's inputs and its outputs
using operand ordering, so we can go back to printing them in the normal order.
llvm-svn: 253925
This distinguishes input operands from output operands. This is something of
a syntactic experiment to see whether the mild amount of clutter this adds is
outweighed by the extra information it conveys to the reader.
llvm-svn: 253922
Summary: Adds the ability for callers to detect when saturation occurred on the result of saturating addition/multiplication.
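As a rough sketch of the pattern (the actual LLVM signature may differ),
the flag is reported through an optional out-parameter:

  #include <cstdint>
  #include <limits>

  // Sketch: saturating add that tells the caller whether the result
  // was clamped to the maximum value.
  uint64_t saturatingAddSketch(uint64_t X, uint64_t Y,
                               bool *ResultOverflowed = nullptr) {
    bool Ovf = X > std::numeric_limits<uint64_t>::max() - Y;
    if (ResultOverflowed)
      *ResultOverflowed = Ovf;
    return Ovf ? std::numeric_limits<uint64_t>::max() : X + Y;
  }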
Reviewers: davidxl, silvas, rsmith
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14931
llvm-svn: 253921
Summary:
For relocation types that are known not to require stub functions, there
is no need to allocate extra space for stubs.
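The idea, sketched with hypothetical names (the real relocation kinds and
helper live in the RuntimeDyld code):

  #include <cstdint>
  #include <vector>

  // Hypothetical sketch: budget stub space only for relocation kinds
  // that can actually require a stub.
  enum RelocKindSketch { DirectCall, PCRelBranch, AbsoluteData };

  bool relocNeedsStub(RelocKindSketch K) {
    return K != AbsoluteData;  // e.g. plain data references need no stub
  }

  uint64_t stubSpaceNeeded(const std::vector<RelocKindSketch> &Rels,
                           uint64_t StubSize) {
    uint64_t Bytes = 0;
    for (RelocKindSketch K : Rels)
      if (relocNeedsStub(K))
        Bytes += StubSize;
    return Bytes;
  }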
Reviewers: lhames, reames, maksfb
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14676
llvm-svn: 253920
Summary:
Change SectionEntry to keep track of the size of its underlying
allocation, and use that to bounds check advanceStubOffset.
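A minimal sketch of the mechanism (member and accessor names assumed from
the commit text, details simplified):

  #include <cassert>
  #include <cstdint>

  // Sketch: SectionEntry remembers how large its allocation is, so
  // advancing the stub offset can be bounds checked.
  class SectionEntrySketch {
    uint64_t StubOffset = 0;
    uint64_t AllocationSize;
  public:
    explicit SectionEntrySketch(uint64_t AllocSize)
        : AllocationSize(AllocSize) {}
    uint64_t getStubOffset() const { return StubOffset; }
    void advanceStubOffset(uint64_t StubSize) {
      StubOffset += StubSize;
      assert(StubOffset <= AllocationSize &&
             "stub offset advanced past the underlying allocation");
    }
  };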
Reviewers: lhames, andrew.w.kaylor, reames
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14675
llvm-svn: 253919
Summary:
Remove naked access to the data members in `SectionEntry` and route
accesses through accessor functions. This makes it obvious how the
instances of the class are used, and will also facilitate adding bounds
checking to `advanceStubOffset` in a later change.
Reviewers: lhames, loladiro, andrew.w.kaylor
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14674
llvm-svn: 253918
autogenerated.
Also rerun it to update existing test cases that appear to have been
generated by it and weren't otherwise modified (apart from the addition
of the header).
llvm-svn: 253917
The new option is similar to the SampleProfile dump option:
- dump the raw/indexed format as text profile format
- merge profiles and output the result in text profile format
Note that the text format for value profiling data has not been designed
yet. That functionality will be added later.
Differential Revision: http://reviews.llvm.org/D14894
llvm-svn: 253913
The existing coverage tracker counts the number of records that were used
from the input profile. An alternative view of coverage is to check how
many of the available samples were applied.
This way, if the profile contains several records with few samples, it
doesn't really matter much that they were not applied. The more
interesting records to apply are the ones that contribute many samples.
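A hedged sketch of the two views (names hypothetical, not the pass's
actual code): with records carrying 1000, 5, and 3 samples where only the
first is applied, record coverage is 1/3 but sample coverage is over 99%.

  #include <cstdint>
  #include <vector>

  struct RecordSketch { uint64_t Samples; bool Applied; };

  // Fraction of profile records that were used.
  double recordCoverage(const std::vector<RecordSketch> &Recs) {
    uint64_t Used = 0;
    for (const auto &R : Recs) Used += R.Applied;
    return Recs.empty() ? 1.0 : double(Used) / Recs.size();
  }

  // Fraction of the total samples carried by the records that were used.
  double sampleCoverage(const std::vector<RecordSketch> &Recs) {
    uint64_t Total = 0, Applied = 0;
    for (const auto &R : Recs) {
      Total += R.Samples;
      if (R.Applied) Applied += R.Samples;
    }
    return Total == 0 ? 1.0 : double(Applied) / Total;
  }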
llvm-svn: 253912
The current approach to using get_local and set_local is to use them
implicitly, as register uses and defs. Introduce new copy instructions
which are themselves no-ops except for the get_local and set_local
that they imply, so that we use get_local and set_local consistently.
llvm-svn: 253905
Add a shared helper routine to read the function index from a file
and create/return the function index object. Use it in llvm-link and
llvm-lto.
llvm-svn: 253903
WebAssembly is currently using labels to end scopes, so for example a
loop scope looks like this:
BB0_0:
  loop BB0_1
  ...
BB0_1:
with BB0_1 being the label of the first block not in the loop. This
requires that the label be printed even when it's only reachable via
fallthrough. To arrange this, insert a no-op LOOP_END instruction in
such cases at the end of the loop.
llvm-svn: 253901
Always starting blocks at the top of their containing loops works, but creates
unnecessarily deep nesting because it makes all blocks in a loop overlap.
Refine the BLOCK placement algorithm to start blocks at nearest common
dominating points instead, which significantly shrinks them and reduces
overlapping.
llvm-svn: 253876
Summary:
This change fixes the SaturatingMultiply<T>() function template to not cause undefined behavior with T=uint16_t.
Thanks to Richard Smith's contribution, it also no longer requires an integer division.
Patch by Richard Smith.
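To illustrate the hazard and one safe pattern (a sketch, not the actual
MathExtras.h implementation): for T = uint16_t the operands of X * Y are
promoted to signed int, so the multiplication itself can overflow with
undefined behavior; doing the arithmetic in a wider unsigned type avoids
that.

  #include <cstdint>
  #include <limits>
  #include <type_traits>

  // Sketch assuming an unsigned T no wider than 32 bits.
  template <typename T>
  T saturatingMultiplySketch(T X, T Y, bool *ResultOverflowed = nullptr) {
    static_assert(std::is_unsigned<T>::value && sizeof(T) <= 4,
                  "sketch assumes unsigned T no wider than 32 bits");
    uint64_t Z = uint64_t(X) * uint64_t(Y);  // well-defined unsigned math
    bool Ovf = Z > std::numeric_limits<T>::max();
    if (ResultOverflowed)
      *ResultOverflowed = Ovf;
    return Ovf ? std::numeric_limits<T>::max() : static_cast<T>(Z);
  }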
Reviewers: silvas, davidxl
Subscribers: rsmith, davidxl, llvm-commits
Differential Revision: http://reviews.llvm.org/D14845
llvm-svn: 253870
Disable custom handling of signed 32-bit and 64-bit integer divide.
Add test cases for both 32-bit and 64-bit integer overflow crashes.
llvm-svn: 253865