This is a recommit of r256028 with minor fixes in unittests:
CodeGen/Mips/eh.ll
CodeGen/Mips/insn-zero-size-bb.ll
Original commit message:
When identifying blocks post-dominated by an unreachable-terminated block
in BranchProbabilityInfo, consider only the edge to the normal destination
block if the terminator is an InvokeInst and let calcInvokeHeuristics() decide
edge weights for the InvokeInst.
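Conceptually, the check looks roughly like the sketch below (illustrative C++
only, not the actual BranchProbabilityInfo code; the helper name and set are
simplified):

  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instructions.h"

  // Sketch only: a block is post-dominated by unreachable if it ends in an
  // UnreachableInst or every successor that matters already is.  For an
  // invoke, only the normal destination matters here; the unwind edge is
  // left to calcInvokeHeuristics().
  static bool postDominatedByUnreachable(
      const llvm::BasicBlock *BB,
      const llvm::SmallPtrSetImpl<const llvm::BasicBlock *> &Set) {
    const llvm::TerminatorInst *TI = BB->getTerminator();
    if (llvm::isa<llvm::UnreachableInst>(TI))
      return true;
    if (const auto *II = llvm::dyn_cast<llvm::InvokeInst>(TI))
      return Set.count(II->getNormalDest());
    if (TI->getNumSuccessors() == 0)
      return false;
    for (unsigned I = 0, E = TI->getNumSuccessors(); I != E; ++I)
      if (!Set.count(TI->getSuccessor(I)))
        return false;
    return true;
  }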
llvm-svn: 256202
If a loop has a sufficiently large number of compute instructions in its loop
body, it is unlikely that our rewrite of the loop iterators introduces large
performance changes. As Polly can also apply beneficial optimizations (such
as parallelization) to such loop nests, we mark them as profitable.
This option is currently "disabled" by default, but can be used to run
experiments. If enabled by setting it e.g. to 40 instructions, we currently
see some compile-time increases on LNT without any significant run-time
changes.
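In spirit the heuristic is just an instruction count compared against the
configurable cutoff; a minimal C++ sketch (names are illustrative, not Polly's
actual option or code):

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"

  // Sketch only: count the non-terminator instructions in a loop body and
  // consider the loop profitable once the count reaches the threshold
  // (e.g. the 40 instructions mentioned above).
  static bool hasEnoughCompute(const llvm::Loop &L, unsigned MinInstructions) {
    unsigned NumInstructions = 0;
    for (const llvm::BasicBlock *BB : L.getBlocks())
      for (const llvm::Instruction &I : *BB)
        if (!I.isTerminator())
          ++NumInstructions;
    return NumInstructions >= MinInstructions;
  }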
llvm-svn: 256199
This patch transforms truncation between vectors of integers into
X86ISD::PACKUS/PACKSS operations during DAG combine. We don't do it in
the lowering phase because, after type legalization, the original truncation
will be turned into a BUILD_VECTOR whose elements are each extracted
from a vector and then truncated, and from that form it is difficult to
perform this optimization. This greatly improves the performance of
truncations on some specific types.
Cost table is updated accordingly.
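For reference, the PACKUS-based shape of such a truncation can be written by
hand with SSE intrinsics; a hedged sketch for v8i32 -> v8i16 (masking first
makes the saturating pack act as a plain truncation):

  #include <emmintrin.h>
  #include <smmintrin.h>  // _mm_packus_epi32 requires SSE4.1

  // Sketch only: truncate eight 32-bit lanes (split across two registers)
  // down to eight 16-bit lanes using PACKUSDW.
  static __m128i trunc_v8i32_to_v8i16(__m128i Lo, __m128i Hi) {
    const __m128i Mask = _mm_set1_epi32(0xFFFF);
    Lo = _mm_and_si128(Lo, Mask);
    Hi = _mm_and_si128(Hi, Mask);
    return _mm_packus_epi32(Lo, Hi);
  }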
Differential revision: http://reviews.llvm.org/D14588
llvm-svn: 256194
When targeting COFF, a comdat section is required to have a global
object with the same name as the comdat (except for comdats whose
selection kind is associative). This fix makes sure that the comdat is
keyed on the data variable for COFF.
Also improved test coverage for this.
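Illustratively, keying the comdat on the data variable with the LLVM C++ API
looks roughly like this (a sketch with placeholder names, not the patch
itself):

  #include "llvm/IR/Comdat.h"
  #include "llvm/IR/GlobalVariable.h"
  #include "llvm/IR/Module.h"

  // Sketch only: name the comdat after the data variable and attach the
  // variable to it, so COFF sees a global object whose name matches the
  // comdat's.
  static void keyComdatOnData(llvm::Module &M, llvm::GlobalVariable *DataVar) {
    llvm::Comdat *C = M.getOrInsertComdat(DataVar->getName());
    C->setSelectionKind(llvm::Comdat::Any);
    DataVar->setComdat(C);
  }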
llvm-svn: 256193
When using blocks, a byref structure is created to represent the
closure. The "byref.layout" field of this structure is an i8*. However,
some 'inline' layouts are represented as i64's, not i8*'s.
Prior to r246985 we cast the i64 'inline' layout to an i8* before
assigning it into the byref structure. This patch brings the cast back
and adds a regression test.
The original version of this patch was too invasive. This version only adds the
cast to BuildByrefLayout.
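In spirit, the restored cast is a plain int-to-pointer cast applied before the
layout is stored; a minimal IRBuilder sketch (illustrative only, not the exact
BuildByrefLayout change):

  #include "llvm/IR/IRBuilder.h"

  // Sketch only: if the computed 'inline' layout is an i64, cast it to i8*
  // so it can be assigned into the byref structure's i8* layout field.
  static llvm::Value *castInlineLayout(llvm::IRBuilder<> &Builder,
                                       llvm::Value *Layout) {
    llvm::Type *Int8PtrTy = Builder.getInt8Ty()->getPointerTo();
    if (Layout->getType()->isIntegerTy(64))
      return Builder.CreateIntToPtr(Layout, Int8PtrTy, "byref.layout.cast");
    return Layout;
  }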
Differential Revision: http://reviews.llvm.org/D15674
rdar://23713871
llvm-svn: 256190
LiveDebugVariables unconditionally propagates all DBG_VALUEs down the
dominator tree, which happens to work fine if there already is another
DBG_VALUE or the DBG_VALUE happens to describe a single-assignment vreg,
but is otherwise wrong if the DBG_VALUE is coming from only one of the
predecessors.
In r255759 we introduced a proper data flow analysis scheduled after
LiveDebugVariables that correctly propagates DBG_VALUEs across basic block
boundaries. With the new pass in place, the incorrect propagation in
LiveDebugVariables can be retired without losing any of the benefits
where LiveDebugVariables happened to do the right thing.
llvm-svn: 256188
When using blocks, a byref structure is created to represent the
closure. The "byref.layout" field of this structure is an i8*. However,
some 'inline' layouts are represented as i64's, not i8*'s.
Prior to r246985 we cast the i64 'inline' layout to an i8* before
assigning it into the byref structure. This patch brings the cast back
and adds a regression test.
rdar://23713871
llvm-svn: 256185
This patch adds PIE executable support for aarch64-linux. It adds
two more segments:
- 0x05500000000-0x05600000000: 39-bits PIE program segments
- 0x2aa00000000-0x2ab00000000: 42-bits PIE program segments
Fortunately, it is possible to use the same transformation formula for
the new segment ranges, with some adjustments to the shadow-to-memory
formula (it adds a constant offset based on the VMA size).
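The mapping itself keeps the usual ASan shape; a hedged sketch of the
arithmetic (the offset constants for the new segments are not spelled out
here):

  #include <stdint.h>

  // Sketch only: classic ASan shadow computation.  PIE support reuses the
  // (addr >> scale) + offset form and simply adds a constant offset chosen
  // from the VMA size (39- or 42-bit).
  static uintptr_t memToShadow(uintptr_t Addr, uintptr_t ShadowOffset) {
    const unsigned ShadowScale = 3;  // 8 application bytes per shadow byte
    return (Addr >> ShadowScale) + ShadowOffset;
  }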
A simple testcase is also added; however, it is disabled on x86 due to
the fact that it might fail on newer kernels [1].
[1] https://git.kernel.org/linus/d1fd836dcf00d2028c700c7e44d2c23404062c90
llvm-svn: 256184
Summary:
These registers have different encodings on CI and VI, so we add pseudo
FLAT_SCRATCH registers to be used before MC, and subtarget-specific
registers to be used by the MC layer.
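The split can be pictured roughly as below (a self-contained sketch with
illustrative enum names, not the generated register definitions):

  // Sketch only: the pseudo register is used until MC lowering, where it is
  // rewritten to the register with the correct encoding for the subtarget.
  enum Reg { FLAT_SCRATCH_PSEUDO, FLAT_SCRATCH_CI, FLAT_SCRATCH_VI, OTHER };

  static Reg resolveFlatScratch(Reg R, bool IsVI) {
    if (R != FLAT_SCRATCH_PSEUDO)
      return R;
    return IsVI ? FLAT_SCRATCH_VI : FLAT_SCRATCH_CI;
  }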
Reviewers: arsenm
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D15661
llvm-svn: 256178
This patch adds to the target description two additional patterns for matching
extract-extend operations to SMOV. The patterns catch the v16i8-to-i64 and
v8i16-to-i64 cases. The existing patterns miss these cases because the
extracted elements must first be legalized to i32, resulting in any_extend
nodes.
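The kind of source that should now select a single SMOV looks roughly like
this, assuming NEON intrinsics (an illustration, not one of the added tests):

  #include <arm_neon.h>
  #include <stdint.h>

  // Sketch only: extract a narrow lane and sign-extend it to i64.  The
  // v16i8-to-i64 (and v8i16-to-i64) case previously went through an i32
  // any_extend and missed the SMOV patterns.
  int64_t extract_and_extend(int8x16_t V) {
    return (int64_t)vgetq_lane_s8(V, 1);
  }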
This was originally implemented as a DAG combine (r255895), but was reverted
due to failing out-of-tree tests.
llvm-svn: 256176
The patch adds support for handling the R_MIPS_PC16, R_MIPS_PC19_S2,
R_MIPS_PC21_S2, R_MIPS_PC26_S2, R_MIPS_PCHI16, and R_MIPS_PCLO16
relocations.
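For instance, R_MIPS_PC16 is a word-aligned PC-relative relocation; a hedged
sketch of the arithmetic per the psABI (not the patch's exact code):

  #include <stdint.h>

  // Sketch only: apply R_MIPS_PC16.  V = (S + A - P) >> 2 is written into
  // the low 16 bits of the instruction; S is the symbol value, A the addend,
  // and P the address of the relocated field.
  static uint32_t applyMipsPC16(uint32_t Insn, uint64_t S, int64_t A,
                                uint64_t P) {
    uint32_t V = (uint32_t)((S + A - P) >> 2);
    return (Insn & ~0xFFFFu) | (V & 0xFFFFu);
  }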
llvm-svn: 256172
This allows the AsmMatcherEmitter to properly tokenize the AsmStrings for
load and store instructions. This is a step towards asm parsing.
llvm-svn: 256166
This fixes a bug introduced by the ThinLTO metadata linking patch
r255909. The assert is overly strict and, while useful during development
of the patch, doesn't seem interesting to keep.
Fixes PR25907.
llvm-svn: 256161
Disable post-ra scheduler for perturbed tests to appease the bots and to
preserve the history of the tests.
http://reviews.llvm.org/D15652
llvm-svn: 256158