Fix a buildbot failure: test/ExecutionEngine/RuntimeDyld/SystemZ/cfi-relo-pc64.s
requires the SystemZ backend. Mark the test as unsupported if the backend is not
available.
llvm-svn: 269470
Summary: This way we can get rid of one of the fields in the .def file.
Reviewers: llvm-commits
Subscribers: zturner
Differential Revision: http://reviews.llvm.org/D20251
llvm-svn: 269461
When setting the frame pointer, the offset from SP is calculated based on the
stack slot it gets allocated, but this slot is in turn based on the order of
the CSR list, so that list should match the order we actually save the registers
in. Mostly it did, but in the edge case of MachO AAPCS targets it was wrong.
llvm-svn: 269459
Recent changes to the instruction selection code exposed a problem where
a dead node was not removed in time. This node had both input and output
chains, which led to an apparent cycle.
llvm-svn: 269458
- Insert one nop for each high level statement instead of two
- Do not insert nop before prologue
Differential Revision: http://reviews.llvm.org/D20215
llvm-svn: 269452
Most immediates are printed in AArch64InstPrinter using the 'formatImm' macro,
but not all of them.
The implementation contains the following rules:
- floating point immediates are always printed as decimal
- signed integer immediates are printed depending on flag settings
(for negative values the 'formatImm' macro prints the value as e.g. -0x01,
which may be convenient when the immediate is an address or offset)
- logical immediates are always printed as hex
- the 64-bit immediate for AdvSIMD, encoded in "a:b:c:d:e:f:g:h", is always printed as hex
- the 64-bit immediate in exception generation instructions like:
brk, dcps1, dcps2, dcps3, hlt, hvc, smc, svc is always printed as hex
- the remaining immediates are printed depending on the availability
of -print-imm-hex
Signed-off-by: Maciej Gabka <maciej.gabka@arm.com>
Signed-off-by: Paul Osmialowski <pawel.osmialowski@arm.com>
Differential Revision: http://reviews.llvm.org/D16929
llvm-svn: 269446
This patch adds basic support for MachO::load_command. Load command types and sizes are encoded in the YAML and expanded back into MachO.
The YAML doesn't yet support load command structs; that is coming next. In the meantime, as a temporary measure, when writing MachO files the load commands are padded with zeros so that the generated binary is valid.
llvm-svn: 269442
Summary: When the MCJIT generates ELF code, some DWARF data requires 64-bit PC-relative relocation (R_390_PC64). This patch adds support for R_390_PC64 relocation to RuntimeDyld::resolveSystemZRelocation, to avoid an assertion failure.
Reviewers: uweigand
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20033
llvm-svn: 269436
Summary: This change fixes the bug in isProfitableToUseMemset() where MaxIntSize should be in bytes, not bits.
Reviewers: arsenm, joker.eph, mcrosier
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D20176
llvm-svn: 269433
This reverts commit r269428, as it breaks the LLDB build. We need to
understand how to change LLDB in the same way as LLC before landing this
again.
llvm-svn: 269432
Without a diagnostic handler installed, llc's behaviour is to exit on the first
error that it encounters. This is very different from the behaviour of clang
and other front ends, which try to gather as many errors as possible before
exiting.
This commit adds a diagnostic handler to llc, allowing it to find and report
more than one error. The old behaviour is preserved under a flag (-exit-on-error).
Some of the tests fail with the new diagnostic handler, so they have to use the
new flag in order to run under the previous behaviour. Some of these are known
bugs, others need further investigation. Ideally, we should fix the tests and
remove the flag at some point in the future.
Patch by Diana Picus.
llvm-svn: 269428
*We don't currently handle the edge case constants (min/max values), so it's not a complete
canonicalization.
To fully solve the motivating bugs, we need to enhance this to recognize a zero vector
too because that's a ConstantAggregateZero which is a ConstantData, not a ConstantVector
or a ConstantDataVector.
Differential Revision: http://reviews.llvm.org/D17859
llvm-svn: 269426
It's not entirely clear why R_MICROMIPS_(GOT|HI16|LO16) are evaluated
incorrectly in a small number of the LNT tests at this point. However, it's not
related to the STO_MIPS_MICROMIPS issue.
At this point all the microMIPS-related changes of r268900 have been reverted.
llvm-svn: 269410
We only really need this to be true for SIFixSGPRCopies.
I'm not sure there's any way this could happen before that point.
Fixes a case where MachineCSE could introduce a cross block
scc use.
llvm-svn: 269391
Summary:
...loop after the last iteration.
This is really hard to do correctly. The core problem is that we need to
model liveness through the induction PHIs from iteration to iteration in
order to get the correct results, and we need to correctly de-duplicate
the common subgraphs of instructions feeding some subset of the
induction PHIs. All of this can be driven either from a side effect at
some iteration or from the loop values used after the loop finishes.
This patch implements this by storing the forward-propagating analysis
of each instruction in a cache to recall whether it was free and whether
it has become live and thus counted toward the total unroll cost. Then,
at each sink for a value in the loop, we recursively walk back through
every value that feeds the sink, including looping back through the
iterations as needed, until we have marked the entire input graph as
live. Because we cache this, we never visit instructions more than twice
-- once when we analyze them and put them into the cache, and once when
we count their cost towards the unrolled loop. Also, because the cache
is only two bits and because we are dealing with relatively small
iteration counts, we can store all of this very densely in memory to
keep this from becoming an excessively slow analysis.
The code here is still pretty gross. I would appreciate suggestions
about better ways to factor or split this up; I've stared too long at
the algorithmic side to really have a good sense of what the design
should probably look like.
Also, it might seem like we should do all of this bottom-up, but I think
that is a red herring. Specifically, the simplification power is *much*
greater working top-down. We can forward propagate very effectively,
even across strange and interesting recurrences around the backedge.
Because we use data to propagate, this doesn't cause a state space
explosion. Doing this level of constant folding, etc, would be very
expensive to do bottom-up because it wouldn't be until the last moment
that you could collapse everything. The current solution is essentially
a top-down simplification with a bottom-up cost accounting which seems
to get the best of both worlds. It makes the simplification incremental
and powerful while leaving everything dead until we *know* it is needed.
Finally, a core property of this approach is its *monotonicity*. At all
times, the current UnrolledCost is a conservatively low estimate. This
ensures that we will never early-exit from the analysis due to exceeding
a threshold when, had we continued, the cost would have gone back
below the threshold. These kinds of bugs can cause random changes to
behavior that are incredibly hard to track down.
We could use a similar (but much simpler) technique within the inliner
as well to avoid considering speculated code in the inline cost.
Reviewers: chandlerc
Subscribers: sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D11758
llvm-svn: 269388
Summary:
Currently we consider such instructions as simplified, which is incorrect,
because if their user isn't simplified, we can't actually simplify them either.
This biases our estimates of profitability: for instance the analyzer expects
much more gains from unrolling memcpy loops than there actually are.
Reviewers: hfinkel, chandlerc
Subscribers: mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17365
llvm-svn: 269387
In verbose mode, we emit a warning if the DWOId of a skeleton CU
mismatches the DWOId of the referenced module. This patch updates the
cached DWOId after a module has been loaded to the DWOId of the module
on disk (instead of storing the DWOId we expected to load). This
allows us to correctly emit the mismatch warning for all subsequent
object files that want to import the same module. This patch also
ensures both warnings are only emitted in verbose mode.
rdar://problem/26214027
llvm-svn: 269383
This change implements the transformation in processInstruction() from
LDR rt, =expression to MOV rt, expression when the expression can be evaluated
and fits into the immediate field of a MOV or an MVN.
Across the ARM and Thumb instruction sets there are several cases to consider,
each with a different range of representable constants.
In ARM we have:
* Modified immediate (All ARM architectures)
* MOVW (v6t2 and above)
In Thumb we have:
* Modified immediate (v6t2, v7m and v8m.mainline)
* MOVW (v6t2, v7m, v8m.mainline and v8m.baseline)
* Narrow Thumb MOV that can be used in an IT block (non flag-setting)
If the immediate fits any of the available alternatives then we make the transformation.
Fixes PR25722.
Patch by Peter Smith.
llvm-svn: 269354
Alter instances in the test-suite that use immediates that can be represented
in the immediate field of a MOV. The reason for doing this is so that when the
LDR rt,=imm to MOV rt, imm transformation is applied, the existing tests do not
need to be modified.
Required by the patch that fixes PR25722.
Patch by Peter Smith.
llvm-svn: 269353
I've added the reserved field as an "optional" in YAML, and added asserts in the yaml2macho code to enforce that the field is present in mach_header_64 but not in mach_header.
llvm-svn: 269320
Summary:
This expands on r269179 to fix an additional case that was not covered by our
tests. The assembler temporary is not needed when the .cprestore offset fits
inside a simm16 and it is not an error to use it inside a '.set noat' in this
case.
Reviewers: emaste, seanbruno, sdardis
Subscribers: dsanders, sdardis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20199
llvm-svn: 269295
As explained in r269196, microMIPS has a special case that is not correctly
implemented in LLVM. If we have a symbol 'foo' which is equivalent to
'.text+0x10', the value of an R_MICROMIPS_LO16 relocation using 'foo' is
'foo+0x11' and not 'foo+0x10'. The in-place addend should therefore be 0x11.
This commit reverts a little more of the effect of r268900 by keeping the
symbol when the STO_MIPS_MICROMIPS flag is set for R_MIPS_GPREL32 relocations.
This fixes SingleSource/UnitTests/2003-08-11-VaListArg, and
SingleSource/UnitTests/2003-05-07-VarArgs for microMIPS.
I believe there are additional relocations that have the same issue (e.g.
R_MIPS_64, and R_MIPS_GPREL16) but for now I'm focusing on restoring our
internal buildbots back to the green state we had in r268899.
llvm-svn: 269294
For BITREVERSE, bit shifting/masking every bit in a vector element is a very lengthy procedure.
If the input vector type is a whole multiple of bytes wide then we can split this into a BSWAP shuffle stage (to reverse at the byte level) and then a BITREVERSE stage applied to each byte. Most vector capable targets can efficiently BSWAP using shuffles resulting in a considerable reduction in instructions.
With this patch targets would only need to implement a target specific vXi8 BITREVERSE implementation to efficiently reverse most legal vector types.
Differential Revision: http://reviews.llvm.org/D19978
llvm-svn: 269290
Summary:
This eliminates the default case for N64 that was left out of r269047.
The change to R_MIPS_SUB is needed in this patch to make this testable since
%lo(%neg(%gp_rel(foo))) and %hi(%neg(%gp_rel(foo))) remain the only ways to get
a compound relocation from the assembler.
Reviewers: sdardis, rafael
Subscribers: dsanders, llvm-commits, sdardis
Differential Revision: http://reviews.llvm.org/D20097
llvm-svn: 269280
While promoting nodes in PPCTargetLowering::DAGCombineExtBoolTrunc, it is
possible for one of the nodes to be replaced by another. To make sure we do not
visit the deleted nodes, and to make sure we visit the replacement nodes, use a
list of HandleSDNodes to track the to-be-promoted nodes during the promotion
process.
The same fix has been applied to the analogous code in
PPCTargetLowering::DAGCombineTruncBoolExt.
Fixes PR26985.
llvm-svn: 269272
Shifts beyond the bitwidth are undef but SCCP resolved them to zero.
Instead, DTRT and resolve them to undef.
This reimplements the transform which caused PR27712.
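A minimal IR sketch of the affected case (the function and value names are illustrative):
define i32 @over_shift(i32 %x) {
  %r = lshr i32 %x, 34   ; shift amount exceeds the 32-bit width, so the result is undefined
  ret i32 %r
}
With this change, SCCP resolves %r to undef rather than folding it to zero.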
llvm-svn: 269269
The promote alloca pass would attempt to promote an alloca with
a select, icmp, or phi user, even though the other operand was
from a non-promotable source, producing a select on two different
pointer types.
Only do this if we know that both operands derive from the same
alloca. In the future we should be able to relax this to an alloca
which will also be promoted.
llvm-svn: 269265
This new verifier rule lets us unambiguously pick a calling convention
when creating a new declaration for
`@llvm.experimental.deoptimize.<ty>`. It is also congruent with our
lowering strategy -- since all calls to `@llvm.experimental.deoptimize`
are lowered to calls to `__llvm_deoptimize`, it is reasonable to enforce
a unique calling convention.
Some of the tests that were breaking this verifier rule have had to be
split up into different .ll files.
The inliner was violating this rule as well, and has been fixed to avoid
producing invalid IR.
llvm-svn: 269261
This is to fix the bug in https://llvm.org/bugs/show_bug.cgi?id=27612.
When a spill is hoisted to a BB with a landingpad successor, and the VNI
of the spill reg lives into the landingpad successor, the spill should be
inserted before the call that may throw an exception. InsertPointAnalysis
is used to compute the safe insert point.
http://reviews.llvm.org/D20027 is a preparatory patch for this patch.
Differential Revision: http://reviews.llvm.org/D19884.
llvm-svn: 269249
For narrow stores (e.g., strb, strh) we know the upper bits of the register are
unused/not useful. In some cases we can use this information to eliminate
unnecessary instructions.
For example, without this patch we generate (from the 2nd test case):
ldr w8, [x0]
and w8, w8, #0xfff0
bfxil w8, w2, #16, #4
strh w8, [x1]
and after the patch the 'and' is removed:
ldr w8, [x0]
bfxil w8, w2, #16, #4
strh w8, [x1]
ret
During the lowering of the bitfield insert instruction the 'and' is eliminated
because we know the upper 16-bits that are masked off are unused and the lower
4-bits that are masked off are overwritten by the insert itself. Therefore, the
'and' is unnecessary.
Differential Revision: http://reviews.llvm.org/D20175
llvm-svn: 269226
SCEVExpander::replaceCongruentIVs assumes the backedge value of an
SCEV-analysable PHI to always be an instruction, when this is not
necessarily true. For now address this by bailing out of the
optimization if the backedge value of the PHI is a non-Instruction.
llvm-svn: 269213
`SCEVExpander::replaceCongruentIVs` bypasses `hoistIVInc` if both the
original and the isomorphic increments are PHI nodes. Doing this can
break SSA if the isomorphic increment is not dominated by the original
increment. Get rid of the bypass, and let `hoistIVInc` do the right
thing.
Fixes PR27232 (compile time crash/hang).
llvm-svn: 269212
... for AddRec's in loops for which SCEV is unable to compute a max
tripcount. This is not a problem for "normal" loops[0] that don't have
guards or assumes, but helps in cases where we have guards or assumes in
the loop that can be used to constrain incoming values over the backedge.
This partially fixes PR27691 (we still don't handle the NUW case).
[0]: for "normal" loops, in the cases where we'd be able to prove
no-wrap via isKnownPredicate, we'd also be able to compute a max
tripcount.
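A hedged IR sketch of the kind of loop this helps with (the @keep_going call and the 1000 bound are made up for illustration). The exit condition is opaque, so SCEV cannot compute a max tripcount, but the assume bounds the IV on every iteration, which can be enough to prove the i16 addrec does not wrap:
declare void @llvm.assume(i1)
declare i1 @keep_going()

define void @f() {
entry:
  br label %loop
loop:
  %iv = phi i16 [ 0, %entry ], [ %iv.next, %loop ]
  %in.range = icmp ult i16 %iv, 1000
  call void @llvm.assume(i1 %in.range)       ; constrains %iv over the backedge
  %iv.next = add i16 %iv, 1
  %cont = call i1 @keep_going()              ; exit condition SCEV cannot analyze
  br i1 %cont, label %loop, label %exit
exit:
  ret void
}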
llvm-svn: 269211
Equivalent GEP indices with different types are treated as different
indices altogether, leading to an incorrect AA result. Fix the issue
by comparing indices based on their values.
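A hedged illustration of the pattern (pointer and value names are made up); both GEPs address the same element, but because one index is written as i64 and the other as i32 they were previously treated as different indices:
define void @f(i32* %base) {
  %p = getelementptr inbounds i32, i32* %base, i64 1
  %q = getelementptr inbounds i32, i32* %base, i32 1
  store i32 0, i32* %p
  store i32 1, i32* %q
  ret void
}
Comparing the indices by value lets AA see that %p and %q refer to the same location.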
Thanks to Mikael Holmén for reporting the issue!
Differential Revision: http://reviews.llvm.org/D19935
llvm-svn: 269197
microMIPS has a special case that is not correctly implemented in LLVM. If we
have a symbol 'foo' which is equivalent to '.text+0x10', the value of an
R_MICROMIPS_LO16 relocation using 'foo' is 'foo+0x11' and not 'foo+0x10'. The
in-place addend should therefore be 0x11.
Work around this by partially reverting the effect of r268900 by keeping the
symbol when the STO_MIPS_MICROMIPS flag is set. This fixes
SingleSource/Regression/C/PR640 for microMIPS.
llvm-svn: 269196
When generating .cfi_offset instructions, make sure that the offset is
calculated with respect to the register used to define the CFA (which is
currently always FP+8).
llvm-svn: 269191
Summary:
r268058 unintentionally made the retrieval of the current assembler temporary
unconditional. This was fine for the existing tests but it broke the cases
where the assembler temporary is not needed (N32/N64 or not PIC) and is
unavailable due to a '.set noat' directive.
This fixes FreeBSD's libc.
Reviewers: emaste, sdardis, seanbruno
Subscribers: dsanders, emaste, sdardis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20093
llvm-svn: 269179
Summary: When emitting a comparison for fp16, in addition to promoting the LHS and RHS to fp32, we need to change the VT as well.
Reviewers: t.p.northover
Subscribers: t.p.northover, aemerson, rengolin, llvm-commits
Differential Revision: http://reviews.llvm.org/D19922
llvm-svn: 269151
Summary: In sample profiles, some branches may have missing profile data due to profile inaccuracy. We want existing branch probabilities to remain valid after propagation.
Reviewers: hfinkel, davidxl, spatel
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19948
llvm-svn: 269137
Unlike xN/wN, the size of vN is genuinely ambiguous in the assembly, so we
should try to infer what was intended from the type. But only down to 64-bits
(vN can never represent sN, hN or bN).
llvm-svn: 269132
Sort of the BB-local equivalent to idiom-recognizer: if we have a basic-block
that really implements a memcpy operation, backends can benefit from seeing
this.
llvm-svn: 269125
Before r268509, Clang would disable the loop unroll pass when optimizing
for size. That commit enabled it to be able to support unroll pragmas
in -Os builds. However, this regressed binary size in one of Chromium's
DLLs by ~100 KB.
This restores the original behaviour of no unrolling at -Os, but doing it
in LLVM instead of Clang makes more sense, and also allows the pragmas to
keep working.
Differential revision: http://reviews.llvm.org/D20115
llvm-svn: 269124
This patch extends loop rerolling to allow the instruction chain
of a loop-control-only IV to contain sext.
Differential Revision: http://reviews.llvm.org/D19820
llvm-svn: 269121
This fixes a bug introduced in r267623, where we got smarter and avoided saving
EAX before using it. However, we failed to check if any of the subregisters of
EAX were alive and thus missed cases where we have to save EAX before using it.
The problem may happen on every X86/i386/... platform.
This fixes llvm.org/PR27624
llvm-svn: 269115
Do simplifications common to all shift instructions based on the amount shifted:
1. If the shift amount is known larger than the bitwidth, the result is undefined.
2. If the valid bits of the shift amount are all known to be 0, it's a shift by zero, so the shift operand is the result.
Note that we could generalize the shift-by-zero transform into a shift-by-constant if all of the valid bits in the shift
amount are known, but that would have to be done in InstCombine rather than here because it would mean we need to create
a new shift instruction.
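A small IR sketch of both rules (function and value names are illustrative):
define i32 @shifts(i32 %x, i32 %n) {
  ; 1. The shift amount is known to be >= the bit width, so the result is undefined.
  %a = shl i32 %x, 37
  ; 2. The valid shift-amount bits (the low five for i32) are all known zero,
  ;    so this is a shift by zero and simplifies to %x.
  %amt = and i32 %n, -32
  %b = lshr i32 %x, %amt
  %r = add i32 %a, %b
  ret i32 %r
}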
Differential Revision: http://reviews.llvm.org/D19874
llvm-svn: 269114
Added support for extended mnemonics for the following branch instructions and
load/store-on-condition opcodes:
BR, LOCR, LOCGR, LOC, LOCG, STOC, STOCG
Phabricator: http://reviews.llvm.org/D19729
Committing on behalf of Zhan Liau
llvm-svn: 269106
for the same subprogram.
This fixes a bug where DW_AT_abstract_origin is being emitted twice for
the same subprogram if a function is both inlined and emitted in the same
translation unit, by restoring the pre-r266446 behavior.
http://reviews.llvm.org/D20072
llvm-svn: 269103
I'm really not sure why we were in the first place, it's the linker's job to
convert between BL/BLX as necessary. Even worse, using BLX left Thumb calls
that could be locally resolved completely unencodable since all offsets to BLX
are multiples of 4.
rdar://26182344
llvm-svn: 269101
The LoopPassManager needs to calculate the loops analysis in order to
iterate over the loops at all. Requiring it is redundant and just adds
noise to the RUN lines here.
llvm-svn: 269097
An oddity of the .ll syntax is that the "@var = " in
@var = global i32 42
is optional. Writing just
global i32 42
is equivalent to
@0 = global i32 42
This means that there is a pretty big First set at the top level. The
current implementation maintains it manually. I was trying to refactor
it, but then started wondering why keep it at all. I personally find the
above syntax confusing. It looks like something is missing.
This patch removes the feature and simplifies the parser.
llvm-svn: 269096
This patch extends loop rerolling to allow the instruction chain
of a loop-control-only IV to contain sext.
Differential Revision: http://reviews.llvm.org/D19820
llvm-svn: 269093
Summary:
While setting kill flags on instructions inside a BUNDLE, we bail out as soon
as we set a kill flag on a register. But we are missing a check for when all the
registers already have the correct kill flag set. We need to bail out in that
case as well.
This patch refactors the old code and simply makes use of the addRegisterKilled
function in MachineInstr.cpp in order to determine whether to set/remove kill
on an instruction.
Reviewers: apazos, t.p.northover, pete, MatzeB
Subscribers: MatzeB, davide, llvm-commits
Differential Revision: http://reviews.llvm.org/D17356
llvm-svn: 269092
An example from Hexagon where things went wrong:
%R0<def> = L2_loadrigp <ga:@fp04> ; load function address
J2_callr %R0<kill>, ..., %R0<imp-def> ; call *R0, return value in R0
ScheduleDAGInstrs::buildSchedGraph would visit all instructions going
backwards, and in each instruction it would visit all operands in their
order on the operand list. In the case of this call, it visited the use
of R0 first, then removed it from the set Uses after it visited the def.
This caused the DAG to be missing the data dependence edge on R0 between
the load and the call.
Differential Revision: http://reviews.llvm.org/D20102
llvm-svn: 269076
Currently, SelectionDAG assumes 8/16-bit cmpxchg returns either a sign
extended result, or a zero extended result. SystemZ takes a third
option by returning junk in the high bits (rotated contents of the other
bytes in the memory word). In that case, don't use Assert*ext, and
zero-extend the result ourselves if a comparison is needed.
Differential Revision: http://reviews.llvm.org/D19800
llvm-svn: 269075
Summary:
Add support for emission of plaintext lists of the imported files for
each distributed backend compilation. Used for distributed build file
staging.
Invoked with new gold-plugin thinlto-emit-imports-files option, which is
only valid with thinlto-index-only (i.e. for distributed builds), or
from llvm-lto with new -thinlto-action=emitimports value.
Depends on D19556.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D19636
llvm-svn: 269067
This restores commit r268627:
Summary:
When launching ThinLTO backends in a distributed build (currently
supported in gold via the thinlto-index-only plugin option), emit
an individual index file for each backend process as described here:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098272.html
...
Differential Revision: http://reviews.llvm.org/D19556
Address msan failures by avoiding std::prev on map.end(); the
theory is that this is causing issues due to some known UB problems
in __tree.
llvm-svn: 269059
Following post-commit comments on r268900 from Rafael Espindola:
The missing relocations are now explicitly listed in the switch statement with
appropriate FIXME comments and the default path is now unreachable. The
temporary exception to this is that compound relocations for N64 still have a
default path that returns true. This is because fixing that case ought to be a
separate patch.
Also make R_MIPS_NONE return false since it has no effect on the section data.
llvm-svn: 269047
Loop rotation clones instructions from the old header into the preheader. If
there were uses of values produced by these instructions that were outside
the loop, we have to insert PHI nodes to merge the two values. If the values
are used by DbgIntrinsics they will be used as a MetadataAsValue of a
ValueAsMetadata of the original values, and iterating all of the uses of the
original value will not update the DbgIntrinsics. The new code checks if the
values are used by DbgIntrinsics and if so, updates them using essentially
the same logic as the original code.
The attached testcase demonstrates the issue. Without the fix, the
DbgIntrinsic outside the loop uses values computed inside the loop, even
though these values do not dominate the DbgIntrinsic.
Author: Thomas Jablin (tjablin)
Reviewers: dblaikie aprantl kbarton hfinkel cycheng
http://reviews.llvm.org/D19564
llvm-svn: 269034
When a va_start or va_copy is immediately followed by a va_end (ignoring
debug information or other start/end in between), then it is safe to
remove the pair. As this code shares some commonalities with the lifetime
markers, this has been factored to helper functions.
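A hedged sketch of the pattern that is now removed (the surrounding function is made up for illustration):
declare void @llvm.va_start(i8*)
declare void @llvm.va_end(i8*)

define void @f(i32 %n, ...) {
  %ap = alloca i8*
  %ap.cast = bitcast i8** %ap to i8*
  call void @llvm.va_start(i8* %ap.cast)
  call void @llvm.va_end(i8* %ap.cast)   ; immediately follows the va_start
  ret void
}
InstCombine can now delete the va_start/va_end pair (debug intrinsics in between are ignored).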
This InstCombine pattern kicks in 3 times when running the LLVM test
suite.
llvm-svn: 269033
Added a test to check that LeonItineraries are being applied by the code checked in two weeks ago in r267121.
Phabricator Review: http://reviews.llvm.org/D19359
llvm-svn: 269032
This reverts commits r268969, r268979 and r268984. They had target-specific tests
in generic directories without the correct specifiers and made it hard for us to
come up with a good solution by rapidly committing untested changes.
This test needs to be in a target specific directory or have the correct REQUIRED
identifier.
llvm-svn: 269027
SystemZ (and probably other targets as well) can fold a memory operand
by changing the opcode to a new instruction that, as a side effect,
also clobbers the CC reg.
In order to do this, liveness of that reg must first be checked. When
LIS is passed, getRegUnit() can be called on it and the right
LiveRange is computed on demand.
Reviewed by Matthias Braun.
http://reviews.llvm.org/D19861
llvm-svn: 269026
Allow vectorization when the step is a loop-invariant variable.
This is the loop example that is getting vectorized after the patch:
int int_inc;
int bar(int init, int *restrict A, int N) {
  int x = init;
  for (int i = 0; i < N; i++) {
    A[i] = x;
    x += int_inc;
  }
  return x;
}
"x" is an induction variable with *loop-invariant* step.
But it is not a primary induction variable. A primary induction variable with a non-constant step is not handled yet.
Differential Revision: http://reviews.llvm.org/D19258
llvm-svn: 269023
We now use LiveRangeCalc::extendToUses() instead of a specially designed
algorithm in constructMainRangeFromSubranges():
- The original motivation for constructMainRangeFromSubranges() was
differences between the main liverange and subranges because of hidden
dead definitions. This case however cannot happen anymore with the
DetectDeadLaneMasks pass in place.
- It simplifies the code.
- This fixes a longstanding bug where we did not properly create new SSA
values on merging control flow (the MachineVerifier missed most of
these cases).
- Move constructMainRangeFromSubranges() to LiveIntervalAnalysis and
LiveRangeCalc to better match the implementation/available helper
functions.
llvm-svn: 269016
Move the register stackification and coloring passes to run very late, after
PEI, tail duplication, and most other passes. This means that all code emitted
and expanded by those passes is now exposed to these passes. This also
eliminates the need for prologue/epilogue code to be manually stackified,
which significantly simplifies the code.
This does require running LiveIntervals a second time. It's useful to think
of these late passes not as late optimization passes, but as a domain-specific
compression algorithm based on knowledge of liveness information. It's used to
compress the code after all conventional optimizations are complete, which is
why it uses LiveIntervals at a phase when actual optimization passes don't
typically need it.
Differential Revision: http://reviews.llvm.org/D20075
llvm-svn: 269012
Summary:
The idea is very close to what we do for assume intrinsics: we mark the
guard intrinsics as writing to arbitrary memory to maintain control
dependence, but under the covers we teach AA that they do not mod any
particular memory location.
Reviewers: chandlerc, hfinkel, gbiv, reames
Subscribers: george.burgess.iv, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19575
llvm-svn: 269007
We now construct a custom pass pipeline instead of injecting
start-before/stop-after into the default pipeline construction. This
allows specifying any pass known to the pass registry. Previously,
specifying indirectly added analysis passes or passes not added to the
pipeline at all would not be added, and we would silently do nothing.
This also restricts the -run-pass option to cases with .mir input.
llvm-svn: 269003
When loading or storing AVX512 registers we were not using the AVX512
variant of the load and store for VR128 and VR256 like registers.
Thus, we ended up with the wrong encoding and actually were dropping the
high bits of the instruction. The result was that we loaded or stored the
wrong register. The effect is visible only when we emit the object file
directly and disassemble it. Then, the output of the disassembler does
not match the assembly input.
This is related to llvm.org/PR27481.
llvm-svn: 269001
We can use calls to @llvm.experimental.guard to prove predicates,
relying on the fact that in all locations dominated by a call to
@llvm.experimental.guard the predicate it is guarding is known to be
true.
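A minimal sketch (function and value names are illustrative):
declare void @llvm.experimental.guard(i1, ...)

define i1 @f(i32 %a, i32 %b) {
  %c = icmp ult i32 %a, %b
  call void (i1, ...) @llvm.experimental.guard(i1 %c) [ "deopt"() ]
  ; Everything below is dominated by the guard, so %c is known to be true here
  ; and this re-check can be folded to true.
  %c2 = icmp ult i32 %a, %b
  ret i1 %c2
}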
llvm-svn: 268997
When we encounter unsafe memory dependencies, loop distribution could
help.
Even though the diagnostic is in LAA, it is currently only emitted in
the vectorizer.
llvm-svn: 268987
We used to list registers that were not in the AVX space. In other
words, we were pushing registers that the ISA cannot encode
(YMM16-YMM31).
This is part of llvm.org/PR27481.
llvm-svn: 268983
As discussed on PR24888, until SSE42 we don't have access to PCMPGTQ for v2i64 comparisons, but the cost models don't reflect this, resulting in over-optimistic vectorization.
This patch adds SSE2 'base level' costs that match what a typical target is capable of and only reduces the v2i64 costs at SSE42.
Technically SSE41 provides a PCMPEQQ v2i64 equality test, but as getCmpSelInstrCost doesn't give us a way to discriminate between comparison test types we can't easily make use of this, otherwise we could split the cost of integer equality and greater-than tests to give better costings of each.
Differential Revision: http://reviews.llvm.org/D20057
llvm-svn: 268972
IR instrumentation generates a COMDAT symbol __llvm_profile_raw_version to
overwrite the same symbol in profile run-time to distinguish IR profiles from
Clang generated profiles. In MACHO, LinkOnceODR linkage is used due to the
lack of COMDAT support.
But LinkOnceODR linkage might have .weak_def_can_be_hidden assembly directive,
while the weak variable in the run-time has a .weak_definition directive. The linker
will not merge these two symbols even though they have the same name. The end result
is that IR profiles are not properly flagged in MACHO.
This patch changes the linkage for __llvm_profile_raw_version in each module to
LinkOnceAny so that it has same .weak_definition directive as in the run-time.
Differential Revision: http://reviews.llvm.org/D20078
llvm-svn: 268969
This fixes http://llvm.org/PR27646 on AArch64.
There are three issues here:
- The GR save area is 7 words in size, instead of 8. This is not enough
if none of the fixed arguments is passed in GRs (they're all floats or
aggregates).
- The first argument is ignored (which counteracts the above if it's passed
in GR).
- Like x86_64, fixed arguments landing in the overflow area are wrongly
counted towards the overflow offset.
Differential Revision: http://reviews.llvm.org/D20023
llvm-svn: 268967
This patch introduces a new option -lto-strip-invalid-debug-info, which
drops malformed debug info from the input.
The problem I'm trying to solve with this sequence of patches is that
historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about
breaking bitcode compatibility with existing producers. For example, we
don't necessarily want IR produced by an older version of clang to be
rejected by an LTO link just because of malformed debug info, and rather
provide an option to strip it. Note that merely outdated (but well-formed)
debug info would continue to be auto-upgraded in this scenario.
rdar://problem/25818489
http://reviews.llvm.org/D19987
This reapplies r268936 with a test case fix for Linux (-exported-symbol foo).
llvm-svn: 268965
This reapplies commit r268796, with a fix for the setting of the inline asm
constraints. I.e., "mark" LOW32_ADDR_ACCESS_RBP as a GR variant, so that the
regular processing of the GR operands (setting of the subregisters) happens.
Original commit log:
[X86] Add a new LOW32_ADDR_ACCESS_RBP register class.
ABIs like NaCl use 32-bit addresses but have a 64-bit frame.
The new register class reflects those constraints when choosing a
register class for an address access.
llvm-svn: 268955
This patch corresponds to review:
http://reviews.llvm.org/D19683
Simply adds the bits for being able to specify -mcpu=pwr9 to the back end.
llvm-svn: 268950
This patch introduces a new option -lto-strip-invalid-debug-info, which
drops malformed debug info from the input.
The problem I'm trying to solve with this sequence of patches is that
historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about
breaking bitcode compatibility with existing producers. For example, we
don't necessarily want IR produced by an older version of clang to be
rejected by an LTO link just because of malformed debug info, and rather
provide an option to strip it. Note that merely outdated (but well-formed)
debug info would continue to be auto-upgraded in this scenario.
rdar://problem/25818489
http://reviews.llvm.org/D19987
llvm-svn: 268936