While here, move the simplifyLoop() function to the new header, as
suggested by Chandler in the review.
Differential Revision: http://reviews.llvm.org/D21404
llvm-svn: 274959
This removes a few fields from the graph builder by making us compute
things (that we'd always compute anyway) more eagerly.
Patch by Jia Chen.
Differential Revision: http://reviews.llvm.org/D22009
llvm-svn: 274957
Drive-by improvement: We would 1) add CSRs, 2) remove callee-saved CSRs,
and 3) add all CSRs again for the return block. Just adding CSRs once
obviously gives the same result.
llvm-svn: 274955
An identity COPY like this:
%AL = COPY %AL, %EAX<imp-def>
has no semantic effect, but encodes liveness information: Further users
of %EAX only depend on this instruction even though it does not define
the full register.
Replace the COPY with a KILL instruction in those cases to maintain this
liveness information. (This reverts a small part of r238588 but this
time adds a comment explaining why a KILL instruction is useful).
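A minimal sketch of how such a rewrite can be expressed on a MachineInstr
(an illustration of the idea, not necessarily the exact code in this change;
TII stands for the target's TargetInstrInfo):

    // Turn an identity COPY into a KILL so the implicit-def operands, and
    // therefore the liveness information they encode, are preserved.
    // (Include the MachineInstr/TargetInstrInfo headers for your LLVM version.)
    void convertIdentityCopyToKill(llvm::MachineInstr &MI,
                                   const llvm::TargetInstrInfo &TII) {
      // %AL = COPY %AL, %EAX<imp-def>  becomes  %AL = KILL %AL, %EAX<imp-def>
      MI.setDesc(TII.get(llvm::TargetOpcode::KILL));
    }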
llvm-svn: 274952
Summary:
This way the metadata will only be generated when asserts are enabled,
or when -enable-import-metadata is specified.
Fixed a missing colon on REQUIRES.
Reviewers: tejohnson, eraman, mehdi_amini
Subscribers: mehdi_amini, llvm-commits
Differential Revision: http://reviews.llvm.org/D22167
llvm-svn: 274947
Our assertions in WinCOFFStreamer had side effects that resulted in
symbols unexpectedly being marked as used.
This fixes PR28462.
llvm-svn: 274941
Summary:
This way the metadata will only be generated when asserts are enabled,
or when -enable-import-metadata is specified.
Reviewers: tejohnson, eraman, mehdi_amini
Subscribers: mehdi_amini, llvm-commits
Differential Revision: http://reviews.llvm.org/D22167
llvm-svn: 274938
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the MSP430 backend by preferring MachineInstr& over
MachineInstr* when a pointer isn't nullable.
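A small hedged sketch of the pattern being applied in these backend cleanups
(identifiers are illustrative, not the exact MSP430 code):

    // Check for end() explicitly, then bind a reference, instead of letting
    // the iterator convert implicitly to a possibly-null MachineInstr*.
    // (Include the MachineBasicBlock header for your LLVM version.)
    void visitFirstTerminator(llvm::MachineBasicBlock &MBB) {
      llvm::MachineBasicBlock::iterator I = MBB.getFirstTerminator();
      if (I == MBB.end())
        return;
      llvm::MachineInstr &MI = *I; // explicit dereference, never null
      // ... work with MI ...
      (void)MI;
    }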
llvm-svn: 274933
This isn't a sure thing (are 2 extra bitcasts less expensive than a logic op?),
but we'll try to err on the conservative side by going with the case that has
fewer IR instructions.
Note: This question came up in http://reviews.llvm.org/D22114 , but this part is
independent of that patch proposal, so I'm making this small change ahead of that
one.
See also:
http://reviews.llvm.org/rL274926
llvm-svn: 274932
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the NVPTX backend, mainly by preferring MachineInstr&
over MachineInstr* when a pointer isn't nullable and using range-based
for loops.
There was one piece of questionable code in
NVPTXInstrInfo::AnalyzeBranch, where a condition checked a pointer
converted from an iterator for nullptr. Since this case is impossible
(moreover, the code above guarantees that the iterator is valid), I
removed the check when I changed the pointer to a reference.
Despite that case, there should be no functionality change here.
llvm-svn: 274931
Because isReallyTriviallyReMaterializableGeneric puts many limits on
rematerializable instructions, this fix can prevent instructions with
tied virtual operands and instructions with virtual register uses from
being kept in DeadRemat, so as to work around the live interval consistency
problem for the dummy instructions kept in DeadRemat.
But we still need to fix the live interval consistency problem. This patch
is just a short-term relief. PR28464 has been filed as a reminder.
Differential Revision: http://reviews.llvm.org/D19486
llvm-svn: 274928
Avoid implicit conversions from MachineInstrBundleIterator to MachineInstr*
in the AArch64 backend, mainly by preferring MachineInstr& over
MachineInstr* when a pointer isn't nullable.
llvm-svn: 274924
Remove remaining implicit conversions from MachineInstrBundleIterator to
MachineInstr* from the ARM backend. In most cases, I made them less attractive
by preferring MachineInstr& or using a range-based for loop.
Once all the backends are fixed I'll make the operator explicit so that this
doesn't bitrot back.
llvm-svn: 274920
Summary: As we will move to use a unified hotness check in the inliner, we no longer need inline hints in the SampleProfile pass.
Reviewers: dnovillo, davidxl
Subscribers: eraman, llvm-commits
Differential Revision: http://reviews.llvm.org/D19287
llvm-svn: 274918
Avoid implicit conversions from MachineInstrBundleIterator to
MachineInstr* in the WebAssembly backend by preferring MachineInstr&
over MachineInstr*.
llvm-svn: 274912
Remove remaining implicit conversions from MachineInstrBundleIterator to
MachineInstr* from the AMDGPU backend. In most cases, I made them less
attractive by preferring MachineInstr& or using a range-based for loop.
Once all the backends are fixed I'll make the operator explicit so that
this doesn't bitrot back.
llvm-svn: 274906
This should be slightly more efficient and could avoid spurious overdefined
markings, as Eli pointed out.
Differential Revision: http://reviews.llvm.org/D22122
llvm-svn: 274905
Change a while loop that was checking for nullptr on an
iterator-to-pointer conversion to an infinite for loop. Now it's clear
that the loop does not terminate via its condition.
The only change in behaviour is if an invalid iterator (holding nullptr)
was passed into AMDGPUCFGStructurizer::reversePredicateSetter. There
are only two callers, and they both dereference the iterator before
sending it in, so rather than adding an early return to avoid the loop
I've just asserted (using a static_cast, to avoid an implicit conversion
to pointer).
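Roughly the shape of the change, as a hedged sketch (names and the loop body
are hypothetical stand-ins, not the actual AMDGPUCFGStructurizer code):

    // Hypothetical stand-in for "this instruction defines the predicate
    // register" in the real pass.
    static bool definesPredicate(const llvm::MachineInstr &MI) {
      (void)MI;
      return true; // placeholder
    }

    void reversePredicateSetter(llvm::MachineBasicBlock::iterator I) {
      // Both callers dereference the iterator first, so null is impossible;
      // assert once up front instead of testing it in the loop condition.
      assert(static_cast<llvm::MachineInstr *>(I) && "expected valid iterator");
      for (;; --I) {            // was: while (I != nullptr)
        if (definesPredicate(*I)) {
          // ... flip the predicate operand, then return ...
          return;
        }
      }
    }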
llvm-svn: 274902
Stop using an implicit conversion from the return of
MachineBasicBlock::getFirstTerminator to MachineInstr*. In two cases,
directly dereference to a MachineInstr& since later code assumes it's
valid. In a third case, change to an iterator since later code checks
against MachineBasicBlock::end.
Although the fix for the third case avoids undefined behaviour, I expect
this doesn't cause a functionality change in practice (since the basic
block already has a terminator).
llvm-svn: 274898
Mostly through preferring MachineInstr&, avoid implicit conversions from
iterator to pointer.
Although this may bitrot (since there are other uses blocking me from
removing the implicit operator), this removes the last of the implicit
conversions from MachineInstrBundleIterator to MachineInstr* in the
LLVMCodeGen build target.
llvm-svn: 274893
Since these are named nvvm_* rather than nvptx_*, we also need to
update getArchTypePrefix. It's a bit unusual for getArchTypePrefix not
to match the backend name, but I think this fits the intent of the
function in this case.
llvm-svn: 274890
The commit message is inaccurate: modifiesRegister
will check for partial defs of exec.
We currently don't ever emit partial defs of exec,
so it doesn't really matter.
llvm-svn: 274886
Summary: Branch off the work to add support for the .word directive,
using addAliasForDirective.
Reviewers: koriakin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D22142
llvm-svn: 274878
createRegAllocPass reads and writes the global variable 'Registry'
via calls to getDefault and setDefault. Run this under call_once to
avoid races.
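A minimal self-contained sketch of the pattern, using std::call_once (the
registry accessors here are hypothetical stand-ins, not the LLVM code):

    #include <mutex>

    // Hypothetical stand-ins for the global 'Registry' accessors above.
    static int RegistryDefault = 0;
    static void setDefault(int V) { RegistryDefault = V; }

    static std::once_flag RegAllocInitFlag;

    void initRegAllocDefaultOnce() {
      // The read/modify of the shared registry state runs exactly once,
      // even if several threads reach this point concurrently.
      std::call_once(RegAllocInitFlag, [] { setDefault(1); });
    }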
llvm-svn: 274875
Fixes for various errata in different versions of the Leon variants of the 32-bit Sparc processor.
The nature of the errata is described in the comments preceding the errata fix passes. Relevant unit tests are implemented for each of these.
Note: Running clang-format has changed a few other lines too, unrelated to the implemented errata fixes. These have been left in, as this keeps the code formatting consistent.
Differential Revision: http://reviews.llvm.org/D21960
llvm-svn: 274856
We can fold truncs whose operand feeds from a load, if the trunc value
is available through a prior load/store.
This change is from: http://reviews.llvm.org/D21246, which folded the
trunc but missed the bitcast or ptrtoint/inttoptr required in the RAUW
call when the load type didn't match the prior load/store type.
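A hedged sketch of the missing piece described above (identifiers are
illustrative; this is not the exact patch):

    // When the previously available value has a different type than the
    // value being replaced, insert a cast that covers bitcast as well as
    // ptrtoint/inttoptr before calling replaceAllUsesWith.
    // (Include llvm/IR/Instructions.h for CastInst.)
    llvm::Value *coerceAvailableValue(llvm::Value *Avail, llvm::Type *DestTy,
                                      llvm::Instruction *InsertBefore) {
      if (Avail->getType() == DestTy)
        return Avail;
      return llvm::CastInst::CreateBitOrPointerCast(Avail, DestTy, "coerce",
                                                    InsertBefore);
    }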
Differential Revision: http://reviews.llvm.org/D21791
llvm-svn: 274853
As a result, the urem instruction will not be expanded to a sequence of umull,
lsrs, muls and sub instructions, but just a call to __aeabi_uidivmod.
Differential Revision: http://reviews.llvm.org/D22131
llvm-svn: 274843
Support for the macro fusion of simple ALU ops with branches for the Vulcan subtarget.
Patch by Meador Inge <meadori@gmail.com>
Differential Revision: http://reviews.llvm.org/D22042
llvm-svn: 274837
Until we have a better way to extract constants through bitcasted build vectors (and to handle undefs of partial lanes, etc.), at least accept build vectors that are all zeroes.
llvm-svn: 274833
I have an LTO snapshot (for which I don't have sources) that can't
be read back by LLVM. It seems the writer emitted broken bitcode
and this assertion aims at catching such cases.
llvm-svn: 274819
Windows on ARM uses a pure thumb-2 environment. This means that it can select a
high register when doing a __builtin_longjmp. We would use a tLDRi which would
truncate the register to a low register. Use a t2LDRi12 to get the full
register file access. Tweak the code to just load into PC, as that is an
interworking branch on all supported cores anyway.
llvm-svn: 274815
Summary:
* Similar to the ARM backend, use the peephole optimizer to generate more conditional ALU operations;
* Add predicated type with default always true to RR instructions in LanaiInstrInfo.td;
* Move LanaiSetflagAluCombiner into optimizeCompare;
* The ASM parser can currently only handle explicitly specified CC, so specify ".t" (true) where needed in the ASM test;
* Remove unused MachineOperand flags;
Reviewers: eliben
Subscribers: aemerson
Differential Revision: http://reviews.llvm.org/D22072
llvm-svn: 274807
xorl + setcc is generally the preferred sequence due to the partial register
stall setcc + movzbl suffers from. As a bonus, it also encodes one byte smaller.
This fixes PR28146.
The original commit tried inserting an 8bit-subreg into a GR32 (not GR32_ABCD)
which was not appreciated by fast regalloc on 32-bit.
llvm-svn: 274802
GCOVProfiler::emitProfileArcs() can create many variables with names
starting with "__llvm_gcov_ctr", so llvm appends a numeric suffix to
most of them. Teach tsan about this.
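Roughly the kind of prefix check this implies, as a hedged sketch (the
helper name is hypothetical, not the actual tsan change):

    #include "llvm/ADT/StringRef.h"

    // "__llvm_gcov_ctr", "__llvm_gcov_ctr.1", "__llvm_gcov_ctr.42", ...
    // should all be treated as gcov counters.
    bool isGCovCounterName(llvm::StringRef Name) {
      return Name.startswith("__llvm_gcov_ctr");
    }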
llvm-svn: 274801
We can remove dead stores in the presence of fence instructions. A fence
does not make an otherwise thread-local store visible to other threads.
Reviewers: reames, dexonsmith, jfb
Differential Revision: http://reviews.llvm.org/D22001
llvm-svn: 274795
The commit reinstates r273279, which was informally approved.
Original Review: http://reviews.llvm.org/D21414
This reverts commit ca632c91aaa7cafc50942f890c49f727a046ace1.
llvm-svn: 274790
We currently do not touch a symbol's linkage in the case where a definition
has a single copy. However, this code is effectively unnecessary: either
the definition is not exported, in which case the internalize phase sets
its linkage to internal, or it is exported, in which case we need to promote
linkage to weak. Those two cases are already handled by existing code.
I believe that the only real functional change here is in the case where we
have a single definition which does not prevail (e.g. because the definition
in a native object file prevails). In that case we now lower linkage to
available_externally following the existing code path for that case.
As a result we can remove the isExported function parameter from the
thinLTOResolveWeakForLinkerInIndex function.
Differential Revision: http://reviews.llvm.org/D21883
llvm-svn: 274784
``afl_driver.cpp`` currently relies on weak symbols which doesn't
work properly under macOS. For now fix the build by providing a
dummy implementation of ``LLVMFuzzerInitialize(...)``. This is just
a temporary measure until we fix ``afl_driver.cpp`` for macOS.
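A minimal sketch of such a dummy definition, assuming the standard
libFuzzer entry-point signature:

    // No-op initializer so afl_driver links where weak symbols don't work;
    // this is a placeholder illustration, not necessarily the exact fix.
    extern "C" int LLVMFuzzerInitialize(int *argc, char ***argv) {
      (void)argc;
      (void)argv;
      return 0;
    }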
llvm-svn: 274778
- Rename the ptx.read.* intrinsics to nvvm.read.ptx.sreg.* - some but
not all of these registers were already accessible via the nvvm
name.
- Rename ptx.bar.sync to nvvm.bar.sync, to match nvvm.bar0.
There's a fair amount of code motion here, but it's all very
mechanical.
llvm-svn: 274769
Summary:
A regression showed up in node.js when handling conditional calls.
Fix the regression by recognizing external symbols as a possible
operand type in CallJG.
Reviewers: koriakin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D22054
llvm-svn: 274761
This is a follow-up for r273544.
The end goal is to get rid of the isSwift / isCortexXY / isWhatever methods.
This commit also removes a command line flag that isn't used in any of the tests:
check-vmlx-hazards. It can be replaced easily with the mattr mechanism, since
this is now a subtarget feature.
There is still some work left regarding FeatureExpandMLx. In the past MLx
expansion was enabled for subtargets with hasVFP2(), until r129775 [1] switched
from that to isCortexA9, without too much justification.
In spite of that, the code performing MLx expansion still contains calls to
isSwift/isLikeA9, although the results of those are pretty clear given that
we're only enabling it for the A9.
We should try to enable it for all targets that have FeatureHasVMLxHazards, as
it seems to be closely related to that behaviour, and if that is possible try to
clean up the MLx expansion pass from all calls to isWhatever. This will require
some performance testing, so it will be done in another patch.
[1] http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20110418/119725.html
Differential Revision: http://reviews.llvm.org/D21798
llvm-svn: 274742
friend definitions.
Based on the experiments Sean Silva and Reid did, this seems the safest
course of action and also will work around a questionable warning
provided by GCC6 on the old form of the code. Thanks to Davide for pointing
out the issue and to others for suggesting ways to fix it.
llvm-svn: 274740
Summary:
Adds the option -esan-aux-field-info to control whether the binary is
generated with auxiliary struct field information.
Extracts code for creating auxiliary information from
createCacheFragInfoGV into createCacheFragAuxGV.
Adds test struct_field_small.ll for -esan-aux-field-info test.
Reviewers: aizatsky
Subscribers: llvm-commits, bruening, eugenis, kcc, zhaoqin, vitalybuka
Differential Revision: http://reviews.llvm.org/D22019
llvm-svn: 274726
This check is not only unnecessary, it can produce the wrong result. If we
are linking a single module and it has an exported linkonce symbol, we need
to promote to weak in order to avoid PR19901-style problems.
Differential Revision: http://reviews.llvm.org/D21917
llvm-svn: 274722
By replacing dyn_cast of ConstantInt with m_Zero/m_One/m_AllOnes, we
allow these transforms for splat vectors.
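A hedged sketch of the idiom (not the exact InstCombine change):

    #include "llvm/IR/PatternMatch.h"

    // m_Zero()/m_One()/m_AllOnes() match scalar constants and splat vectors
    // alike, whereas dyn_cast<ConstantInt> only handles the scalar case.
    bool isZeroOrAllOnes(llvm::Value *V) {
      using namespace llvm::PatternMatch;
      return match(V, m_Zero()) || match(V, m_AllOnes());
    }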
Differential Revision: http://reviews.llvm.org/D21899
llvm-svn: 274696
xorl + setcc is generally the preferred sequence due to the partial register
stall setcc + movzbl suffers from. As a bonus, it also encodes one byte smaller.
This fixes PR28146.
Differential Revision: http://reviews.llvm.org/D21774
llvm-svn: 274692
On CPUs with the zero-cycle zeroing feature enabled, "movi v.2d" should
be used to zero a vector register. This was previously done at
instruction selection time; however, the register coalescer sometimes
widened multiple vregs to the Q width because of that, leading to extra
spills. This patch leaves the decision on how to zero a register to the
AsmPrinter phase, where it doesn't affect register allocation anymore.
This patch also sets isAsCheapAsAMove=1 on FMOVS0, FMOVD0.
This fixes http://llvm.org/PR27454, rdar://25866262
Differential Revision: http://reviews.llvm.org/D21826
llvm-svn: 274686
findScratchNonCalleeSaveRegister() just needs a simple liveness
analysis; use LivePhysRegs for that, as it is simpler and does not depend
on the kill flags.
This commit adds a convenience function available() to LivePhysRegs:
This function returns true if the given register is not reserved and
neither the register nor any of its aliases are alive.
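A hedged usage sketch of the new helper (setting up and populating the
LivePhysRegs set is elided, since that interface varies slightly across
LLVM versions, and the surrounding names are illustrative):

    // Pick a candidate register that is not reserved and has no live
    // aliases, according to the liveness gathered in LiveRegs.
    llvm::MCPhysReg pickScratch(const llvm::LivePhysRegs &LiveRegs,
                                const llvm::MachineRegisterInfo &MRI,
                                llvm::ArrayRef<llvm::MCPhysReg> Candidates) {
      for (llvm::MCPhysReg Reg : Candidates)
        if (LiveRegs.available(MRI, Reg))
          return Reg;
      return 0; // nothing free
    }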
Differential Revision: http://reviews.llvm.org/D21865
llvm-svn: 274685
Follow-up from r274465: we don't need to capture the value in these cases,
so just match the constant that we're looking for. m_One/m_Zero work with
vector splats as well as scalars.
llvm-svn: 274670
Added metadata to be able to gather statistics on how many imported
functions have been removed. The module name might also be helpful
when debugging.
Reviewers: tejohnson, eraman
Subscribers: mehdi_amini, llvm-commits
Differential Revision: http://reviews.llvm.org/D21943
llvm-svn: 274668
Summary:
Fixes an incorrect assert that fails on 128-bit-sized loads or stores.
Augments the wset tests to include this case.
Reviewers: aizatsky
Subscribers: vitalybuka, zhaoqin, kcc, eugenis, llvm-commits
Differential Revision: http://reviews.llvm.org/D22062
llvm-svn: 274666
Now with a corrected test to account for a recently supported properties bit in the debug info of a struct.
Original review: http://reviews.llvm.org/D21939
This reverts commit 970c3fd497a28d25dd69526eb52594a696c37968.
llvm-svn: 274661
The dse_with_dbg_value.ll test committed with r273141 is removed because
we no longer perform any type of backtracking, which is what was causing the
codegen differences with and without debug information.
Differential Revision: http://reviews.llvm.org/D21613
llvm-svn: 274660
This is "cvtdq2ps" which does not appear to be particularly slow on any CPU
according to Agner's tables. Choosing "5" as a cost here as suggested in:
https://llvm.org/bugs/show_bug.cgi?id=21356
...but it seems very conservative given that the instruction is fully pipelined,
and I think these costs are supposed to model throughput.
Note that related costs are also most likely too high, but this fixes PR21356
and partly fixes PR28434.
llvm-svn: 274658
We were still crashing in the "no change" case because LVI was not
getting invalidated.
See the thread "Should analyses be able to hold AssertingVH to IR?
(related to PR28400)" for more discussion.
llvm-svn: 274656
This logic was introduced in r157663 and does not make any sense to me.
The motivating example in rdar://11538365 looks like this:
This is the tail:
BB#16: derived from LLVM BB %if.end68
Live Ins: %R0 %R4 %R5
Predecessors according to CFG: BB#15 BB#5
tBLXi pred:14, pred:%noreg, <ga:@CFRelease>, %R0<kill>, <regmask>, %LR<imp-def,dead>, %SP<imp-use>, %SP<imp-def>
t2B <BB#20>, pred:14, pred:%noreg
Successors according to CFG: BB#20
This is the predBB:
BB#5:
Live Ins: %R5
Predecessors according to CFG: BB#4
%R4<def> = t2MOVi 0, pred:14, pred:%noreg, opt:%noreg
t2B <BB#16>, pred:14, pred:%noreg
Successors according to CFG: BB#16
However, this is invalid machine code to begin with: if %R0 is live-in to
BB#16, then it must be live-in to BB#5 as well if BB#5 does not define
it. We should not need logic to retroactively fix broken machine code.
In fact, the example from r157663 passes cleanly with the code removed,
and I do not see any (newly) failing tests with the machine verifier
enabled.
Differential Revision: http://reviews.llvm.org/D22031
llvm-svn: 274655
Cast cost tables are now sorted, for each cast type, lexicographically on
[source base type, source vector width, dest base type, base vector width].
llvm-svn: 274653
On SystemZ, shift and rotate instructions only use the bottom 6 bits of the shift/rotate amount.
Therefore, if the amount is ANDed with an immediate mask that has all of the bottom 6 bits set, we
can remove the AND operation entirely.
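A self-contained sketch of the equivalence being exploited (an illustration
of the reasoning, not the SystemZ lowering code):

    #include <cassert>
    #include <cstdint>

    // If the mask keeps all of bits [5:0], then (Amt & Mask) and Amt select
    // the same effective shift amount on hardware that only reads 6 bits.
    static bool maskKeepsLowSixBits(uint64_t Mask) {
      return (Mask & 63) == 63;
    }

    int main() {
      const uint64_t Amt = 70, Mask = 0x7F;
      assert(maskKeepsLowSixBits(Mask));
      assert(((Amt & Mask) & 63) == (Amt & 63)); // the AND is redundant
      return 0;
    }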
Differential Revision: http://reviews.llvm.org/D21854
llvm-svn: 274650
We were checking for 2 insertions (which is caught earlier in the pattern matching loop) instead of the case where we have no insertions.
It turns out this code never fires, as we always try to lower to insertps after trying to lower to blendps, which would catch these cases. I'm about to make some changes to support combining to insertps, which could cause this to fire, so I don't want to remove it.
llvm-svn: 274648