The intrinsic legalization for masked truncate uses ISD::TRUNCATE, which can be constant folded by getNode. This prevents getVectorMaskingNode from seeing the ISD::TRUNCATE special case where it should emit X86ISD::SELECT instead of ISD::VSELECT. As a result, a vselect with a v16i1 or v8i1 condition is emitted during vector legalization, but vector legalization doesn't revisit nodes it creates. DAG combine will then promote this condition to match the result type. Op legalization will then try to legalize the node, but the custom lowering hook returns SDValue(), and op legalization has no Expand path for VSELECT because it expects vector legalization to have taken care of it. So the operation sticks around and fails in isel.
This patch adds a custom legalization hook to morph it to a vXi8 vselect instead.
This also simplifies the normal vXi16 vselect handling: vector legalization previously expanded it to AND/ANDN/OR and DAG combine then turned that into VBLENDVB, so we can skip a step by emitting the blend directly.
Fixes PR37499
Differential Revision: https://reviews.llvm.org/D47025
llvm-svn: 332743
This is a revert of the changes from https://reviews.llvm.org/D46265;
the new test introduced (test/CodeGen/X86/PR37310.mir) causes buildbot
failures.
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D47061
llvm-svn: 332742
Avoid the requirement that the number of values must be known at assembler
time.
Fixes PR33586.
Reviewers: rnk, peter.smith, echristo, jyknight
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D46703
llvm-svn: 332741
This patch adds a remark which tells the user when a pass changes the number of
IR instructions in a module.
It can be enabled by using -Rpass-analysis=size-info.
The point of this is to make it easier to collect statistics on how passes
modify programs in terms of code size. This is similar in concept to timing
reports, but using a remark-based interface makes it easy to diff changes over
multiple compilations of the same program.
By adding functionality like this, we can see
* Which passes impact code size the most
* How passes impact code size at different optimization levels
* Which pass might have contributed the most to an overall code size
regression
The patch lives in the legacy pass manager, but since it's simply emitting
remarks, it shouldn't be too difficult to adapt the functionality to the new
pass manager as well. This can also be adapted to handle MachineInstr counts in
code gen passes.
https://reviews.llvm.org/D38768
llvm-svn: 332739
Retag some instructions that were missed when we split off vector load/store/moves - MOVQ/MOVD etc.
Fixes BtVer2/SLM which have different behaviours for GPR stores.
llvm-svn: 332718
Retag some instructions that were missed when we split off vector load/store/moves - MOVSS/MOVSD/MOVHPD/MOVHPS/MOVLPD/MOVLPS etc.
Fixes BtVer2/SLM which have different behaviours for GPR stores.
llvm-svn: 332714
Summary:
Avoid assert/crash during liveness calculation in situations where the
incoming machine function has statically unreachable BBs.
Fixes PR37130.
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D46265
llvm-svn: 332707
Sorry, the commit comment for r332703 is completely broken.
My mind slipped - the right description would be:
In SystemZDAGToDAGISel::Select(), in the handling for SELECT_CCMASK:
Check if UpdateNodeOperands() returns a different SDNode and in that
case call ReplaceNode.
Review: Ulrich Weigand.
llvm-svn: 332706
This patch aims to match the changes introduced in gcc by
https://gcc.gnu.org/ml/gcc-cvs/2018-04/msg00534.html. The
IBT feature definition is removed, with the IBT instructions
being freely available on all X86 targets. The shadow stack
instructions are also being made freely available, and the
use of all these CET instructions is controlled by the module
flags derived from the -fcf-protection clang option. The hasSHSTK
option remains since clang uses it to determine availability of
shadow stack instruction intrinsics, but it is no longer directly used.
Comes with a clang patch (D46881).
Patch by mike.dvoretsky
Differential Revision: https://reviews.llvm.org/D46882
llvm-svn: 332705
Summary:
Fix a case where FoldBranchToCommonDest() would bail out from doing CSE
when encountering a debug intrinsic. Handle that by skipping past the
debug intrinsics.
Also, as a minor refactoring, rename checkCSEInPredecessor() to
tryCSEWithPredecessor() to make it a bit more clear that the function
may remove instructions.
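A rough sketch of the skip (a hypothetical helper relying on LLVM's Instruction and DbgInfoIntrinsic types; the real SimplifyCFG code is shaped differently): when scanning the predecessor for an instruction identical to the branch condition, debug intrinsics are ignored instead of terminating the search.

  // Hedged sketch: walk the predecessor block, ignoring llvm.dbg.* intrinsics
  // so they no longer block CSE of the branch condition.
  Value *findCSECandidate(BasicBlock *Pred, Instruction *Cond) {
    for (Instruction &I : *Pred) {
      if (isa<DbgInfoIntrinsic>(I))
        continue;                    // skip past debug intrinsics
      if (I.isIdenticalTo(Cond))
        return &I;                   // reuse the existing computation
    }
    return nullptr;
  }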
Reviewers: fhahn, craig.topper, dblaikie, xbolva00
Reviewed By: fhahn, xbolva00
Subscribers: vsk, davide, llvm-commits
Differential Revision: https://reviews.llvm.org/D46635
llvm-svn: 332698
For RISCV branch instructions, we need to preserve relocation types when linker
relaxation is enabled, so that the linker can fix up the branch offsets when
they change.
We preserve the relocation types by defining shouldForceRelocation.
The IsResolved value returned by evaluateFixup will always be false when
shouldForceRelocation returns true. That would make RISCV MC branch relaxation
always relax 16-bit branches to the 32-bit form, even if the symbol actually
could be resolved.
To avoid relaxing 16-bit branches to the 32-bit form whenever linker relaxation
is enabled, we add a new parameter, WasForced, to indicate that the symbol
actually couldn't be resolved rather than merely being forced unresolved by
shouldForceRelocation returning true.
RISCVAsmBackend::fixupNeedsRelaxationAdvanced can then relax only branches with
genuinely unresolved symbols, i.e. (!IsResolved && !WasForced).
RISCV MC branch relaxation is needed because RISCV can perform the 32-bit to
16-bit transformation in the MC layer.
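A minimal self-contained sketch of that decision, assuming only the two flags and an offset range matter (the real hook also receives the fixup and target objects; the names below are illustrative):

  #include <cstdint>

  // Hedged sketch: relax a compressed branch only when the target is truly
  // unknown (not merely forced unresolved by shouldForceRelocation), or when
  // the resolved offset no longer fits the short branch encoding.
  bool needsRelaxation(bool IsResolved, bool WasForced, int64_t Value,
                       int64_t MinOffset, int64_t MaxOffset) {
    if (!IsResolved && !WasForced)
      return true;                       // genuinely unresolved symbol
    return Value < MinOffset || Value > MaxOffset;
  }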
Differential Revision: https://reviews.llvm.org/D46350
llvm-svn: 332696
The CanProveNotTakenFirstIteration utility does not handle the case when the
condition of the branch is a constant. Add handling for it.
Reviewers: reames, anna, mkazantsev
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D46996
llvm-svn: 332695
1. Define Myriad-specific ASan constants.
2. Add code to generate an outer loop that checks that the address is
in DRAM range and strips the cache bit from the address (see the
sketch after this list). The former is required because Myriad has
no memory protection, and it is up to the instrumentation to
range-check an address before using it to index into the shadow
memory.
3. Do not add an unreachable instruction after the error reporting
function; on Myriad such a function may return if the run-time has
not been initialized.
4. Add a test.
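A rough sketch of the address handling described in item 2, as a self-contained
function; every constant and name below is an assumed placeholder for
illustration, not the real Myriad parameters:

  #include <cstdint>

  // Hypothetical values, chosen only to make the sketch compile.
  constexpr uintptr_t kDramBase     = 0x80000000u;  // assumed DRAM start
  constexpr uintptr_t kDramEnd      = 0x90000000u;  // assumed DRAM end
  constexpr uintptr_t kCacheBit     = 0x40000000u;  // assumed cache bit
  constexpr uintptr_t kShadowOffset = 0x20000000u;  // assumed shadow base
  constexpr unsigned  kShadowScale  = 3;            // assumed shadow scale

  // Only addresses inside DRAM are checked; the cache bit is stripped before
  // the address is mapped into shadow memory.
  bool shadowAddress(uintptr_t Addr, uintptr_t &Shadow) {
    if (Addr < kDramBase || Addr >= kDramEnd)
      return false;                  // out of range: skip instrumentation
    Addr &= ~kCacheBit;              // strip the cache bit
    Shadow = (Addr >> kShadowScale) + kShadowOffset;
    return true;
  }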
Differential Revision: https://reviews.llvm.org/D46451
llvm-svn: 332692
This reapplies commits: r330271, r330592, r330779.
[DEBUG] Initial adaptation of NVPTX target for debug info emission.
Summary:
Patch adds initial emission of the debug info for NVPTX target.
Currently, only .file and .loc directives are emitted, everything else is
commented out to not break the compilation of Cuda.
llvm-svn: 332689
Counting the number of instructions is both unintuitive and inaccurate.
On AArch64, this only affects the generated remarks and certain rare
pseudo-instructions, but it will have a bigger impact on other targets.
Differential Revision: https://reviews.llvm.org/D46921
llvm-svn: 332685
Summary:
The Closure allocated in the main loop is allocated on the stack. However,
later in the code its address is taken (and used for comparisons). This
obviously doesn't work. In fact, the Closure will get the same stack address
during every loop iteration, rendering the check that intended to identify
Closure conflicts entirely ineffective. Fix this bug by giving every Closure
a unique ID and using that for comparison. Alternatively, we could heap
allocate the closure object.
Fixes PR37396
Fixes JuliaLang/julia#27032
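A self-contained illustration of the pitfall (hypothetical names; not the domain-reassignment code itself): a loop-local object reuses the same stack slot on every iteration, so its address cannot distinguish logically distinct instances, while an explicit ID can.

  #include <cstdio>

  struct Closure {
    unsigned ID;                      // unique per logical closure
    explicit Closure(unsigned ID) : ID(ID) {}
  };

  int main() {
    unsigned NextID = 0;
    for (int It = 0; It < 2; ++It) {
      Closure C(NextID++);
      // &C typically prints the same value on both iterations, but C.ID
      // differs, so comparisons based on the ID stay meaningful.
      std::printf("%p id=%u\n", static_cast<void *>(&C), C.ID);
    }
    return 0;
  }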
Reviewers: craig.topper, guyblank
Reviewed By: craig.topper
Subscribers: vchuravy, llvm-commits
Differential Revision: https://reviews.llvm.org/D46800
llvm-svn: 332682
Summary:
We cannot simply delete IMPLICIT_DEF nodes. They may be used
later (e.g. by a PHI) and deleting them will cause later passes (e.g.
LiveVariables) to crash. However, it seems fine to ignore them for
purposes of the domain reassignment (as we do with PHI).
Fixes PR37430
Fixes JuliaLang/julia#27080
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D46797
llvm-svn: 332680
Previously we emitted 20-byte SHA1 hashes. This is overkill
for identifying debug info records, and has the negative side
effect of making object files bigger and links slower. By
using only the last 8 bytes of a SHA1, we get smaller object
files and ~10% faster links.
This modifies the format of the .debug$H section by adding a new
value for the hash algorithm field, so that the linker will still
work when its object files have an old format.
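A self-contained sketch of the truncation (the surrounding CodeView hashing machinery is omitted): keep only the trailing 8 bytes of the 20-byte SHA1 digest.

  #include <algorithm>
  #include <array>
  #include <cstdint>

  // Hedged sketch: reduce a full 20-byte SHA1 digest to the 8-byte hash
  // stored in .debug$H under the new hash algorithm value.
  std::array<uint8_t, 8> truncatedHash(const std::array<uint8_t, 20> &Digest) {
    std::array<uint8_t, 8> Out;
    std::copy(Digest.end() - 8, Digest.end(), Out.begin());
    return Out;
  }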
Differential Revision: https://reviews.llvm.org/D46855
llvm-svn: 332669
Summary:
- Add wasm personality function
- Re-categorize the existing `isFuncletEHPersonality()` function into
two different functions: `isFuncletEHPersonality()` and
`isScopedEHPersonality()`. This becomes necessary as wasm EH uses scoped
EH instructions (catchswitch, catchpad/ret, and cleanuppad/ret) but not
outlined funclets.
- Changed some callsites of `isFuncletEHPersonality()` to
`isScopedEHPersonality()` if they are related to scoped EH IR-level
stuff.
Reviewers: majnemer, dschuff, rnk
Subscribers: jfb, sbc100, jgravelle-google, eraman, JDevlieghere, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D45559
llvm-svn: 332667
notifyFailed method rather than passing in an error generator.
VSO::notifyFailed is responsible for notifying queries that they will not
succeed due to error. In practice the queries don't care about the details
of the failure, just the fact that a failure occurred for some symbols.
Having VSO::notifyFailed take care of this simplifies the interface.
llvm-svn: 332666
The prefix includes the type kind, which is important to preserve. Two
different type leaves can easily share identical interior record contents.
We ran into this issue in PR37492 where a bitfield type record collided
with a const modifier record. Their contents were bitwise identical, but
their kinds were different.
llvm-svn: 332664
Summary:
There was some unfinished work on offset tracking in CFLGraph, started by the author of the Andersen algorithm implementation. This work is now completed, and support for field sensitivity has been added to the core of the Andersen algorithm.
The performance results seem promising.
The SPEC2006 int_base score increased by 1.1 % (comparing clang 6.0 with clang 6.0 plus this patch). The average compile time increased by roughly 1 % according to my measurements on small and medium C/C++ projects (I have not tested it on large projects with millions of lines of code).
Reviewers: chandlerc, george.burgess.iv, rja
Reviewed By: rja
Subscribers: rja, llvm-commits
Differential Revision: https://reviews.llvm.org/D46282
llvm-svn: 332657
Patch #3 from VPlan Outer Loop Vectorization Patch Series #1
(RFC: http://lists.llvm.org/pipermail/llvm-dev/2017-December/119523.html).
Expected to be NFC for the current inner loop vectorization path. It
introduces the basic algorithm to build the VPlan plain CFG (single-level
CFG, no hierarchical CFG (H-CFG), yet) in the VPlan-native vectorization
path using VPInstructions. It includes:
- VPlanHCFGBuilder: Main class to build the VPlan H-CFG (plain CFG without nested regions, for now).
- VPlanVerifier: Main class with utilities to check the consistency of a H-CFG.
- VPlanBlockUtils: Main class with utilities to manipulate VPBlockBases in VPlan.
Reviewers: rengolin, fhahn, mkuper, mssimpso, a.elovikov, hfinkel, aprantl.
Differential Revision: https://reviews.llvm.org/D44338
llvm-svn: 332654
Summary:
When lowering a global address, lower the base as a TargetGlobal first, then
create an SDNode for the offset separately and chain it to the address calculation.
This optimization creates a DAG in which the base address of a global access is
reused between different accesses. The offset can later be folded into the immediate
part of the memory access instruction.
With this optimization we generate:
lui a0, %hi(s)
addi a0, a0, %lo(s) ; shared base address.
addi a1, zero, 20 ; 2 instructions per access.
sw a1, 44(a0)
addi a1, zero, 10
sw a1, 8(a0)
addi a1, zero, 30
sw a1, 80(a0)
Instead of:
lui a0, %hi(s+44) ; 3 instructions per access.
addi a1, zero, 20
sw a1, %lo(s+44)(a0)
lui a0, %hi(s+8)
addi a1, zero, 10
sw a1, %lo(s+8)(a0)
lui a0, %hi(s+80)
addi a1, zero, 30
sw a1, %lo(s+80)(a0)
Which will save one instruction per access.
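For illustration, a source fragment that would produce the access pattern above, assuming the global s is laid out as an array of 32-bit integers (a hypothetical example, not taken from the patch):

  struct S { int a[32]; };
  S s;                 // global, so accesses go through %hi(s)/%lo(s)

  void f() {
    s.a[11] = 20;      // byte offset 44
    s.a[2]  = 10;      // byte offset 8
    s.a[20] = 30;      // byte offset 80
  }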
Reviewers: asb, apazos
Reviewed By: asb
Subscribers: rbar, johnrusso, simoncook, jordy.potman.lists, niosHD, kito-cheng, shiva0217, zzheng, edward-jones, mgrang, apazos, asb, llvm-commits
Differential Revision: https://reviews.llvm.org/D46989
llvm-svn: 332641
Summary:
This patch implements MC support for the tail pseudo instruction.
A follow-up patch implements the codegen support as well as handling of the indirect tail pseudo instruction.
Reviewers: asb, apazos
Reviewed By: asb
Subscribers: rbar, johnrusso, simoncook, jordy.potman.lists, sabuasal, niosHD, kito-cheng, shiva0217, zzheng, edward-jones, llvm-commits
Differential Revision: https://reviews.llvm.org/D46221
llvm-svn: 332634
Summary:
The current StructurizeCFG pass only works for CFGs with one exit. AMDGPUUnifyDivergentExitNodes combines multiple "return" blocks and/or "unreachable" blocks
into one exit block for the Structurizer to work. However, an infinite loop is another kind of special "exit", and if we don't handle it, the resulting multiple exits will prevent the Structurizer from working.
In this work, for each infinite loop we add a dummy edge to the "return" block, so the AMDGPUUnifyDivergentExitNodes pass also handles infinite loops.
This makes CFGs with infinite loops structurizable.
Reviewer: nhaehnle
Differential Revision: https://reviews.llvm.org/D46340
llvm-svn: 332625
According to Alive this is valid. I'm hoping to use this to make an assumption that the sign bit is zero after this sequence. The only way it wouldn't be is if the input was INT_MIN, but by preserving the flags we can make doing this to INT_MIN UB.
The nuw flag is weird because it creates such a contradiction that the original number would have to be positive, meaning we could remove the select entirely, but we don't get that far.
Differential Revision: https://reviews.llvm.org/D46988
llvm-svn: 332623
The isReMaterializable flag is somewhat confusing: unlike most other instruction
flags, it is currently interpreted as a hint (mightBeRematerializable would be
a better name). While LUI is always rematerialisable, for an instruction like
ADDI it depends on its operands. TargetInstrInfo::isTriviallyReMaterializable
will call TargetInstrInfo::isReallyTriviallyReMaterializable, which in turn
calls TargetInstrInfo::isReallyTriviallyReMaterializableGeneric. We rely on
the logic in the latter to pick out instances of ADDI that really are
rematerializable.
The isReMaterializable flag does make a difference on a variety of test
programs. The recently committed remat.ll test case demonstrates how stack
usage is reduced and an unnecessary lw/sw can be removed. Stack usage in the
Proc0 function in dhrystone reduces from 192 bytes to 112 bytes.
For the sake of completeness, this patch also implements
RISCVRegisterInfo::isConstantPhysReg. Although this is called from a number of
places, it doesn't seem to result in different codegen for any programs I've
thrown at it. However, it is called in the rematerialisation codepath and it
seems sensible to implement something correct here.
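A sketch of what that override plausibly looks like, assuming the hard-wired zero register is the only constant physical register on RISC-V (not copied from the committed code):

  // Hedged sketch: on RISC-V, only the hard-wired zero register always
  // reads the same constant value.
  bool RISCVRegisterInfo::isConstantPhysReg(unsigned PhysReg) const {
    return PhysReg == RISCV::X0;
  }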
Differential Revision: https://reviews.llvm.org/D46182
llvm-svn: 332617
entries to reach the target. Since these calls don't require type checks,
we can short-circuit them to their real targets.
Differential Revision: https://reviews.llvm.org/D46326
llvm-svn: 332610
Data directives such as .word, .half, .hword are currently parsed using
HexagonAsmParser::ParseDirectiveValue which effectively duplicates logic from
AsmParser::parseDirectiveValue. This patch deletes that duplicated logic in
favour of using addAliasForDirective.
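Illustrative use of the alias mechanism (the exact directive/alias pairs below are assumptions, not copied from the patch):

  // Hedged sketch: route Hexagon data directives to the generic
  // AsmParser::parseDirectiveValue via directive aliases.
  getParser().addAliasForDirective(".half", ".2byte");
  getParser().addAliasForDirective(".hword", ".2byte");
  getParser().addAliasForDirective(".word", ".4byte");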
Differential Revision: https://reviews.llvm.org/D46999
llvm-svn: 332607