Previously these instructions were unselectable and instead were generated
through the instruction mapping tables.
Reviewers: atanasyan, smaksimovic, abeserminji
Differential Revision: https://reviews.llvm.org/D46055
llvm-svn: 331165
This patch extends the 'isSVEVectorRegWithShiftExtend' function to
improve diagnostics for SVE's gather load (scalar + vector) addressing
modes. Instead of always suggesting the 'unscaled' addressing mode,
the use of DiagnosticPredicate enables a more specific error message
in the context where the scaling is incorrect. For example:
ld1h z0.d, p0/z, [x0, z0.d, lsl #2]
^
shift amount should be '1'
Instead of suggesting the packed, unscaled addressing mode:
expected 'z[0..31].d, (uxtw|sxtw)'
the assembler now suggests using the proper scaling:
expected 'z[0..31].d, (lsl|uxtw|sxtw) #1'
Reviewers: fhahn, rengolin, samparker, SjoerdMeijer, javed.absar
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D46124
llvm-svn: 331162
The instructions have predicates of Not64BitMode, but there are identical strings in InstAliases that have Mode32Bit and Mode16Bit. The ordering is uncontrolled, so the less specific Not64BitMode alias was ordered first.
This patch hides the Not64BitMode alias from the table so there is no conflict anymore.
llvm-svn: 331158
These aliases are used to default the memory forms of call and jmp to the size of the operating mode. This doesn't work for Intel syntax. We have a different hack in the AsmParser code itself to force a size on unsized memory operands.
llvm-svn: 331153
Most of the add<operandname>Operands() functions are the same
and can be replaced by using a single 'RenderMethod' in
the AArch64InstrFormats.td file. Since many of the scaled
immediates (with different scaling/bits) are the same, most of
these can reuse the same AsmOperandClass.
Reviewers: fhahn, rengolin, samparker, SjoerdMeijer, javed.absar
Reviewed By: samparker
Differential Revision: https://reviews.llvm.org/D46122
llvm-svn: 331146
Summary:
This is a follow-up to D45420 (included here since it is still under review and this change depends on it) and D45072 (committed).
The actual change in this patch is LoopVectorize* and the CMake file. Everything else comes from D45420.
LoopVectorizationLegality is an analysis and thus really belongs in the Analysis tree. It is modular and reusable enough; we can further improve those aspects once uses outside of LV pick up.
Hopefully, this will make it easier for people familiar with vectorization theory, but not necessarily with LV itself, to contribute by lowering the volume of code they have to deal with. We should probably start adding some code in LV to check its own capability (i.e., vectorization is legal but LV is not ready to handle it) and then bail out.
Reviewers: rengolin, fhahn, hfinkel, mkuper, aemerson, mssimpso, dcaballe, sguggill
Reviewed By: rengolin, dcaballe
Subscribers: egarcia, rogfer01, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D45552
llvm-svn: 331139
This allows the instruction selection to follow the operating mode in Intel syntax, and allows a suffix to be used to change the size.
This matches gas behavior from what I could tell.
llvm-svn: 331138
It doesn't really exist. The instruction always writes 16 bits of memory. Putting a REX.W on it won't change anything.
While I was touching the encoding tests to remove it, I added some other missing register form test cases.
llvm-svn: 331135
LLVM_ON_WIN32 is set exactly with MSVC and MinGW (but not Cygwin) in
HandleLLVMOptions.cmake, which is where _WIN32 is defined too. Just use the
default macro instead of a reinvented one.
See thread "Replacing LLVM_ON_WIN32 with just _WIN32" on llvm-dev and cfe-dev.
No intended behavior change.
This moves over all uses of the macro, but doesn't remove the definition
of it in (llvm-)config.h yet.
llvm-svn: 331127
Summary: Add bindings to create import declarations for modules, functions, types, and other entities. This wraps the conveniences available in the existing DIBuilder API, but these seem C++-specific.
Reviewers: whitequark, harlanhaskins, deadalnix
Reviewed By: whitequark
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D46167
llvm-svn: 331123
Many of these aliases exist to give one syntax or the other a slightly different mnemonic, while the other variant gets a duplicate of its normal mnemonic.
This patch restricts a lot of these to only one variant so we don't get the duplication.
This removes a lot of duplicate entries from the matcher table. It also reduces the number of warnings printed when you enable the ambiguous match warning in tablegen.
llvm-svn: 331117
Summary:
Previously, an extending load was represented as (G_*EXT (G_LOAD x)).
This had a few drawbacks:
* G_LOAD had to be legal for all sizes you could extend from, even if
registers didn't naturally hold those sizes.
* All sizes you could extend from had to be allocatable just in case the
extend went missing (e.g. by optimization).
* At minimum, G_*EXT and G_TRUNC had to be legal for these sizes. As we
improve optimization of extends and truncates, this legality requirement
would spread without considerable care w.r.t when certain combines were
permitted.
* The SelectionDAG importer required some ugly and fragile pattern
rewriting to translate patterns into this style.
This patch begins changing the representation to:
* (G_[SZ]EXTLOAD x)
* (G_LOAD x) any-extends when MMO.getSize() * 8 < ResultTy.getSizeInBits()
which resolves these issues by allowing targets to work entirely in their
native register sizes, and by having a more direct translation from
SelectionDAG patterns.
This patch introduces the new generic instructions and new variation on
G_LOAD and adds lowering for them to convert back to the existing
representations.
Depends on D45466
Reviewers: ab, aditya_nandakumar, bogner, rtereshin, volkan, rovka, aemerson, javed.absar
Reviewed By: aemerson
Subscribers: aemerson, kristof.beyls, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D45540
llvm-svn: 331115
Summary:
* rL328953 does not include bindings for LLVMDIBuilderCreateClassType and LLVMDIBuilderCreateBitFieldMemberType despite declaring their prototypes. Provide these bindings now.
* Switch to more precise types with specific numeric limits matching the DIBuilder's C++ API.
Reviewers: harlanhaskins, whitequark, deadalnix
Reviewed By: whitequark
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D46168
llvm-svn: 331114
Summary:
Masked merge has a pattern of: `((x ^ y) & M) ^ y`.
But there is no difference between `((x ^ y) & M) ^ y` and `((x ^ y) & ~M) ^ x`,
so we should canonicalize the pattern to the non-inverted mask.
https://rise4fun.com/Alive/Yol
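For reference, a quick way to convince yourself of the identity behind the
canonicalization (plain C++ over a few byte values, not the InstCombine code):
```
#include <cassert>
#include <cstdint>

int main() {
  const uint8_t Vals[] = {0x00, 0xff, 0xa5, 0x3c, 0x01};
  for (uint8_t X : Vals)
    for (uint8_t Y : Vals)
      for (uint8_t M : Vals) {
        // ((x ^ y) & m) ^ y picks bits of x where m is set, bits of y elsewhere.
        uint8_t A = ((X ^ Y) & M) ^ Y;
        // The same selection written with the inverted mask and x on the outside.
        uint8_t B = ((X ^ Y) & static_cast<uint8_t>(~M)) ^ X;
        assert(A == B && "both forms of masked merge must agree");
      }
  return 0;
}
```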
Reviewers: spatel, craig.topper
Reviewed By: spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D45664
llvm-svn: 331112
Summary:
Masked merge has a pattern of: `((x ^ y) & M) ^ y`.
But there is no difference between `((x ^ y) & M) ^ y` and `((x ^ y) & ~M) ^ x`,
so we should canonicalize the pattern to the non-inverted mask.
Differential Revision: https://reviews.llvm.org/D45663
llvm-svn: 331111
I intend to add further instruction tests to the resources-x86_64.s test file as required, but this initial commit is to help remove a load of unnecessary InstRW overrides in a future patch
llvm-svn: 331108
These instructions don't use their memory operands as normal memory operands. They're just used as addresses. They don't have a size because they aren't directly representing a load or store.
llvm-svn: 331104
Favor the 0x1a encoding for register/register move to match gas.
The instructions used RM and MR in their names along with rr/rm/mr at the end. To make them more consistent with other instructions, remove the RM/MR and use rr/rm/mr/rr_REV.
Hide the _REV encoding from the assembler but leave it for the disassembler.
llvm-svn: 331101
The invocation of getExact in ScalarEvolution::getBackedgeTakenInfo is used
only for collecting statistics and for an assert.
Even if statistics are disabled, only the code consuming the result is
eliminated; the invocation of getExact itself will not be eliminated
because it may have side effects like the creation of new SCEVs.
So perform the invocation only when we collect statistics or execute asserts.
Reviewers: mkazantsev, sanjoy, javed.absar
Reviewed By: javed.absar
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D46178
llvm-svn: 331099
This commit makes it so that if you outline a def of some register, then the
call instruction created by the outliner actually reflects that the register
is defined by the call. It also makes it so that outlined functions don't
have the TracksLiveness property.
Outlined calls shouldn't break liveness assumptions that someone might make.
This also un-XFAILs the noredzone test, and updates the calls test.
llvm-svn: 331095
The effect of doing so is to not disrupt the LoopPassManager when mixing this pass with other loop passes. This should help locality of access substantially and avoids the cost of computing PostDom.
The assumption here is that the full GuardWidening (which does use PostDom) is run as a canonicalization before loop opts and that this version just catches cases exposed by other loop passes (i.e. LoopPredication, IndVarSimplify, LoopUnswitch, etc.).
llvm-svn: 331094
Summary:
D42479 (rL329525) enabled the SDIV combine for pow2 non-splat vector
divisors. But when there is a 1 in the vector, the generated instruction
sequence involves shifting a value by its bit width, which is undefined
(c64f4dbfe3/lib/CodeGen/SelectionDAG/DAGCombiner.cpp (L6000-L6006)).
In particular, on architectures that do not support vector instructions,
each element of a vector will be computed separately using scalar
operations, and the resulting value will then be undef for the '1' entries
in the vector.
(An all-ones divisor vector is fine; only vectors mixing 1 with other values
are affected.)
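The undefined shift can be illustrated outside the DAG combiner too (a hedged
sketch, not the actual lowering sequence): dividing by 2^k shifts by
BitWidth - k at one point, and for a divisor of 1 (k = 0) that is a shift by
the full bit width, which plain C++ also leaves undefined.
```
#include <cstdint>

// Illustrative only: the shift amount the pow2-SDIV lowering would use for a
// 32-bit element divided by 2^k. For k == 0 (divisor 1) this is 32, and a
// shift by the full bit width is undefined behavior.
int32_t illustrateShift(int32_t V, unsigned K) {
  unsigned Amount = 32 - K; // K == 0  =>  Amount == 32
  return V >> Amount;       // UB when Amount >= 32
}
```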
Reviewers: RKSimon, jgravelle-google
Subscribers: jfb, dschuff, sbc100, jgravelle-google, llvm-commits
Differential Revision: https://reviews.llvm.org/D46161
llvm-svn: 331092
Summary:
Previously the flag intrinsics always used the index instructions even if a mask instruction also exists.
To fix this I've created a single ISD node type that returns index, mask, and flags. The SelectionDAG CSE process will merge all flavors of intrinsics with the same inputs into a single node. Then during isel we just have to look at which results are used to know what instruction to generate. If both mask and index are used we'll need to emit two instructions. But for all other cases we can emit a single instruction.
Since I had to do manual isel anyway, I've removed the pseudo instructions and custom inserter code that was working around tablegen limitations with multiple implicit defs.
I've also renamed the recently added sse42.ll test case to sttni.ll since it focuses on that subset of the sse4.2 instructions.
Reviewers: chandlerc, RKSimon, spatel
Reviewed By: chandlerc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D46202
llvm-svn: 331091
For local variables the first DW_OP_deref is consumed by turning the
location kind into a memory location, but that only makes sense for
values that are in a register to begin with, which cannot happen for
global variables that are attached to a symbol.
rdar://problem/39741860
This reapplies r330970 after fixing an uncovered bug in r331086 and
working around the situation caused by it.
llvm-svn: 331090
Now local value sinking only scans and numbers instructions added
between the current flush point and the last flush point. This ensures
that ISel is overall linear in the size of the BB.
Fixes PR37010 and re-enables local value sinking by default.
llvm-svn: 331087
This patch adds support for fragment expressions in
TryToShrinkGlobalToBoolean(), which were previously just dropped.
Thanks to Reid Kleckner for providing me a reproducer!
llvm-svn: 331086
Summary:
Currently, we
1. match `LHS` matcher to the `first` operand of binary operator,
2. and then match `RHS` matcher to the `second` operand of binary operator.
If that does not match, we swap the `LHS` and `RHS` matchers:
1. match `RHS` matcher to the `first` operand of binary operator,
2. and then match `LHS` matcher to the `second` operand of binary operator.
This works ok.
But it complicates writing of commutative matchers, where one would like to match
(`m_Value()`) the value on one side, and use (`m_Specific()`) it on the other side.
This is additionally complicated by the fact that `m_Specific()` stores the `Value *`,
not `Value **`, so it won't work at all out of the box.
The last problem is trivially solved by adding a new `m_c_Specific()` that stores the
`Value **`, not `Value *`. I'm choosing to add a new matcher rather than change the
existing one because I guess all the current users are OK with the existing behavior,
and this additional pointer indirection may have performance drawbacks.
Also, I'm storing a pointer, not a reference, because for some mysterious-to-me reason
it did not work with the reference.
The first one appears trivial, too.
Currently, we
1. match `LHS` matcher to the `first` operand of binary operator,
2. and then match `RHS` matcher to the `second` operand of binary operator.
If that does not match, we swap the ~~`LHS` and `RHS` matchers~~ **operands**:
1. match ~~`RHS`~~ **`LHS`** matcher to the ~~`first`~~ **`second`** operand of binary operator,
2. and then match ~~`LHS`~~ **`RHS`** matcher to the ~~`second`~~ **`first`** operand of binary operator.
Surprisingly, `$ ninja check-llvm` still passes with this.
But I expect the bots will disagree...
The motivational unittest is included.
I'd like to use this in D45664.
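To make the `m_Specific()` limitation concrete, here is a hedged sketch using
only the pre-existing PatternMatch API (`isXorOfSameValue` is a made-up helper,
not code from this patch): `m_Specific(X)` snapshots the value of `X` when the
matcher expression is constructed, i.e. before `m_Value(X)` has bound anything,
so bind-and-reuse has to be written in two steps without the new matcher.
```
#include "llvm/IR/PatternMatch.h"

using namespace llvm;
using namespace llvm::PatternMatch;

// Hypothetical helper: does Op compute (V xor V) for some single value V?
static bool isXorOfSameValue(Value *Op) {
  // Tempting but wrong as a single commutative match:
  //   Value *X;
  //   match(Op, m_Xor(m_Value(X), m_Specific(X)))
  // m_Specific(X) captures X before m_Value(X) has bound it.
  // Workaround without the new matcher: bind both sides, compare afterwards.
  Value *A = nullptr, *B = nullptr;
  if (!match(Op, m_Xor(m_Value(A), m_Value(B))))
    return false;
  return A == B;
}
```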
Reviewers: spatel, craig.topper, arsenm, RKSimon
Reviewed By: craig.topper
Subscribers: xbolva00, wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D45828
llvm-svn: 331085
As suggested in D45842
(although still not sure if we're going to advance that),
we must invalidate references to instructions that have
been recycled (operands were changed, so result is different).
llvm-svn: 331083
Some of the bots were failing in a different way to the others. These were
unable to compare tuples. Fix this by changing to a struct, thereby avoiding
the quirks of tuples.
llvm-svn: 331081
We currently have a hard to solve analysis problem around the order of instructions within a potentially throwing block. We can't cheaply determine whether a given instruction is before the first potential throw in the block. While we're working on that in the background, special case the first instruction within the header.
Why this particular special case? Well, headers are guaranteed to execute if the loop does, and it turns out we tend to produce this form in practice.
In a follow-on patch, I intend to extend LICM with an alternate approach which works for any instruction in the header before the first throw, but this is the best I can come up with for other users of the analysis (such as store promotion).
Note: I can't show the difference in the analysis result since we're ORing in the expensive instruction walk used by SCEV. Using the full walk is not suitable for a general solution.
llvm-svn: 331079
Summary:
Only allow a single unique .symver alias per symbol. This matches the
behavior of gas. I noticed that we ignored multiple mismatched symver
directives looking at https://reviews.llvm.org/D45798
Reviewers: pcc, tejohnson, espindola
Reviewed By: pcc
Subscribers: emaste, arichardson, llvm-commits, kcc
Differential Revision: https://reviews.llvm.org/D45845
llvm-svn: 331078
Extend the live-in check to all aliased registers so that we can
allow sinking Copy instructions when only the implicit def is in the
successor's live-ins.
llvm-svn: 331072
Summary:
Currently only the memory size is supported but others can be added as
needed.
narrowScalar for G_LOAD and G_STORE now correctly update the
MachineMemOperand and will refuse to legalize atomics since those need more
careful expansions to maintain atomicity.
Reviewers: ab, aditya_nandakumar, bogner, rtereshin, aemerson, javed.absar
Reviewed By: aemerson
Subscribers: aemerson, rovka, kristof.beyls, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D45466
llvm-svn: 331071
The idea is to have a pass which performs the same transformation as GuardWidening, but can be run within a loop pass manager without disrupting the pass manager structure. As demonstrated by the test case, this doesn't quite get there because of issues with post dom, but it is a good step in the right direction. The motivation is purely to reduce compile time since we can now preserve locality during the loop walk.
This patch only includes a legacy pass. A follow up will add a new style pass as well.
llvm-svn: 331060
These branches were previously unanalyzable and unselectable. Add them and
recognize how to generate their inverses.
Reviewers: smaksimovic, atanasyan, abeserminji
Differential Revision: https://reviews.llvm.org/D46113
llvm-svn: 331050
Put the first ldp at the end, so that the load-store optimizer can run
and merge the ldp and the add into a post-index ldp.
This didn't work in case no frame was needed and resulted in code size
regressions.
llvm-svn: 331044
Summary:
`HandleLLVMOptions` adds `-w` to the cflags if `LLVM_ENABLE_WARNINGS` is not on.
With `-w`, `check_cxx_compiler_flag` doesn't error out for unsupported flags
(for example `-mcrc` on x86_64), and those flags end up being detected as
working - and really they aren't.
I am not entirely sure what the best way to solve this is, but setting
`LLVM_ENABLE_WARNINGS` prior to including `HandleLLVMOptions` does the job.
Reviewers: phosek, beanz
Reviewed By: phosek
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D46079
llvm-svn: 331042
If the MachineInstr uses a custom inserter and is then erased after
instruction selection, there is no use for mapping it to a sched class.
Review: Ulrich Weigand
llvm-svn: 331040
We currently support LCSSA PHI nodes in the outer loop exit, if their
incoming values do not come from the outer loop latch or if the
outer loop latch has a single predecessor. In that case, the outer loop latch
will be executed only if the inner loop gets executed. If we have multiple
predecessors for the outer loop latch, it may be executed even if the inner
loop does not get executed.
This is a first step to support the case described in
https://bugs.llvm.org/show_bug.cgi?id=30472
Reviewers: efriedma, karthikthecool, mcrosier
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D43237
llvm-svn: 331037
This adds IR intrinsics for the AArch64 dot-product instructions introduced in
v8.2-A.
Differential revision: https://reviews.llvm.org/D46107
llvm-svn: 331036
Since PTX has grown a <2 x half> datatype, vectorization has become more
important. The late LoadStoreVectorizer intentionally only does loads
and stores, but now arithmetic has to be vectorized for optimal
throughput too.
This is still very limited: SLP vectorization happily creates <2 x half>
if it's a legal type, but there's still a lot of register moving
happening to get that fed into a vectorized store. Overall it's a small
performance win by reducing the amount of arithmetic instructions.
I haven't really checked what the loop vectorizer does to PTX code, the
cost model there might need some more tweaks. I didn't see it causing
harm though.
Differential Revision: https://reviews.llvm.org/D46130
llvm-svn: 331035
This patch makes the compiler not fuse fmul and fadd/fsub into
fmadd/fmsub by default. Instead, the -fp-contract=fast option can
be used when such behavior is desired.
Differential Revision: https://reviews.llvm.org/D46057
llvm-svn: 331033
This adds IR intrinsics for the ARM dot-product instructions introduced in
v8.2-A.
Differential revision: https://reviews.llvm.org/D46106
llvm-svn: 331032
Back when the R52 schedule was added in rL286949, there was no way
to enable machine schedules in ARM for specific cores. Since then a
target feature has been added. This enables the feature for R52,
removing the need to manually specify compiler flags.
llvm-svn: 331027
Summary:
The value tracking analysis uses function alignment to infer that the
least significant bits of function pointers are known to be zero.
Unfortunately, this is not correct for ARM targets: the least
significant bit of a function pointer stores the ARM/Thumb state
information (i.e., the LSB is set for Thumb functions and cleared for
ARM functions).
The original approach (https://reviews.llvm.org/D44781) introduced a
new field for function pointer alignment in the DataLayout structure
to address this. But it seems unlikely that optimizations based on
function pointer alignment would bring much benefit in practice to
justify the additional maintenance burden, so this patch simply
assumes that function pointer alignment is always unknown.
Reviewers: javed.absar, efriedma
Reviewed By: efriedma
Subscribers: kristof.beyls, llvm-commits, hfinkel, rogfer01
Differential Revision: https://reviews.llvm.org/D46110
llvm-svn: 331025
Add a new umin creation method which accepts a list of operands.
SCEV does not represent umin, which is required in getExact, so
it expresses umin as umax combined with not. As a result, the transformation
of a tree of max operations into a max with several operands did not work.
We now simply use the newly introduced method to create a umin from several operands.
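The umax-plus-not encoding rests on a simple identity; a minimal standalone
check (plain C++, unsigned integers standing in for SCEV expressions):
```
#include <algorithm>
#include <cassert>
#include <cstdint>

int main() {
  uint32_t A = 7, B = 42, C = 3;
  // umin(a, b, c) == ~umax(~a, ~b, ~c): bitwise NOT reverses the unsigned
  // order, which is how SCEV models umin in terms of umax and not.
  uint32_t ViaUmax = ~std::max({~A, ~B, ~C});
  assert(ViaUmax == std::min({A, B, C}));
  return 0;
}
```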
Reviewers: sanjoy, mkazantsev
Reviewed By: sanjoy
Subscribers: javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D46047
llvm-svn: 331015
It doesn't unwind, and the wrong marking leads to the creation of an
.eh_frame section when it isn't necessary.
Differential Revision: https://reviews.llvm.org/D46082
llvm-svn: 331008
The program might have unusual expectations for functions; for example,
the Linux kernel's build system warns if it finds references from .text
to .init.data.
I'm not sure this is something we actually want to make any guarantees
about (there isn't any explicit rule that would disallow outlining
in this case), but we might want to be conservative anyway.
Differential Revision: https://reviews.llvm.org/D46091
llvm-svn: 331007
Summary: Also test for symbols information in test/MC/WebAssembly/debug-info.ll.
Subscribers: jfb, dschuff, jgravelle-google, aheejin, sunfish, JDevlieghere, llvm-commits
Differential Revision: https://reviews.llvm.org/D46160
llvm-svn: 331005
Summary: If the file stream argument is not captured and the source is fopen, we can replace IO calls with unlocked IO ("_unlocked" function variants) to gain better speed.
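As a rough illustration of the kind of rewrite this enables (the surrounding
function is made up; getc_unlocked is the POSIX unlocked variant, with
fgetc_unlocked and friends available as GNU extensions):
```
#include <cstdio>

long countBytes(const char *Path) {
  FILE *F = std::fopen(Path, "rb"); // stream comes straight from fopen and
  if (!F)                           // never escapes this function...
    return -1;
  long N = 0;
  while (std::fgetc(F) != EOF)      // ...so fgetc() could be replaced with
    ++N;                            // getc_unlocked()/fgetc_unlocked(),
  std::fclose(F);                   // skipping per-call stream locking.
  return N;
}
```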
Reviewers: efriedma, RKSimon, spatel, sanjoy, hfinkel, majnemer
Subscribers: lebedev.ri, llvm-commits
Differential Revision: https://reviews.llvm.org/D45736
llvm-svn: 331002
Summary: The instruction index was never referenced in the body. Just a minor cleanup.
Reviewers: andreadb
Reviewed By: andreadb
Subscribers: javed.absar, gbedwell, llvm-commits
Differential Revision: https://reviews.llvm.org/D46142
llvm-svn: 331001
Summary:
The '@' character is a special character in Doxygen. In a handful of cases we were not escaping this character which resulted in llvm intrinsics not being rendered properly. Specifically, the @llvm part was removed.
For example, see https://llvm.org/doxygen/classllvm_1_1AssumptionCache.html. There are a few references to '.assume' without the @llvm. prefix. This patch corrects this.
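For illustration, the escaping looks like this in a doc comment (a
hypothetical example, not one of the comments touched by this patch; Doxygen
renders `\@` as a literal '@'):
```
/// Unescaped, Doxygen interprets "@llvm" as a command and drops it, so the
/// rendered text shows only ".assume":
///   Remove any cached results for calls to @llvm.assume.
/// Escaping the '@' keeps the full intrinsic name in the output:
///   Remove any cached results for calls to \@llvm.assume.
void doxygenEscapingExample();
```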
Reviewers: sanjoy
Reviewed By: sanjoy
Subscribers: llvm-commits, sanjoy
Differential Revision: https://reviews.llvm.org/D45981
llvm-svn: 330998
Summary:
Simplify integer add expression X % C0 + (( X / C0 ) % C1) * C0 to
X % (C0 * C1). This is a common pattern seen in code generated by the XLA
GPU backend.
Add test cases for this new optimization.
Patch by Bixia Zheng!
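A quick standalone sanity check of the identity for the unsigned case (plain
C++, not the InstCombine code):
```
#include <cassert>

int main() {
  const unsigned C0 = 4, C1 = 8;
  for (unsigned X = 0; X < 1024; ++X) {
    // X % C0 is the low "digit" in base C0, (X / C0) % C1 the next digit;
    // scaling the second back by C0 and adding reconstructs X modulo C0 * C1.
    assert(X % C0 + ((X / C0) % C1) * C0 == X % (C0 * C1));
  }
  return 0;
}
```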
Reviewers: sanjoy
Reviewed By: sanjoy
Subscribers: efriedma, craig.topper, lebedev.ri, llvm-commits, jlebar
Differential Revision: https://reviews.llvm.org/D45976
llvm-svn: 330992
remainder expressions as operands.
Summary:
Add test cases to prepare for the new optimization that simplifies the integer
add expression X % C0 + (( X / C0 ) % C1) * C0 to X % (C0 * C1).
Patch by Bixia Zheng!
Reviewers: sanjoy
Reviewed By: sanjoy
Subscribers: jlebar, llvm-commits
Differential Revision: https://reviews.llvm.org/D46017
llvm-svn: 330991
The main goal of this change is to make it much easier to track which
rules are actually covered by Testgen'erated regression tests.
Reviewers: aemerson, dsanders
Differential Revision: https://reviews.llvm.org/D46095
llvm-svn: 330988
The `lb` and `lbu` commands accept 16-bit signed offsets, but GAS accepts
larger offsets for these commands. If an offset does not fit in the 16-bit
range, the `lb` command is translated into a lui/lb or lui/addu/lb series.
It's interesting that initially the LLVM assembler supported this feature,
but later it was broken.
This patch restores support for 32-bit offsets. It replaces the `mem_simm16`
operand for the `LB` and `LBu` definitions with the new `mem_simmptr` operand.
This operand is intended to check that the offset fits in the same size as
is used for pointers. Later we will be able to extend this rule and
accept 64-bit offsets when possible.
Some issues remain:
- The regression also affects the LD, SD, LH, and LHU commands. I'm going
to fix them in a separate patch.
- GAS accepts any 32-bit value as an offset. Currently LLVM accepts signed
16-bit values, and this patch extends the range to signed 32-bit offsets.
In other words, the following code is accepted by GAS but still triggers
an error in LLVM:
```
lb $4, 0x80000004
# gas
lui a0, 0x8000
lb a0, 4(a0)
```
- In the case of 64-bit pointers, GAS accepts a 64-bit offset and translates
it into an li/dsll/lb series of commands. LLVM still rejects it.
Probably this feature has never been implemented in LLVM. This issue
is left for a separate patch.
```
lb $4, 0x800000001
# gas
li a0, 0x8000
dsll a0, a0, 0x14
lb a0, 4(a0)
```
Differential Revision: https://reviews.llvm.org/D45020
llvm-svn: 330983
Summary:
Use the FP for scavenged spill slot accesses to prevent corruption of
the callee-save region when the SP is re-aligned.
Based on a problem and patch reported by @paulwalker-arm.
This is an alternative to the solution proposed in D45770.
Reviewers: t.p.northover, paulwalker-arm, thegameg, javed.absar
Subscribers: qcolombet, mcrosier, paulwalker-arm, kristof.beyls, rengolin, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D46063
llvm-svn: 330976
This diff implements the --redefine-sym option
for changing the name of a symbol.
Test plan: make check-all
Differential revision: https://reviews.llvm.org/D46029
llvm-svn: 330973
For local variables the first DW_OP_deref is consumed by turning the
location kind into a memory location, but that only makes sense for
values that are in a register to begin with, which cannot happen for
global variables that are attached to a symbol.
rdar://problem/39741860
llvm-svn: 330970
Summary:
The old comment referred to llvm/IR/Writer.h, which no longer exists.
This patch replaces it with an up-to-date description of AsmWriter library.
Patch by Alex Yursha.
Reviewers: gribozavr, vlad.tsyrklevich
Reviewed By: vlad.tsyrklevich
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D45895
llvm-svn: 330962
Summary:
Follow-up to D43690, the EliminateAvailableExternally pass currently
runs under -O0 and -O2 and up. Under -O1 we would still want to drop
available_externally symbols to reduce space without inlining having
run.
Reviewers: tejohnson
Reviewed By: tejohnson
Subscribers: mehdi_amini, llvm-commits, kcc
Differential Revision: https://reviews.llvm.org/D46093
llvm-svn: 330961
Correct the definitions of ei, di, eret, deret, wait, syscall and break.
Also provide microMIPS specific aliases to match the MIPS aliases.
Additionally correct the definition of the wait instruction so that
it is present in the instruction mapping tables.
Reviewers: smaksimovic, abeserminji, atanasyan
Differential Revision: https://reviews.llvm.org/D45939
llvm-svn: 330952
As noted, the attribute name is subject to change once we have
the clang side implemented, but it's clear that we need some
kind of attribute-based predication here based on the discussion
for:
rL330437
llvm-svn: 330951
This is another preliminary step for disabling this transform as
discussed in the post-commit thread for:
rL330437
I'm using one of the names suggested there for the attribute, but
we can fix that up as needed once the clang side of this is sorted
out.
llvm-svn: 330950
This causes some slight shuffling but no meaningful codegen differences on the
corpus I used for testing, but it has a larger impact when combined with e.g.
rematerialisation. Regardless, it makes sense to report as accurate
target-specific information as possible.
llvm-svn: 330949
There's no direct instruction for this, but it's trivially implemented
with two movs. Without this the code generator just dies when
encountering a shufflevector.
Differential Revision: https://reviews.llvm.org/D46116
llvm-svn: 330948
As discussed in the post-review comments for rL330437,
we need to guard this fold to allow existing code to
keep working with the undefined behavior that they've
come to rely on.
That would mean duplicating more code than we already
have, so let's fix that first.
llvm-svn: 330947
This patch adds a new shuffle kind useful for transposing a 2xn matrix. These
transpose shuffle masks read corresponding even- or odd-numbered vector
elements from two n-dimensional source vectors and write each result into
consecutive elements of an n-dimensional destination vector. The transpose
shuffle kind is meant to model the TRN1 and TRN2 AArch64 instructions. As such,
this patch also considers transpose shuffles in the AArch64 implementation of
getShuffleCost.
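As a hedged sketch of what such a mask looks like (illustrative helper, not
code from this patch), the even-lane and odd-lane transpose masks for two
n-element vectors A and B, using the usual convention that indices >= n
select from the second vector:
```
#include <vector>

// Build a TRN1-like (Odd = false) or TRN2-like (Odd = true) transpose mask.
std::vector<int> transposeMask(unsigned N, bool Odd) {
  std::vector<int> Mask;
  for (unsigned I = 0; I < N; I += 2) {
    Mask.push_back(I + Odd);     // even/odd element of the first vector
    Mask.push_back(N + I + Odd); // matching element of the second vector
  }
  return Mask;
}

// For N = 4: transposeMask(4, false) == {0, 4, 2, 6}  -> <A0, B0, A2, B2>
//            transposeMask(4, true)  == {1, 5, 3, 7}  -> <A1, B1, A3, B3>
```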
Differential Revision: https://reviews.llvm.org/D45982
llvm-svn: 330941
I'm unable to construct a representative test case that demonstrates the
advantage, but it seems sensible to report accurate target-specific
information regardless.
llvm-svn: 330938
This patch extends the PredicateMethod of AsmOperands used in SVE's
LD1 instructions with a DiagnosticPredicate. This makes them 'context
sensitive' to the operand that has been parsed and tells the user to
use the right register (with the expected shift/extend), rather than telling
them the immediate is out of range when it actually parsed a register.
Patch [2/2] in a series to improve assembler diagnostics for SVE:
- Patch [1/2]: https://reviews.llvm.org/D45879
- Patch [2/2]: https://reviews.llvm.org/D45880
Reviewers: olista01, stoklund, craig.topper, mcrosier, rengolin, echristo, fhahn, SjoerdMeijer, evandro, javed.absar
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D45880
llvm-svn: 330934
This has no impact on codegen for the current RISC-V unit tests or my small
benchmark set and very minor changes in a few programs in the GCC torture
suite. Based on this, I haven't been able to produce a representative test
program that demonstrates a benefit from isLegalAddressingMode. I'm committing
the patch anyway, on the basis that presenting accurate information to the
target-independent code is preferable to relying on incorrect generic
assumptions.
llvm-svn: 330932
An optional, light-weight and backward-compatible mechanism to allow
specifying that a diagnostic _only_ applies to a partial mismatch (NearMiss),
rather than a full mismatch.
Patch [1/2] in a series to improve assembler diagnostics for SVE.
- Patch [1/2]: https://reviews.llvm.org/D45879
- Patch [2/2]: https://reviews.llvm.org/D45880
Reviewers: olista01, stoklund, craig.topper, mcrosier, rengolin, echristo, fhahn, SjoerdMeijer, evandro, javed.absar
Reviewed By: olista01
Differential Revision: https://reviews.llvm.org/D45879
llvm-svn: 330930
LLVM might be compiled using a toolchain file which controls the linker
to use via flags (e.g. `-B` or `-fuse-ld=`). Take these flags into
account for linker detection. We can also correct the detection by
manually passing LLVM_USE_LINKER, of course, but it seems more
convenient to have the detection take flags into account.
Differential Revision: https://reviews.llvm.org/D45464
llvm-svn: 330924
instructions.
These have special permission according to the x86 manual to read
unaligned memory, and this folding is done by ICC and GCC as well.
This corrects one of the issues identified in PR37246.
llvm-svn: 330896
comparison instructions (pcmp[ei]stri*).
These will help show improvements from fixes to PR37246.
I've not really covered the mask forms of this intrinsic as I don't have
as good of an intuition about the likely usage patterns there. Happy for
someone to extend this with tests covering the mask form.
llvm-svn: 330895
This reverts commit 023c8be90980e0180766196cba86f81608b35d38.
This patch triggers a miscompile of zlib on the PowerPC platform. Most likely it is
caused by some pre-backend PPC-specific pass, but we don't clearly know the
reason yet. So we temporarily revert this patch with the intention to return it
once the problem is resolved. See bug 37229 for details.
llvm-svn: 330893
If no data or instructions are emitted after a location directive, we
should clear the cv_loc when we change sections, or it will be emitted
at the beginning of the next section. This violates our invariant that
all .cv_loc directives belong to the same section. Add clearer
assertions for this.
llvm-svn: 330884
This makes it possible to reverse a filtered range. For example, here's
a way to visit memory accesses in a BasicBlock in reverse order:
  auto MemInsts = reverse(make_filter_range(BB, [](Instruction &I) {
    return isa<StoreInst>(&I) || isa<LoadInst>(&I);
  }));
  for (auto &MI : MemInsts)
    ...
To implement this functionality, I factored out forward iteration
functionality into filter_iterator_base, and added a specialization of
filter_iterator_impl which supports bidirectional iteration. Thanks to
Tim Shen, Zachary Turner, and others for suggesting this design and
providing feedback! This version of the patch supersedes the original
(https://reviews.llvm.org/D45792).
This was motivated by a problem we encountered in D45657: we'd like to
visit the non-debug-info instructions in a BasicBlock in reverse order.
Testing: check-llvm, check-clang
Differential Revision: https://reviews.llvm.org/D45853
llvm-svn: 330875
Algorithmically compute the 'x20' SDIV/UDIV vector costs - this is necessary for PR36550 when DIV costs will be driven from the scheduler models.
llvm-svn: 330870
lit is picking up a stale executable in the unittests tree, which is
failing on Windows.
To simplify the CMake and avoid problems like this in the future, now we
always compile the test, but the test exits successfully when plugins
are not enabled.
llvm-svn: 330867
- Add "amdgpu-waitcnt-forcezero" to force all waitcnt instrs to be emitted as s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
- Add debug counters to control force emit of s_waitcnt instrs; debug counters:
si-insert-waitcnts-forceexp: force emit s_waitcnt expcnt(0) instrs
si-insert-waitcnts-forcevm: force emit s_waitcnt vmcnt(0) instrs
si-insert-waitcnts-forcelgkm: force emit s_waitcnt lgkmcnt(0) instrs
- Add some debug statements
Note that a variant of this patch was previously committed/reverted.
Differential Revision: https://reviews.llvm.org/D45888
llvm-svn: 330862
Debug var, expr and loc were only supported for non-fixed stack objects.
This patch adds the following fields to the "fixedStack:" entries, and
renames the ones from "stack:" to:
* debug-info-variable
* debug-info-expression
* debug-info-location
Differential Revision: https://reviews.llvm.org/D46032
llvm-svn: 330859
Imports in a wasm module can have a custom module name. This change
adds the module name to the WasmSymbol structure so that the linker
can preserve this module name.
This is needed to fix: https://bugs.llvm.org/show_bug.cgi?id=37168
Differential Revision: https://reviews.llvm.org/D45797
llvm-svn: 330854
Previously we only formed MUL_IMM when we split a constant. This blocked load folding on those cases. We should also form MUL_IMM for 3/5/9 to favor LEA over load folding.
Differential Revision: https://reviews.llvm.org/D46040
llvm-svn: 330850
Previously `call zero`, `call f0` etc would fail. This leads to compilation
failures if building programs that define functions with those names and using
-save-temps.
llvm-svn: 330846
Summary:
When performing indirect call promotion, the current implementation inspects "all" parameters of the callsite and attempts to match them with the formal argument types of the callee function. However, it is not possible to find the type for variable length arguments, and the compiler crashes when it attempts to match the type for a variable length argument.
It seems that the bug was introduced with D40658. Prior to that, the type matching was performed only for the parameters whose ID is less than callee->getFunctionNumParams(). The attached test case will crash without the patch.
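A hedged sketch of the guarded matching loop (names follow the old CallSite
API and are illustrative of the idea, not the exact code in this patch): only
parameters with a corresponding formal argument are type-checked, so trailing
varargs parameters are left alone instead of indexing past the formal
parameter list.
```
#include "llvm/IR/CallSite.h"
#include "llvm/IR/Function.h"
using namespace llvm;

// Sketch: decide whether the callsite's arguments are type-compatible with
// the candidate callee, ignoring the variable-length tail.
static bool argumentsTypeCheck(CallSite CS, Function *Callee) {
  FunctionType *FTy = Callee->getFunctionType();
  for (unsigned I = 0, E = CS.arg_size(); I != E; ++I) {
    if (I >= FTy->getNumParams())
      break; // varargs tail: no formal parameter to match against
    if (CS.getArgOperand(I)->getType() != FTy->getParamType(I))
      return false; // formal/actual type mismatch: don't promote
  }
  return true;
}
```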
Reviewers: mssimpso, davidxl, davide
Reviewed By: mssimpso
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D46026
llvm-svn: 330844
Virtually all other tablegen outputs are called .inc, not .gen, so rename these two too for consistency.
No behavior change.
https://reviews.llvm.org/D46058
llvm-svn: 330843
As discussed in D45862, we want to delete parts of
this code because it can create more instructions
than it removes. But we also want to preserve some
folds that are winners, so tidy up what's here to
make splitting the good from bad a bit easier.
llvm-svn: 330841
As discussed in D45862, we want these folds sometimes
because they're good improvements.
But as we can see here, the current logic doesn't
check uses and doesn't produce optimal code in all
cases.
llvm-svn: 330837
To do this:
1. Change the GlobalAddress SDNode to TargetGlobalAddress to prevent the legalizer
from splitting the symbol.
2. Change the ExternalSymbol SDNode to TargetExternalSymbol to prevent the legalizer
from splitting the symbol.
3. Let PseudoCALL match a direct call with the target operands TargetGlobalAddress
and TargetExternalSymbol.
Differential Revision: https://reviews.llvm.org/D44885
llvm-svn: 330827
To do this:
1. Add PseudoCALLIndirct to match indirect function call.
2. Add PseudoCALL to support parsing and printing the pseudo `call` in assembly
3. Expand PseudoCALL to the following form with R_RISCV_CALL relocation type
while encoding:
auipc ra, func
jalr ra, ra, 0
If we expand PseudoCALL before emitting assembly, we will see the auipc and jalr
pair when compiling with -S. It's hard for the assembly parser to parse this
pair, identify that its semantics are a function call, and then insert the
R_RISCV_CALL relocation type, although we could insert the R_RISCV_PCREL_HI20 and
R_RISCV_PCREL_LO12_I relocation types instead of R_RISCV_CALL.
Due to the RISCV relocation design, the auipc and jalr pair can only relax to jal with
the R_RISCV_CALL + R_RISCV_RELAX relocation types.
We expand PseudoCALL as late as encoding (RISCVMCCodeEmitter) instead of before
emitting assembly (RISCVAsmPrinter) because we want to preserve the call
pseudoinstruction in the assembly code. It's more readable, and the assembly parser
can identify the call assembly and insert the R_RISCV_CALL relocation type.
Differential Revision: https://reviews.llvm.org/D45859
llvm-svn: 330826
ISel is currently picking 'JAL' over 'JAL_MM' for calling a function when
targeting microMIPS. A later patch will correct this behaviour.
This patch extends the mechanism for transforming instructions into their short
delay slot forms to recognise 'JAL_MM' and transform it into 'JALS_MM'.
llvm-svn: 330825
With this patch, options to add/tweak views are all grouped together in the
-help output.
The new "View Options" category looks like this:
```
View Options:
-dispatch-stats - Print dispatch statistics
-instruction-info - Print the instruction info view
-instruction-tables - Print instruction tables
-register-file-stats - Print register file statistics
-resource-pressure - Print the resource pressure view
-retire-stats - Print retire control unit statistics
-scheduler-stats - Print scheduler statistics
-timeline - Print the timeline view
-timeline-max-cycles=<uint> - Maximum number of cycles in the timeline view. Defaults to 80 cycles
-timeline-max-iterations=<uint> - Maximum number of iterations to print in timeline view
```
llvm-svn: 330816