These are used in SReg_32, and when we start to use SGPR_LO16
there will be complaints that not all registers in the RC support
all subreg indexes. For now it is NFC.
Unused regunits are reserved so that the verifier does not complain
about missing phys reg live-ins.
Differential Revision: https://reviews.llvm.org/D78591
Implement passing of ByVal formal arguments when the argument is passed
partly in the argument registers, with the remainder of the argument
passed on the stack.
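A minimal sketch of the split arithmetic, assuming hypothetical ABI parameters (the register count, register size, and aggregate size below are all made up):
```cpp
#include <algorithm>
#include <cstdio>

int main() {
  const unsigned NumArgRegs = 8, RegBytes = 8; // assumed ABI parameters
  unsigned RegsUsed = 6;   // argument registers consumed by earlier args
  unsigned ByValSize = 40; // size of the byval aggregate in bytes
  // Leading bytes fill the remaining registers; the rest spills to stack.
  unsigned AvailRegBytes = (NumArgRegs - RegsUsed) * RegBytes;
  unsigned InRegs = std::min(ByValSize, AvailRegBytes);
  unsigned OnStack = ByValSize - InRegs;
  std::printf("%u bytes in registers, %u bytes on the stack\n", InRegs,
              OnStack);
  return 0;
}
```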
Differential Revision: https://reviews.llvm.org/D78515
Similar to code in `getAArch64Cmp` in AArch64ISelLowering.
When we get a compare against a constant, sometimes that constant isn't valid
for selecting an immediate form.
However, you can often get a valid constant by adding 1 or subtracting 1
and updating the condition code.
This implements the following transformations when valid:
- x slt c => x sle c - 1
- x sge c => x sgt c - 1
- x ult c => x ule c - 1
- x uge c => x ugt c - 1
- x sle c => x slt c + 1
- x sgt c => x sge c + 1
- x ule c => x ult c + 1
- x ugt c => x uge c + 1
"Valid" meaning the constant doesn't wrap around when we adjust it, and the result
gives us a compare which can be selected into an immediate form, as sketched below.
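A minimal standalone sketch of the adjustment, with a hypothetical isLegalCmpImm() standing in for the real encoding check (AArch64 compares take a 12-bit immediate, optionally shifted left by 12):
```cpp
#include <cstdint>
#include <optional>
#include <utility>

enum class Pred { SLT, SLE, SGT, SGE, ULT, ULE, UGT, UGE };

// Hypothetical stand-in for the target's immediate-encoding check.
static bool isLegalCmpImm(uint64_t C) {
  return (C & ~0xfffULL) == 0 || (C & ~0xfff000ULL) == 0;
}

// Adjust an unencodable compare constant by 1 and relax/tighten the
// predicate accordingly, refusing the transform if the constant wraps.
static std::optional<std::pair<Pred, uint64_t>>
tryAdjustCmpImm(Pred P, uint64_t C) {
  switch (P) {
  case Pred::SLT: // x slt c => x sle c - 1
  case Pred::SGE: // x sge c => x sgt c - 1
    if ((int64_t)C == INT64_MIN || !isLegalCmpImm(C - 1))
      break;
    return std::pair<Pred, uint64_t>(P == Pred::SLT ? Pred::SLE : Pred::SGT,
                                     C - 1);
  case Pred::ULT: // x ult c => x ule c - 1
  case Pred::UGE: // x uge c => x ugt c - 1
    if (C == 0 || !isLegalCmpImm(C - 1))
      break;
    return std::pair<Pred, uint64_t>(P == Pred::ULT ? Pred::ULE : Pred::UGT,
                                     C - 1);
  case Pred::SLE: // x sle c => x slt c + 1
  case Pred::SGT: // x sgt c => x sge c + 1
    if ((int64_t)C == INT64_MAX || !isLegalCmpImm(C + 1))
      break;
    return std::pair<Pred, uint64_t>(P == Pred::SLE ? Pred::SLT : Pred::SGE,
                                     C + 1);
  case Pred::ULE: // x ule c => x ult c + 1
  case Pred::UGT: // x ugt c => x uge c + 1
    if (C == UINT64_MAX || !isLegalCmpImm(C + 1))
      break;
    return std::pair<Pred, uint64_t>(P == Pred::ULE ? Pred::ULT : Pred::UGE,
                                     C + 1);
  }
  return std::nullopt; // wrapped or still not encodable
}
```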
This also moves `getImmedFromMO` higher up in the file so we can use it.
Differential Revision: https://reviews.llvm.org/D78769
I've modified isTruncateFree to get an accurate cost for types that need to be split. I'm planning to look into fixing it for all vectors, but need more cost cleanups first.
Differential Revision: https://reviews.llvm.org/D78973
This is primarily motivated by the desire to move from Python2 to
Python3. `PYTHON_EXECUTABLE` is ambiguous; this explicitly identifies
the python interpreter in use. Since the LLVM build seems to complete
successfully with python3, use that across the build. The old
`PYTHON_EXECUTABLE` path is aliased so that it is treated as Python3.
In `InstCombiner::visitAdd()`, we have
```
// A+B --> A|B iff A and B have no bits set in common.
if (haveNoCommonBitsSet(LHS, RHS, DL, &AC, &I, &DT))
return BinaryOperator::CreateOr(LHS, RHS);
```
so we should handle such `or`'s here, too.
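As a small self-contained illustration (not LLVM code) of why an `or` of operands with no common bits behaves exactly like the `add` it replaced:
```cpp
#include <cassert>
#include <cstdint>

int main() {
  uint32_t x = 0xAB, y = 0x3C;
  uint32_t hi = x << 8;   // occupies bits 8 and above
  uint32_t lo = y & 0xFF; // occupies bits 0..7 only
  // With no bits set in common there are no carries, so or == add.
  assert((hi | lo) == (hi + lo));
  return 0;
}
```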
Add support for reserving LR in:
* the driver through `-ffixed-x30`
* cc1 through `-target-feature +reserve-x30`
* the backend through `-mattr=+reserve-x30`
* a subtarget feature `reserve-x30`
the same way we're doing for the other registers.
This changes the logic for lowering fp16 bitcasts to always produce
either a VMOVhr or a VMOVrh, instead of only trying to do it with
certain surrounding nodes. To perform the same optimisations, demanded
bits and known bits information has been added for them.
Differential Revision: https://reviews.llvm.org/D78587
getTargetStreamer() might return null (e.g. when running the inlined-strings.ll test),
so downcasting it to a reference would be wrong. This is detectable with -fsanitize=null.
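A minimal standalone sketch of the bug class, with hypothetical Base/Derived types standing in for the streamer classes:
```cpp
#include <iostream>

struct Base { virtual ~Base() = default; };
struct Derived : Base {
  void emit() { std::cout << "emit\n"; }
};

// Stand-in for getTargetStreamer(): may legitimately return null.
Base *getTargetStreamer() { return nullptr; }

int main() {
  Base *TS = getTargetStreamer();
  // Wrong: static_cast<Derived &>(*TS) would dereference a null pointer
  // (which -fsanitize=null reports). Check the pointer first instead.
  if (auto *D = static_cast<Derived *>(TS))
    D->emit();
  return 0;
}
```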
Reviewed By: steven.zhang
Differential Revision: https://reviews.llvm.org/D78686
Remove bad kill flags from load-and-test.mir as discovered by
https://reviews.llvm.org/D78586: "[MachineVerifier] Add more checks for
registers in live-in lists".
Review: Ulrich Weigand
Summary:
Change all mnemonics to match the assembly instructions, to simplify the
mnemonic naming rules. This time, update all branch instructions. This also
changes the code to use the %s10 register consistently.
Differential Revision: https://reviews.llvm.org/D78889
Summary:
Add simm7fp/mimmfp to represent floating point immediate values.
Also clean up the multiclasses that define floating point arithmetic
instructions so they handle simm7fp/mimmfp operands, and add several
regression tests for the new operands.
Differential Revision: https://reviews.llvm.org/D78887
Currently, on the PowerPC target, the function-scope UnsafeFPMath
option is used to drive the Machine Combiner pass.
This is not accurate in two ways:
1: the scope is not accurate. The Machine Combiner pass only requires
instruction-level flags instead of the function-scope option.
2: the floating point flags are not accurate. The Machine Combiner pass
only requires the floating point flags reassoc and nsz.
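As a sketch of the instruction-level check (MachineInstr::FmReassoc and MachineInstr::FmNsz are LLVM's real MI flags; the helper itself is illustrative):
```cpp
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Gate reassociation on the per-instruction fast-math flags rather
// than the function-wide UnsafeFPMath option.
static bool hasReassocFlags(const MachineInstr &MI) {
  return MI.getFlag(MachineInstr::FmReassoc) &&
         MI.getFlag(MachineInstr::FmNsz);
}
```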
Reviewed By: steven.zhang
Differential Revision: https://reviews.llvm.org/D78183
This method has been commented as deprecated for a while. Remove
it and replace all uses with the equivalent getCalledOperand().
I also made a few cleanups here, for example removing uses of
getElementType on a pointer where we could just use getFunctionType
from the call.
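The cleanup pattern looks roughly like this (CallBase::getFunctionType() is the replacement API named above; the helper is illustrative):
```cpp
#include "llvm/IR/InstrTypes.h"
using namespace llvm;

// Ask the call directly for its function type instead of peeling the
// callee pointer's pointee type via getElementType().
static Type *getCallReturnType(const CallBase &CB) {
  return CB.getFunctionType()->getReturnType();
}
```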
Differential Revision: https://reviews.llvm.org/D78882
Summary:
In the ppc-expand-isel pass, we use stepForward() to update the
liveins. This function is not recommended, because it needs
accurate kill info.
This patch uses the function computeAndAddLiveIns() to update the
liveins instead; it is the recommended method and fixes the liveins
bug in the ppc-expand-isel pass.
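A sketch of the recommended pattern (computeAndAddLiveIns() is declared in llvm/CodeGen/LivePhysRegs.h):
```cpp
#include "llvm/CodeGen/LivePhysRegs.h"
using namespace llvm;

// Recompute a block's live-in list from its successors' liveness instead
// of stepping forward over instructions, which relies on kill flags.
static void fixupLiveIns(MachineBasicBlock &MBB) {
  LivePhysRegs LPR;
  computeAndAddLiveIns(LPR, MBB);
}
```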
Reviewed By: efriedma, lkail
Differential Revision: https://reviews.llvm.org/D78657
Fix handling of relocations with r_extern == 0.
If r_extern == 0 then r_symbolnum is an index of a section rather than a symbol index.
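A simplified stand-in for the relocation_info layout from <mach-o/reloc.h>, showing the two interpretations of r_symbolnum:
```cpp
#include <cstdint>
#include <iostream>
#include <string>

struct RelocInfo {
  int32_t r_address;
  uint32_t r_symbolnum : 24, // symbol index OR section ordinal
      r_pcrel : 1, r_length : 2,
      r_extern : 1, // selects how r_symbolnum is interpreted
      r_type : 4;
};

std::string describeTarget(const RelocInfo &R) {
  // r_extern == 1: r_symbolnum indexes the symbol table.
  // r_extern == 0: r_symbolnum is a 1-based section ordinal.
  return R.r_extern ? "symbol #" + std::to_string(R.r_symbolnum)
                    : "section #" + std::to_string(R.r_symbolnum);
}

int main() {
  RelocInfo R{};
  R.r_extern = 0;
  R.r_symbolnum = 2; // e.g. the second section in the file
  std::cout << describeTarget(R) << "\n"; // prints "section #2"
  return 0;
}
```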
Patch by Seiya Nuta and Alexander Shaposhnikov.
Test plan: make check-all
Differential revision: https://reviews.llvm.org/D78946
Add llvm.call.preallocated.{setup,arg} intrinsics.
Add "preallocated" operand bundle which takes a token produced by llvm.call.preallocated.setup.
Add "preallocated" parameter attribute, which is like byval but without the copy.
Verifier changes for these IR constructs.
See https://github.com/rnk/llvm-project/blob/call-setup-docs/llvm/docs/CallSetup.md
Differential Revision: https://reviews.llvm.org/D74651
Until recently yaml2obj didn't properly support relocations for MachO.
This behavior resulted in binaries having invalid relocations.
In this diff we adjust the existing tests as follows:
tests that don't actually look at any relocations have their relocations removed,
and tests that essentially depend on relocations have them fixed.
Test plan: make check-all
Differential revision: https://reviews.llvm.org/D78898
The code assumed that zero-extending the integer constant to the
designated alloc size would be fine even for BE targets, but that's not
the case as that pulls in zeros from the MSB side while we actually
expect the padding zeros to go after the LSB.
I've changed the codepath handling constant integers to use the
store size for both small (smaller than u64) and big constants, and to
then add the zero padding right after that.
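A small host-side illustration of the layout difference (the values are hypothetical):
```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  // An i16 constant 0x1234 destined for a 4-byte (alloc-size) slot.
  uint32_t zext = 0x1234; // zero-extended to the alloc size
  unsigned char slot[4];
  std::memcpy(slot, &zext, sizeof(slot));
  // LE host: 34 12 00 00 -- the padding already follows the value bytes.
  // BE host: 00 00 12 34 -- the zeros land on the MSB side, in front of
  // the value, instead of after the two store-size bytes as expected.
  for (unsigned char b : slot)
    std::printf("%02x ", b);
  std::printf("\n");
  return 0;
}
```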
Differential Revision: https://reviews.llvm.org/D78011
like .cfi_restore"
Insert .cfi_offset/.cfi_register when IncomingCSRSaved of current block
is larger than OutgoingCSRSaved of its previous block.
Original commit message:
https://reviews.llvm.org/D42848 only handled CFA related cfi directives but
didn't handle CSR related cfi. The patch adds the CSR part. Basically it reuses
the framework created in D42848. For each basic block, the patch tracks which
CSR set has been saved at its CFG predecessors' exits, and compares that CSR
set with the set at its previous basic block's exit (the previous block is the
block laid out before the current block). If the saved CSR set at the previous
basic block's exit is larger, .cfi_restore will be inserted.
The patch also generates proper .cfi_restore in epilogue to make sure the
saved CSR set is consistent for the incoming edges of each block.
Differential Revision: https://reviews.llvm.org/D74303
All avx512 truncate instructions except vXi64->vXi32 are 2 uops
on port 5, so raise their costs to 2, except when we have an earlier,
faster sequence like pshufb for 128-bit input vectors.
Add a lower cost of 3 for v16i16->v16i8 with avx512f, where we can
extend to v16i32 then truncate, and a cost of 2 for avx512bw with
and without avx512vl, where we can use vpmovwb with either a ymm
or zmm input. Both of these beat masking, splitting, and using
packuswb, which is our avx/avx2 codegen.
The tl;dr story is that this causes jumps in the emitted line
tables, even at `-O0`. We could at some point consider more fancy
solutions to preserve locations, but it doesn't seem to be worth
the effort for now.
<rdar://problem/62460788>
Differential Revision: https://reviews.llvm.org/D78947
Tail Calls were initially disabled for PC Relative code because it was not safe
to make certain assumptions about the tail calls (namely that all compiled
functions no longer used the TOC pointer in R2). However, once all of the
TOC pointer references have been removed it is safe to tail call everything
that was tail called prior to the PC relative additions as well as a number of
new cases.
For example, it is now possible to tail call indirect functions as there is no
need to save and restore the TOC pointer for indirect functions if the caller
is marked as may clobber R2 (st_other=1). For the same reason it is now also
possible to tail call functions that are external.
Differential Revision: https://reviews.llvm.org/D77788
D63847 added `MCInstrAnalysis::evaluateMemoryOperandAddress()`. This patch
leverages the feature to print the target addresses for evaluable instructions.
```
-400a: movl 4080(%rip), %eax
+400a: movl 4080(%rip), %eax # 5000 <data1>
```
This patch also deletes the check `MIA->isCall(Inst) || MIA->isUnconditionalBranch(Inst) || MIA->isConditionalBranch(Inst)`
which was used to guard `MCInstrAnalysis::evaluateBranch()`.
Reviewed By: jhenderson, skan
Differential Revision: https://reviews.llvm.org/D78776
Profile and profile summary are usually read only once and then annotated
on the IR. The profile summary metadata on the IR should include the value of
the newly added partial profile flag, so that compilation phases like the
thinlto postlink can get the full set of profile information.
Differential Revision: https://reviews.llvm.org/D78310
There are some intrinsics like this that currently block tail
predication but should be fine. This allows fma through, as that is the
one I ran into. There may be others that need the same treatment, but
I've only done this one here.
Differential Revision: https://reviews.llvm.org/D78385
The insert(truncate/extend(extract(vec0,c0)),vec1,c1) case in rGacbc5ede99 wasn't combining the 'mineltsize' with the src vector elt size, which may be smaller due to implicit extension during extraction.
Reduced from test case provided by @mstorsjo
When compiling for an armv5te cpu from clang, the +dsp attribute is set.
This meant we could try to generate qadd8 instructions where we would
end up having no pattern. I've changed the condition here to be hasV6Ops
&& hasDSP, which is what other parts of ARMISelLowering seem to use for
similar instructions.
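The new guard amounts to something like this (hasV6Ops() and hasDSP() are real ARMSubtarget predicates; the helper is illustrative):
```cpp
// Within lib/Target/ARM, where the subtarget header is visible:
#include "ARMSubtarget.h"
using namespace llvm;

// QADD8-style saturating ops need ARMv6 in addition to the DSP
// extension; +dsp alone (e.g. on armv5te) has no matching pattern.
static bool canFormQADD8(const ARMSubtarget &ST) {
  return ST.hasV6Ops() && ST.hasDSP();
}
```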
Fixes PR45677.
Differential Revision: https://reviews.llvm.org/D78877