D79003/rG9fa58d1bf2f8 exposed an issue in scalarizeBinOpOfSplats: we were extracting from the splatted vector result instead of the source. The splat index is only valid for the source vector, not the result, which may contain undefs, including at the splat index.
This reverts commit 21dadd774f.
In at least PromoteIntBinOps, they wanted to know about users of *all* values
produced by the node, not just the integer being promoted. For example, not
replacing chain users when the operation was a load breaks the ordering of the
DAG.
This implements a new "Excluded" key that can be used
to exclude entries from the section header table:
```
SectionHeaderTable:
  Sections:
    ...
  Excluded:
    - Name: .foo
```
Differential revision: https://reviews.llvm.org/D81005
Describe a parameter's value loaded by the MIPS ADDiu instruction.
When a parameter's value is loaded into a register by the MIPS ADDiu/DADDiu
instruction, it can now be described correctly and emitted as
DW_AT_GNU_call_site_value.
Patch by Nikola Tesic
Differential revision: https://reviews.llvm.org/D78108
Summary: A bug is reported in bugzilla-45628, where the swap_with_shift case can't be matched to a single HW instruction xxswapd as expected. In fact, the case matches the rotate idiom, but PPC doesn't support ROTL for v1i128.
This is an NFC patch that adds a test for ROTL with v1i128 at master.
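As an illustration of the idiom in question (written here in plain C++ with a 128-bit integer rather than the v1i128 vector type used in the test), rotating by 64 bits simply swaps the two 64-bit halves, which is exactly what a single xxswapd performs:
```
// Rotating a 128-bit value left by 64 bits swaps its two 64-bit halves;
// on PowerPC this maps naturally onto a single xxswapd.
unsigned __int128 swap_with_shift(unsigned __int128 X) {
  return (X << 64) | (X >> 64);
}
```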
Reviewed By: steven.zhang
Differential Revision: https://reviews.llvm.org/D81073
Summary:
Gather the definitions of SDNodeXForm and change them to call C functions
instead of copying C expressions in .td files. Doing this fixed some
bugs in mimm detection.
Differential Revision: https://reviews.llvm.org/D81132
We have a non-obvious issue in the condition that is used to check
that we do not read past the EOF.
The problem is that the result of the "GnuHashTable->nbuckets * 4" expression is uint32.
Because of that, it was still possible to overflow it and pass the check.
There was no such problem with the "GnuHashTable->maskwords * sizeof(typename ELFT::Off)"
condition, because of the `sizeof` on the right (which gives a 64-bit value on x64),
but I've added an explicit conversion to a 64-bit value for `GnuHashTable->maskwords` too.
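A minimal, self-contained sketch of the overflow (the file size, offset, and nbuckets values below are hypothetical, not taken from the actual reader code):
```
#include <cstdint>
#include <cstdio>

int main() {
  // Hypothetical numbers: a 4 KiB file with the bucket array at offset 0x800.
  uint64_t FileSize = 0x1000;
  uint64_t BucketsOffset = 0x800;
  uint32_t NBuckets = 0x40000000; // crafted value from a malformed input

  // "NBuckets * 4" is evaluated in 32 bits and wraps around to 0, so the
  // past-EOF check passes even though the table cannot possibly fit.
  if (BucketsOffset + NBuckets * 4 > FileSize)
    std::puts("rejected (32-bit multiply)");
  else
    std::puts("accepted (32-bit multiply) -- the overflow slipped through");

  // Widening one operand before the multiply avoids the wraparound.
  if (BucketsOffset + uint64_t(NBuckets) * 4 > FileSize)
    std::puts("rejected (64-bit multiply)");
  return 0;
}
```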
Differential revision: https://reviews.llvm.org/D81103
Summary:
The syntax tree test uses a helper function that executes all testing
assertions. When an assertion fails, the only line number that gets
printed to the log refers to the helper function. After this change, we
would also get the line number of the EXPECT_TRUE macro invocation
(unfortunately, the line number of the last token of it, not the first
one, but there's not much I can do about it).
Reviewers: hlopko, eduucaldas
Reviewed By: hlopko, eduucaldas
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D81107
On PowerPC, FNMSUB (both the VSX and non-VSX versions) means -(a*b-c). But
the backend used to generate these instructions regardless of whether the nsz
flag is present or not. If a*b-c==0, such a transformation changes the sign of
zero.
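As a concrete illustration (assuming the FNMSUB is formed from a source expression such as -(a*b) + c, which is one plausible pattern, not necessarily the exact one handled in the backend):
```
#include <cmath>
#include <cstdio>

int main() {
  double a = 1.0, b = 1.0, c = 1.0;

  // The source expression: -(a*b) + c. With a*b == c this is (-1) + 1 = +0.0.
  double Src = -(a * b) + c;

  // What FNMSUB computes: -(a*b - c). The inner subtraction gives +0.0 and
  // the final negation flips it to -0.0, so the sign of zero changes.
  double Fnmsub = -(a * b - c);

  std::printf("%c0 vs %c0\n", std::signbit(Src) ? '-' : '+',
              std::signbit(Fnmsub) ? '-' : '+'); // prints "+0 vs -0"
  return 0;
}
```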
This patch introduces a PPC-specific FNMSUB ISD opcode, which may help
improve the combined FMA code sequence.
Reviewed By: steven.zhang
Differential Revision: https://reviews.llvm.org/D76585
Allow InvokeInst to have a second, optional prof branch weight for
its unwind branch. InvokeInst is a terminator with two successors.
Its unwind branch might be taken many times, and if so
the BranchProbabilityInfo unwind-branch heuristic can be inaccurate.
This patch allows higher accuracy to be calculated with both branch
weights set.
Changes:
- A new section about InvokeInst is added to
the BranchWeightMetadata page. It states the old information that
was missing from the doc and adds new information about the second branch weight.
- The Verifier is changed to allow either 1 or 2 branch weights
for InvokeInst; see the API sketch after this list.
- A new test is written for BranchProbabilityInfo to demonstrate
the main improvement of the simple fix in calcMetadataWeights().
- Several new testcases are created for the Inliner. Those check that
both weights are accounted for in invoke instruction weight
calculation.
- PGOUseFunc::setBranchWeights() is fixed to be applicable to
InvokeInst.
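A rough sketch of what the relaxed Verifier rule permits on the API side (the helper below is illustrative, not code from the patch):
```
#include <cstdint>
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"

using namespace llvm;

// Attach two branch weights to an invoke: the first weight corresponds to
// the normal destination, the second to the unwind destination.
static void setInvokeBranchWeights(InvokeInst &II, uint32_t NormalWeight,
                                   uint32_t UnwindWeight) {
  MDBuilder MDB(II.getContext());
  II.setMetadata(LLVMContext::MD_prof,
                 MDB.createBranchWeights({NormalWeight, UnwindWeight}));
}
```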
Reviewers: davidxl, reames, xur, yamauchi
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80618
In the similar review D81128, Jonas pointed out some style errors that also
apply to D80775 (which has already been committed). This applies the changes
suggested there to this code as well.
Remove the function Instruction::setProfWeight() and make
use of Instruction::copyMetadata(.., {LLVMContext::MD_prof}).
This is correct for all use cases of setProfWeight(), as it
is applied to CallBase instructions only.
This change results in prof metadata being copied intact even if
the source has "VP" data. The old pair of calls,
extractProfTotalWeight() + setProfWeight(), resulted in
setting branch_weights when the source had "VP" data.
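A minimal sketch of the replacement pattern (the helper name is illustrative):
```
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"

using namespace llvm;

// Copy only the !prof metadata from one call site to another. Unlike the
// old extractProfTotalWeight() + setProfWeight() pair, this keeps "VP"
// value-profile records intact instead of reducing them to branch_weights.
static void transferProfMetadata(const CallBase &From, CallBase &To) {
  To.copyMetadata(From, {LLVMContext::MD_prof});
}
```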
Reviewers: yamauchi, davidxl
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80987
Casts and intrinsics are now handled by the default implementation
of getUserCost, so remove them from the backends' switch statements.
https://reviews.llvm.org/D80994
Summary:
Fortran::evaluate::IsConstantExpr did not check that the numerator
was a constant expression. This patch fixes the issue.
Reviewers: DavidTruby, klausler, schweitz, PeteSteinfeld, jdoerfert, sscalpone
Reviewed By: klausler, PeteSteinfeld, sscalpone
Subscribers: llvm-commits
Tags: #llvm, #flang
Differential Revision: https://reviews.llvm.org/D81096
Summary:
Experiments show that inline deferral past the pre-inlining phase slightly
pessimizes performance.
This patch introduces an option to control inline deferral during PGO.
The option defaults to true for now (that is, NFC).
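A minimal sketch of how such a knob is typically declared (the flag name and description below are hypothetical, not the ones added by the patch):
```
#include "llvm/Support/CommandLine.h"

using namespace llvm;

// Hypothetical flag controlling inline deferral during PGO; defaulting to
// true keeps the current behavior (NFC) until the default is flipped.
static cl::opt<bool> EnablePGOInlineDeferral(
    "enable-pgo-inline-deferral", cl::init(true), cl::Hidden,
    cl::desc("Enable inline deferral during the PGO inliner passes"));

bool shouldUseInlineDeferral() { return EnablePGOInlineDeferral; }
```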
Reviewers: davidxl
Reviewed By: davidxl
Subscribers: eraman, hiraditya, haicheng, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80776
I think these are left over from when we used to type legalize
v2f32 loads using bitcast+scalar_to_vec+loadi64 on 64-bit targets.
These days we use loadf64. If this becomes a problem, a better
solution would be a DAG combine to turn it into scalar_to_vec+loadf64.
Summary:
Change to use EXTRACT_SUBREG instead of COPY_TO_REGCLASS in order to
remove unnecessary copy instructions.
Differential Revision: https://reviews.llvm.org/D81129
In an earlier patch I removed the need for
IITDescriptor::ScalableVecArgument, which involved changing
DecodeIITType to pull out the last IIT_Info from the list. However,
it turns out this is unsafe and causes ubsan failures. I've tried to
fix this a different way by simply passing the last IIT_Info as an
additional argument to DecodeIITType.
Differential Revision: https://reviews.llvm.org/D81057
Previously, this would fail if the builtin headers had been "claimed" by
a different module that wraps these builtin headers. libc++ does this,
for example.
This change adds a test demonstrating this situation; the test fails
without the fix.
Summary:
This patch adds support for dumping a .dot
representation of a SelectionDAG. It is motivated by the fact that
a developer may want to just dump the graph to
a predictable path with a simple name for comparison.
The existing utility (i.e. viewGraph) is overkill
for this purpose, hence this patch adds the required support
while using the core routines from GraphWriter.
Example usage: DAG.dumpDotGraph("/tmp/graph.dot", "MyGraph")
will create the file /tmp/graph.dot when DAG is an
object of the SelectionDAG class.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D80711
We introduced the GCStatepointInst class and have migrated almost all users of Statepoint/ImmutableStatepoint to the new API. Given downstream consumers have had a week to migrate, remove code which is now dead.
To do so, I had to sink the old-school inline operand handling into GCStatepointInst, which is not ideal. This code should be removed shortly, and I was able to at least clean it up a bunch.
llvm-cov.test and many Inputs/test* files contain incorrect tests.
This patch rewrites a large portion of these files.
The pre-canned .gcno & .gcda files are replaced by binaries produced by
clang --coverage (compatible with gcov 4.8~7)
(after some GCDAProfiling.c bugs were fixed by my previous commits).
Also make llvm-cov gcov on a little-endian host capable of parsing big-endian .gcno and .gcda files,
and make llvm-cov gcov on a big-endian host capable of parsing little-endian .gcno and .gcda files.
constexpr variables are compile-time constants and implicitly const; therefore
they are safe to emit on both the device and host sides. Besides, in many cases
they are intended for both device and host, therefore it makes sense
to emit them on both device and host sides if necessary.
In most cases constexpr variables are used as rvalues and the variables
themselves do not need to be emitted. However, if their address is taken,
then they need to be emitted.
For C++14, clang is able to handle that since clang emits them with
available_externally linkage together with the initializer.
However, for C++17, constexpr static data members of a class or class template
implicitly become inline variables. Therefore they become definitions with
linkonce_odr or weak_odr linkage. As such, they cannot have available_externally
linkage.
This patch fixes that by adding an implicit constant attribute to
file-scope constexpr variables and constexpr static data members
in device compilation.
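A hypothetical CUDA C++ example of the C++17 case (names are illustrative, not taken from the patch or its tests):
```
// Config::Limit is a constexpr static data member; in C++17 it is
// implicitly an inline variable, so its definition gets linkonce_odr or
// weak_odr linkage rather than available_externally.
struct Config {
  static constexpr int Limit = 42;
};

__device__ const int *DevLimitPtr;

__global__ void kernel() {
  // The address is taken on the device, so Config::Limit itself must be
  // emitted for the device; the implicit constant attribute makes that legal.
  DevLimitPtr = &Config::Limit;
}
```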
Differential Revision: https://reviews.llvm.org/D79237
This patch helps infer the endianness of DWARF sections from `FileHeader`.
Reviewed By: jhenderson, grimar
Differential Revision: https://reviews.llvm.org/D81051
This patch enables yaml2obj to emit the .debug_aranges section in ELFYAML.
Known issues:
- The current implementation of `debug_aranges` doesn't support emitting `segment` in the `(segment, address, length)` tuple. I will fix it in a follow-up patch.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D80972
The AMDGPU lowering for unconstrained G_FDIV sometimes needs to
introduce a mode switch in the middle, so it's helpful to have
constrained instructions available to legalize this. Right now nothing
is preventing reordering of the mode switch with the other
instructions in the expansion.
When we rematerialize a value as part of coalescing, we may
widen the register class of the destination register.
When this happens, updateRegDefUses may create additional subranges
to account for the wider register class.
The created subranges are empty, and if they are not defined by
the rematerialized instruction we clean them up.
However, if they are defined by the rematerialized instruction but
unused, we failed to flag them as dead definitions and would leave
them as empty live-ranges.
This is wrong because empty live-ranges don't interfere with anything;
thus if we don't fix them, we fail to account for the fact that the
rematerialized instruction clobbers some lanes.
E.g., let us consider the following pseudo code:
  def.lane_low64:reg128 = ldimm
  newdef:reg32 = COPY def.lane_low64_low32
When rematerialization happens for newdef, we end up with:
  newdef.lane_low64:reg128 = ldimm
  = use newdef.lane_low64_low32
Let's look at the live interval of newdef.
Before rematerialization, we would get:
  newdef [defIdx, useIdx:0) 0@defIdx
Right after updateRegDefUses, newdef's register class is widened to reg128
and the subrange definitions are augmented to fill the subreg that
is used at the definition point, here lane_low64.
The resulting live interval would be:
  newdef [newDefIdx, useIdx:0) 0@newDefIdx
  * lane_low64_high32 EMPTY
  * lane_low64_low32 [newDefIdx, useIdx:0)
Before this patch, this would be the final status of the live interval.
Therefore we would miss that lane_low64_high32 is actually live at the
definition point of newdef.
With this patch, after rematerializing, we check all the added subranges
and, for the ones that are defined but empty, we flag them as dead defs.
Thus, in that case, newdef would look like this:
  newdef [newDefIdx, useIdx:0) 0@newDefIdx
  * lane_low64_high32 [newDefIdx, newDefIdxDead) ; <-- instead of EMPTY
  * lane_low64_low32 [newDefIdx, useIdx:0)
This fixes https://www.llvm.org/PR46154