Summary:
The JumpThreading pass has several locations where the variable name LI
refers to a LoadInst type. This is confusing and inhibits the ability to use
LI for LoopInfo as a member of the JumpThreading class. Minor formatting
and comments were also altered to reflect this change.
Reviewers: dberlin, kuba, spop, sebpop
Reviewed by: sebpop
Subscribers: sebpop, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D42601
llvm-svn: 323695
We currently emit up to 15-byte NOPs on all targets (apart from Silvermont), which stalls performance on some targets with decoders that struggle with 2 or 3 more '66' prefixes.
This patch flags recent AMD targets (btver1/znver1) to still emit 15-byte NOPs and bdver* targets to emit 11-byte NOPs. All other targets now emit 10-byte NOPs apart from Silvermont CPUs which still emit 7-byte NOPs.
Differential Revision: https://reviews.llvm.org/D42616
llvm-svn: 323693
Summary:
Apparently, we missed constraining the register classes of VReg-operands of all the instructions
built from a destination pattern except the root (top-level) one. The issue exposed itself
while selecting G_FPTOSI for armv7: the corresponding pattern generates VTOSIZS wrapped
into COPY_TO_REGCLASS, so top-level COPY_TO_REGCLASS gets properly constrained,
while nested VTOSIZS (or rather its destination virtual register to be exact) does not.
Fixing this by issuing GIR_ConstrainSelectedInstOperands for every nested GIR_BuildMI.
https://bugs.llvm.org/show_bug.cgi?id=35965
rdar://problem/36886530
Patch by Roman Tereshin
Reviewers: dsanders, qcolombet, rovka, bogner, aditya_nandakumar, volkan
Reviewed By: dsanders, qcolombet, rovka
Subscribers: aemerson, javed.absar, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D42565
llvm-svn: 323692
r323476 added support for DW_FORM_line_strp, and incorrectly made that
depend on having a DWARFUnit available. We shouldn't be tracking
.debug_line_str in DWARFUnit after all. After this patch, I can do an
NFC follow up and undo a bunch of the "plumbing" part of r323476.
Differential Revision: https://reviews.llvm.org/D42609
llvm-svn: 323691
Summary:
It seems its main effect is to create additional copies when values are in registers that do not support this trick, which increases register pressure and makes the code bigger.
The main noteworthy regression I was able to observe was a pattern of the type (setcc (trunc (and X, C)), 0) where C is such that it would benefit from the hi-register trick. To prevent this, a new pattern is added to materialize such a pattern using a 32-bit test. This has the added benefit of working with any constant that is materializable as a 32-bit immediate, not just the ones that can leverage the high-register trick, as demonstrated by the test case in test-shrink.ll using the constant 2049.
Reviewers: craig.topper, niravd, spatel, hfinkel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42646
llvm-svn: 323690
Prior to committing r323681, we decided to change pick() to identity() since
it wasn't clear from the name what pick() did. However, identity() isn't a very
good name either since it implies that no changes are made. For some reason,
naming it changeTo() didn't occur to me until just after the commit. This
should resolve the lack of clarity that pick() had while still implying that
it changes the MIR.
llvm-svn: 323689
Rafael pointed out that `hasInternalLinkage() || hasPrivateLinkage()` is
equivalent to `hasLocalLinkage()` in post-commit review.
I'm intentionally not updating the comment, partly because I like it
being explicit, and partly because "global symbols with local linkage"
sounds like an oxymoron.
llvm-svn: 323688
This reverts commit r322917 due to multiple performance regressions in spec2006
and spec2017. XFAILed llvm/test/CodeGen/AArch64/big-callframe.ll which initially
motivated this change.
llvm-svn: 323683
Summary:
As discussed in D42244, we have difficulty describing the legality of some
operations. We're not able to specify relationships between types.
For example, declaring the following
  setAction({..., 0, s32}, Legal)
  setAction({..., 0, s64}, Legal)
  setAction({..., 1, s32}, Legal)
  setAction({..., 1, s64}, Legal)

currently declares these type combinations as legal:

  {s32, s32}
  {s64, s32}
  {s32, s64}
  {s64, s64}
but we currently have no means to say that, for example, {s64, s32} is
not legal. Some operations such as G_INSERT/G_EXTRACT/G_MERGE_VALUES/
G_UNMERGE_VALUES have relationships between the types that are currently
described incorrectly.
Additionally, G_LOAD/G_STORE currently have no means to legalize non-atomics
differently to atomics. The necessary information is in the MMO but we have no
way to use this in the legalizer. Similarly, there is currently no way for the
register type and the memory type to differ so there is no way to cleanly
represent extending-load/truncating-store in a way that can't be broken by
optimizers (resulting in illegal MIR).
It's also difficult to control the legalization strategy. We've added support
for legalizing non-power-of-2 types but there are still some hardcoded assumptions
about the strategy. The main one I've noticed is that type0 is always legalized
before type1 which is not a good strategy for `type0 = G_EXTRACT type1, ...` if
you need to widen the container. It will converge on the same result eventually
but it will take a much longer route when legalizing type0 than if you legalize
type1 first.
Lastly, the definition of legality and the legalization strategy is kept
separate, which is not ideal. It's helpful to be able to look at one piece of
code and see both what is legal and the method the legalizer will use to make
illegal MIR more legal.
This patch adds a layer onto the LegalizerInfo (to be removed when all targets
have been migrated) which resolves all these issues.
Here are the rules for shift and division:
  for (unsigned BinOp : {G_LSHR, G_ASHR, G_SDIV, G_UDIV})
    getActionDefinitions(BinOp)
        .legalFor({s32, s64})      // If type0 is s32/s64 then it's Legal
        .clampScalar(0, s32, s64)  // If type0 is <s32 then WidenScalar to s32
                                   // If type0 is >s64 then NarrowScalar to s64
        .widenScalarToPow2(0)      // Round type0 scalars up to powers of 2
        .unsupported();            // Otherwise, it's unsupported
This describes everything needed to both define legality and describe how to
make illegal things legal.
Here's an example of a complex rule:
  getActionDefinitions(G_INSERT)
      .unsupportedIf([=](const LegalityQuery &Query) {
        // If type0 is smaller than type1 then it's unsupported
        return Query.Types[0].getSizeInBits() <= Query.Types[1].getSizeInBits();
      })
      .legalIf([=](const LegalityQuery &Query) {
        // If type0 is s32/s64/p0 and type1 is a power of 2 other than 2 or 4
        // then it's legal.
        // We don't need to worry about large type1's because unsupportedIf
        // caught that.
        const LLT &Ty0 = Query.Types[0];
        const LLT &Ty1 = Query.Types[1];
        if (Ty0 != s32 && Ty0 != s64 && Ty0 != p0)
          return false;
        return isPowerOf2_32(Ty1.getSizeInBits()) &&
               (Ty1.getSizeInBits() == 1 || Ty1.getSizeInBits() >= 8);
      })
      .clampScalar(0, s32, s64)
      .widenScalarToPow2(0)
      .maxScalarIf(typeInSet(0, {s32}), 1, s16) // If type0 is s32 and type1 is bigger than s16 then NarrowScalar type1 to s16
      .maxScalarIf(typeInSet(0, {s64}), 1, s32) // If type0 is s64 and type1 is bigger than s32 then NarrowScalar type1 to s32
      .widenScalarToPow2(1)                     // Round type1 scalars up to powers of 2
      .unsupported();
This uses a lambda to say that G_INSERT is unsupported when type0 is not larger than
type1 (in practice, this would be a default rule for G_INSERT). It also uses one
to describe the legal cases. This particular predicate is equivalent to:
  .legalFor({{s32, s1}, {s32, s8}, {s32, s16}, {s64, s1}, {s64, s8}, {s64, s16}, {s64, s32}})
In terms of performance, I saw a slight (~6%) performance improvement when
AArch64 was around 30% ported but it's pretty much break even right now.
I'm going to take a look at constexpr as a means to reduce the initialization
cost.
Future work:
* Make it possible for opcodes to share rulesets. There's no need for
G_LSHR/G_ASHR/G_SDIV/G_UDIV to have separate rule and ruleset objects. There's
no technical barrier to this, it just hasn't been done yet.
* Replace the type-index numbers with an enum to get .clampScalar(Type0, s32, s64)
* Better names for things like .maxScalarIf() (clampMaxScalar?) and the vector rules.
* Improve initialization cost using constexpr
Possible future work:
* It's possible to make these rulesets change the MIR directly instead of
returning a description of how to change the MIR. This should remove a little
overhead caused by parsing the description and routing to the right code, but
the real motivation is that it removes the need for LegalizeAction::Custom.
With Custom removed, there's no longer a requirement that Custom legalization
change the opcode to something that's considered legal.
Reviewers: ab, t.p.northover, qcolombet, rovka, aditya_nandakumar, volkan, reames, bogner
Reviewed By: bogner
Subscribers: hintonda, bogner, aemerson, mgorny, javed.absar, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D42251
llvm-svn: 323681
We now test that pic and static produce different results for bar.
The function names were demangled.
The attributes are written inline.
llvm-svn: 323680
Summary:
Fix a few places that were modifying code after register
allocation to set the renamable bit correctly to avoid failing the
validation added in D42449.
llvm-svn: 323675
Summary:
The improvements to the LegalizerInfo discussed in D42244 require that
LegalizerInfo::LegalizeAction be available for use in other classes. As such,
it needs to be moved out of LegalizerInfo. This has been done separately to the
next patch to minimize the noise in that patch.
llvm-svn: 323669
Microsoft Visual Studio rejects the static constexpr list of
atoms even though it's valid C++. This provides a workaround to unbreak
the bots.
llvm-svn: 323667
Summary:
If the same value is going to be vectorized several times in the same
tree entry, this entry is considered to be a gather entry and the cost of
this gather is counted as the cost of an InsertElementInstr for each gathered
value. But we can instead consider these elements as a ShuffleInstr with the
SK_PermuteSingle shuffle kind.
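Roughly, the change to the cost model amounts to something like the following sketch (the function name and call site are illustrative, not the actual SLPVectorizer code):

  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/DerivedTypes.h"

  // Illustrative sketch only: cost the repeated-element gather as a single
  // single-source shuffle instead of one InsertElement per gathered value.
  static int getGatherAsShuffleCost(const llvm::TargetTransformInfo &TTI,
                                    llvm::VectorType *VecTy) {
    return TTI.getShuffleCost(llvm::TargetTransformInfo::SK_PermuteSingleSrc,
                              VecTy);
  }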
Reviewers: spatel, RKSimon, mkuper, hfinkel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38697
llvm-svn: 323662
MSVC complains that the constexpr "expression did not evaluate to a
constant". Trying to make it happy by adding a `const` specifier as
suggested in https://stackoverflow.com/questions/37574343.
llvm-svn: 323659
This patch adds support for generating accelerator tables in dsymutil.
This feature was already present in our internal repository but not yet
upstreamed because it requires changes to the Apple accelerator table
implementation.
Differential revision: https://reviews.llvm.org/D42501
llvm-svn: 323655
This patch renames DwarfAccelTable.{h,cpp} to AccelTable.{h,cpp} and
moves the header to the include dir so it is accessible by the
dsymutil implementation.
Differential revision: https://reviews.llvm.org/D42529
llvm-svn: 323654
This patch refactors the way data is stored in the accelerator table and
makes them truly generic. There have been several attempts to do this in
the past:
- D8215 & D8216: Using a union and partial hardcoding.
- D11805: Using inheritance.
- D42246: Using a callback.
In the end I didn't like either of them, because for some reason or
another parts of it felt hacky or decreased runtime performance. I
didn't want to completely rewrite them as I was hoping that we could
reuse parts for the successor in the DWARF standard. However, it seems
less and less likely that there will be a lot of opportunities for
sharing code and/or an interface.
Originally I chose to template the whole class, because it introduces
no performance overhead compared to the original implementation.
We ended up settling on a hybrid between a templated method and a
virtual call to emit the data. The motivation is that we don't want to
increase code size for a feature that should soon be superseded by the
DWARFv5 accelerator tables. While the code will continue to be used for
compatibility, it won't be on the hot path. Furthermore this does not
regress performance compared to Apple's internal implementation that
already uses virtual calls for this.
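As a rough illustration of that hybrid (only the AppleAccelTableData name is taken from the patch; the method names and the derived class below are assumed, not the actual interface):

  #include <cstdint>
  namespace llvm { class AsmPrinter; }

  // Sketch: the per-entry payload goes through one virtual call, while the
  // table emission loop itself can stay templated on the concrete data type
  // so the atom layout is known statically.
  class AppleAccelTableData {
  public:
    virtual ~AppleAccelTableData() = default;
    virtual void emit(llvm::AsmPrinter &Asm) const = 0; // assumed signature
  };

  // Hypothetical dsymutil-style variant that only knows a DIE offset.
  class AppleAccelTableOffsetData : public AppleAccelTableData {
    uint32_t DieOffset;
  public:
    explicit AppleAccelTableOffsetData(uint32_t Offset) : DieOffset(Offset) {}
    void emit(llvm::AsmPrinter &Asm) const override {
      // emit a 4-byte DIE offset; the exact AsmPrinter helper is not spelled
      // out here
    }
  };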
A quick summary for why these changes are necessary: dsymutil likes to
reuse the current implementation of the Apple accelerator tables.
However, LLDB expects a slightly different interface than what is
currently emitted. Additionally, in dsymutil we only have offsets and no
actual DIEs.
Although the patch suggests a lot of code has changed, this change is
pretty straightforward:
- We created an abstract class `AppleAccelTableData` to serve as an
interface for the different data classes.
- We created two implementations of this class, one for type tables and
one for everything else. There will be a third one for dsymutil that
takes just the offset.
- We use the supplied class to deduce the atoms for the header, which
makes the structure of the table fully self contained, although not
enforced by the interface as was the case for the fully templated
approach.
- We renamed the prefix from DWARF- to Apple- to make space for the
future implementation of .debug_names.
This change is NFC and relies on the existing tests.
Differential revision: https://reviews.llvm.org/D42334
llvm-svn: 323653
The test was failing because of an incorrect sizeof check in the name
index parsing code. This code was meant to check that we have enough
input to parse the fixed-size part of the dwarf header, which it did by
comparing the input to sizeof(Header). Originally struct Header only
contained the fixed-size part, but during review, we've moved additional
members into it, which rendered the sizeof check invalid.
I resolve this by moving the fixed-size part to a separate struct and
updating the sizeof-expression to use that.
llvm-svn: 323648
Summary:
All variants of isLogicalImm[Not](32|64) can be combined into a single templated function, same for printLogicalImm(32|64).
By making it use a template instead, further SVE patches can use it for other data types as well (e.g. 8, 16 bits).
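The idea, roughly (a simplified sketch rather than the actual AsmParser/InstPrinter code):

  #include <cstdint>
  #include "MCTargetDesc/AArch64AddressingModes.h" // target-internal header

  // Sketch: one template replaces the separate 32-bit and 64-bit variants;
  // the element width is derived from the template parameter.
  template <typename T> bool isLogicalImm(int64_t Val) {
    // Mask the value down to the element width so the check runs on the
    // correctly sized bit pattern.
    uint64_t Mask = sizeof(T) == 8 ? ~0ULL : ((1ULL << (sizeof(T) * 8)) - 1);
    return llvm::AArch64_AM::isLogicalImmediate(
        static_cast<uint64_t>(Val) & Mask, sizeof(T) * 8);
  }

  // e.g. isLogicalImm<int32_t>(Imm) or isLogicalImm<int64_t>(Imm)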
Reviewers: fhahn, rengolin, aadg, echristo, kristof.beyls, samparker
Reviewed By: samparker
Subscribers: aemerson, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D42294
llvm-svn: 323646
Summary:
When emitting the location for a global variable with fragmented debug
expressions, make sure that the offset pieces, which represent
optimized-out parts of the variable, are emitted before their succeeding
fragments' expressions. Previously, if the succeeding fragment's
location was a symbol, the offset piece was emitted after, rather than
before, that symbol's expression. This effectively meant that the symbols
were associated with the wrong parts of the variable.
This fixes PR36085.
Patch by: David Stenberg
Reviewers: aprantl, probinson, dblaikie
Reviewed By: aprantl
Subscribers: JDevlieghere, llvm-commits
Tags: #debug-info
Differential Revision: https://reviews.llvm.org/D42527
llvm-svn: 323644
Summary: This was broken long ago in D12208, which failed to account for
the fact that 64-bit SPARC uses a stack bias of 2047, and it is the
*unbiased* value which should be aligned, not the biased one. This was
seen to be an issue with Rust.
Patch by: jrtc27 (James Clarke)
Reviewers: jyknight, venkatra
Reviewed By: jyknight
Subscribers: jacob_hansen, JDevlieghere, fhahn, fedor.sergeev, llvm-commits
Differential Revision: https://reviews.llvm.org/D39425
llvm-svn: 323643
The call to ScopedPrinter::printNumber with size_t argument was
ambiguous (I think) on 32-bit builds. Explicitly cast to a 64-bit int to
avoid this.
llvm-svn: 323642
Summary:
This modifies the dwarfdump output to align it with the new .debug_names
dump. It also renames two header fields to match similar fields in the
dwarf5 header.
A couple of tests needed to be updated to match new output. The changes
were fairly straight-forward, although not really automatable.
Reviewers: JDevlieghere, aprantl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42415
llvm-svn: 323641
Create and use FP16Pat and FullFP16Pat helper patterns to make the difference
explicit.
Differential Revision: https://reviews.llvm.org/D42634
llvm-svn: 323640
Summary:
This commit renames DWARFAcceleratorTable to AppleAcceleratorTable to free up
the first name as an interface for the different accelerator tables.
Then I add a DWARFDebugNames class for the dwarf5 table.
Presently, the only common functionality of the two classes is the dump()
method, because this is the only method that was necessary to implement
dwarfdump -debug-names; and because the rest of the
AppleAcceleratorTable interface does not directly transfer to the dwarf5
tables (the main reason for that is that the present interface assumes
the tables are homogeneous, but the dwarf5 tables can have different
keys associated with each entry).
I expect to make the common interface richer as I add more functionality
to the new class (and invent a way to represent it in generic way).
In terms of sharing the implementation, I found the format of the two
tables sufficiently different to frustrate any attempts to have common
parsing or dumping code, so presently the implementations share just low
level code for formatting dwarf constants.
Reviewers: vleschuk, JDevlieghere, clayborg, aprantl, probinson, echristo, dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42297
llvm-svn: 323638
The Large System Extension added an atomic compare-and-swap instruction
that operates on a pair of 64-bit registers, which we can use to
implement a 128-bit cmpxchg.
Because i128 is not a legal type for AArch64 we have to do all of the
instruction selection in C++, and the instruction requires even/odd
register pairs, so we have to wrap it in REG_SEQUENCE and EXTRACT_SUBREG
nodes. This is very similar to what we do for 64-bit cmpxchg in the ARM
backend.
Differential revision: https://reviews.llvm.org/D42104
llvm-svn: 323634
Summary:
There's a check in the code to only check getSetCCResultType after LegalOperations or if the type is MVT::i1. But the i1 check is only allowing scalar types through. I think it should check that the scalar type is MVT::i1 so that it will work for vectors.
The changed test already does this combine with AVX512VL where getSetCCResultType returns vXi1. But with avx512f and no VLX getSetCCResultType returns a type matching the width of the input type.
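In other words, the guard changes along these lines (a sketch, not the exact DAGCombiner code):

  #include "llvm/CodeGen/ValueTypes.h"

  // Sketch: accept vector setcc results whose *scalar* element type is i1,
  // instead of only the scalar MVT::i1 type itself.
  static bool canQuerySetCCResultType(bool LegalOperations, llvm::EVT VT) {
    return !LegalOperations || VT.getScalarType() == llvm::MVT::i1;
  }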
Reviewers: spatel, RKSimon
Reviewed By: spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42619
llvm-svn: 323631
This pretty much reverts r322006, except that we keep the test,
because we work around the issue exposed in a different way (a
recursion limit in value tracking). There's still probably some
sequence that exposes this problem, and the proper way to fix that
for somebody who has time is outlined in the code review.
llvm-svn: 323630
This prevents functions accessing varargs from being inlined if they
have the alwaysinline attribute.
Reviewers: efriedma, rnk, davide
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D42556
llvm-svn: 323619
This patch moves the DJB hash to support. This is consistent with other
hashing algorithms living there. The hash is used by the DWARF
accelerator tables. We're doing this now because the hashing function is
needed by dsymutil and we don't want to link against libBinaryFormat.
Differential revision: https://reviews.llvm.org/D42594
llvm-svn: 323616
We can use the same input for both operands to get a free compare with zero.
We already use this trick in a couple places where we explicitly create PTESTM with the same input twice. This generalizes it.
I'm hoping to remove the ISD opcodes and move this to isel patterns like we do for scalar cmp/test.
llvm-svn: 323605
Legalization is still biased to turn LT compares into GT by swapping operands to avoid needing extra isel patterns to commute.
I'm hoping to remove TESTM/TESTNM next and this should simplify that by making EQ/NE more similar.
llvm-svn: 323604
If broadcasting from another shuffle, attempt to simplify it.
We can probably generalize this a lot more (embedding in combineX86ShufflesRecursively), but BROADCAST is one of the more troublesome as it accepts inputs of different sizes to the result.
llvm-svn: 323602
Summary:
This change is step two in the series of changes to remove alignment argument from
memcpy/memmove/memset in favour of alignment attributes. Steps:
Step 1) Remove alignment parameter and create alignment parameter attributes for
memcpy/memmove/memset. ( rL322965 )
Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
source and dest alignments.
Step 3) Update Clang to use the new IRBuilder API.
Step 4) Update Polly to use the new IRBuilder API.
Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API,
and those that use MemIntrinsicInst::[get|set]Alignment() to use
getDestAlignment() and getSourceAlignment() instead.
Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
MemIntrinsicInst::[get|set]Alignment() methods.
Reference
http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html
llvm-svn: 323597
The code was using getValueSizeInBits and combining with the result of a call to DAG.ComputeNumSignBits. But for vector types getValueSizeInBits returns the width of the full vector while ComputeNumSignBits is going to give a number no larger than the width of a single element. So we should be using getScalarValueSizeInBits to get the element width.
llvm-svn: 323583
We weren't converting the immediate ConstantFP during legalization, which caused
the wrong bit patterns to be emitted for half type FP constants.
Fixes PR36106.
llvm-svn: 323582
Previously we had to materialize all 1s in a register using vpternlog or pcmpeq and then xor with that. By using vpternlog directly we can do it in one operation.
This is implemented using isel patterns, but we should maybe consider creating a generalized vpternlog combiner.
llvm-svn: 323572
A cast from A to B is eliminable if its result is casted to C, and if
the pair of casts could just be expressed as a single cast. E.g here,
%c1 is eliminable:
  %c1 = zext i16 %A to i32
  %c2 = sext i32 %c1 to i64
InstCombine optimizes away eliminable casts. This patch teaches it to
insert a dbg.value intrinsic pointing to the final result, so that local
variables pointing to the eliminable result are preserved.
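The mechanism is roughly the following (a sketch of the general idea only; the patch itself inserts a fresh dbg.value, while the helper below just shows one way to keep the variable pointing at the surviving value):

  #include "llvm/IR/DebugInfo.h"
  #include "llvm/IR/IntrinsicInst.h"
  #include "llvm/IR/Metadata.h"

  // Sketch: before OldCast (%c1 above) is erased, re-point its dbg.value
  // users at the replacement value so the local variable stays described.
  static void rewriteDbgValueUsers(llvm::Value &OldCast, llvm::Value &Repl) {
    llvm::SmallVector<llvm::DbgValueInst *, 2> DbgUsers;
    llvm::findDbgValues(DbgUsers, &OldCast);
    for (llvm::DbgValueInst *DVI : DbgUsers)
      DVI->setOperand(0, llvm::MetadataAsValue::get(
                             Repl.getContext(),
                             llvm::ValueAsMetadata::get(&Repl)));
  }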
Differential Revision: https://reviews.llvm.org/D42566
llvm-svn: 323570
A correctly aligned address may happen to be separated into a variable
part and a constant part, where the constant part does not match the
alignment needed in a load/store that uses this address. Such a constant
cannot be used as an immediate offset in an indexed instruction.
When lowering a global address, make sure that if there is an offset
folded into the global, the offset is valid for all uses in load/store
instructions.
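Conceptually, the validity check is along these lines (a simplified sketch; the actual code inspects the alignment required by each load/store user of the address):

  #include <cstdint>

  // Sketch: a constant offset may stay folded into the global only if every
  // memory access using the address still sees an offset compatible with the
  // alignment it requires; otherwise it cannot be a valid immediate offset.
  static bool isOffsetFoldable(int64_t Offset, uint64_t RequiredAlign) {
    return RequiredAlign != 0 &&
           (Offset % static_cast<int64_t>(RequiredAlign)) == 0;
  }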
llvm-svn: 323562
One common source of blocks with no successors is calls to noreturn
functions; we want to preserve pristine registers in case they throw an
exception.
The whole pristine register thing is messy (we should really prefer to
explicitly model registers), but this fills a hole in the model for now.
Fixes https://bugs.llvm.org/show_bug.cgi?id=36073.
Differential Revision: https://reviews.llvm.org/D42509
llvm-svn: 323559
X86ISelLowering.cpp:34130:5: error: return type 'llvm::SDValue' must
match previous return type 'const llvm::SDValue' when lambda expression
has unspecified explicit return type
llvm-svn: 323557
Previously some targets printed their own message at the start of Select to indicate what they were selecting. For the targets that didn't, it means there was no print of the root node before any custom handling in the target executed. So if the target did something custom and never called SelectNodeCommon, no print would be made. For the targets that did print a message in Select, if they didn't custom handle a node SelectNodeCommon would reprint the root node before walking the isel table.
It seems better to just print the message before the call to Select so all targets behave the same. And then remove the root node printing from SelectNodeCommon and just leave a message that says we're starting the table search.
There were also some oddities in blank line behavior. Usually due to a \n after a call to SelectionDAGNode::dump which already inserted a new line.
llvm-svn: 323551
Summary: This is the producer side for DWARF v5 string offsets tables. The reader/consumer
side was committed with r321295. All compile and type units in a module share a
contribution to the string offsets table. Indirect strings use the strx{1,2,3,4} index forms.
Reviewers: dblaikie, aprantl, JDevliegehere
Differential Revision: https://reviews.llvm.org/D42021
llvm-svn: 323546
Summary: This is a simple change to test commit access with.
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42586
llvm-svn: 323544
We currently coalesce v4i32 extracts from all 4 elements to 2 v2i64 extracts + shifts/sign-extends.
This seems to have been added back in the days when we tended to spill vectors and reload scalars, or ended up with repeated shuffles moving everything down to 0'th index. I don't think either of these are likely these days as we have better EXTRACT_VECTOR_ELT and VECTOR_SHUFFLE handling, and the existing code tends to make it very difficult for various vector and load combines.
Differential Revision: https://reviews.llvm.org/D42308
llvm-svn: 323541
Summary:
If the same value is going to be vectorized several times in the same
tree entry, this entry is considered to be a gather entry and the cost of
this gather is counted as the cost of an InsertElementInstr for each gathered
value. But we can instead consider these elements as a ShuffleInstr with the
SK_PermuteSingle shuffle kind.
Reviewers: spatel, RKSimon, mkuper, hfinkel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38697
llvm-svn: 323530
Add support for printing / parsing the addrspace of a MachineMemOperand.
Fixes PR35970.
Differential Revision: https://reviews.llvm.org/D42502
llvm-svn: 323521
- using qualified pointer addrspace in intrinsics class to avoid .f32 mangling
- changed too common atomic mangling to ds
- added missing intrinsics to AMDGPUTTIImpl::getTgtMemIntrinsic
Reviewed by: b-sumner
Differential Revision: https://reviews.llvm.org/D42383
llvm-svn: 323516
load instruction
The function `Thumb1InstrInfo::loadRegFromStackSlot` accepts only the `tGPR`
register class. The function serves to emit a `tLDRspi` instruction and
certainly any subset of the `tGPR` register class is a valid destination of the
load.
Differential revision: https://reviews.llvm.org/D42535
llvm-svn: 323514
This is the groundwork for Armv8.2-A FP16 code generation.
Clang passes and returns _Float16 values as floats, together with the required
bitconverts and truncs etc. to implement correct AAPCS behaviour, see D42318.
We will implement half-precision argument passing/returning lowering in the ARM
backend soon, but for now this means that this:
  _Float16 sub(_Float16 a, _Float16 b) {
    return a + b;
  }

gets lowered to this:

  define float @sub(float %a.coerce, float %b.coerce) {
  entry:
    %0 = bitcast float %a.coerce to i32
    %tmp.0.extract.trunc = trunc i32 %0 to i16
    %1 = bitcast i16 %tmp.0.extract.trunc to half
    <SNIP>
    %add = fadd half %1, %3
    <SNIP>
  }
When FullFP16 is *not* supported, we don't make f16 a legal type, and we get
legalization for "free", i.e. nothing changes and everything works as before.
And also f16 argument passing/returning is handled.
When FullFP16 is supported, we do make f16 a legal type, and have 2 places that
we need to patch up: f16 argument passing and returning, which involves minor
tweaks to avoid unnecessary code generation for some bitcasts.
As a "demonstrator" that this works for the different FP16, FullFP16, softfp
modes, etc., I've added match rules to the VSUB instruction description showing
that we can codegen this instruction from IR, but more importantly, also to
some conversion instructions. These conversions were causing issue before in
the FP16 and FullFP16 cases.
I've also added match rules to the VLDRH and VSTRH descriptions, so that we can
actually compile the entire half-precision sub code example above. This showed
that these loads and stores had the wrong addressing mode specified: AddrMode5
instead of AddrMode5FP16, which turned out not to be implemented at all, so that
has also been added.
This is the minimal patch that shows all the different moving parts. In patch
2/3 I will add some efficient lowering of bitcasts, and in 3/3 I will add the
remaining Armv8.2-A FP16 instruction descriptions.
Thanks to Sam Parker and Oliver Stannard for their help and reviews!
Differential Revision: https://reviews.llvm.org/D38315
llvm-svn: 323512
Type legalization would prevent any i64 operands to the build_vector from existing before we get here. The coverage bots show this code as uncovered.
llvm-svn: 323506
The original autoupgrade for kunpck intrinsics used a bitcasted scalar shift, or, and. This combine would turn this into a concat_vectors. Now the kunpck intrinsics are autoupgraded to a vector shuffle that will become a concat_vectors.
llvm-svn: 323504
This listed all legal 128-bit integer types individually, but since we already know we have a legal type and it's an integer, we can just check is128BitVector.
llvm-svn: 323502
When the pass creates a MOV instruction for the
lea (%base,%index,1), %dst => mov %base,%dst; add %index,%dst
transformation, it should clear the killed flag for base
if base is equal to index.
Otherwise the verifier complains about a use of a killed register in the add instruction.
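Schematically, the fix looks like this when building the replacement instructions (a sketch with assumed surrounding code; the X86 opcode enums come from the target's internal headers):

  #include "llvm/CodeGen/MachineInstrBuilder.h"
  #include "llvm/CodeGen/TargetInstrInfo.h"

  // Sketch: if base and index are the same register, the MOV must not mark it
  // as killed, because the following ADD reads that register again.
  static void emitMovAdd(llvm::MachineBasicBlock &MBB,
                         llvm::MachineBasicBlock::iterator InsertPt,
                         const llvm::DebugLoc &DL,
                         const llvm::TargetInstrInfo &TII, unsigned DstReg,
                         unsigned BaseReg, bool BaseKill, unsigned IndexReg,
                         bool IndexKill) {
    bool SameReg = (BaseReg == IndexReg);
    llvm::BuildMI(MBB, InsertPt, DL, TII.get(llvm::X86::MOV64rr), DstReg)
        .addReg(BaseReg, llvm::getKillRegState(BaseKill && !SameReg));
    llvm::BuildMI(MBB, InsertPt, DL, TII.get(llvm::X86::ADD64rr), DstReg)
        .addReg(DstReg)
        .addReg(IndexReg, llvm::getKillRegState(IndexKill));
  }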
Reviewers: lsaba, zvi, zansari, aaboud
Reviewed By: lsaba
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42522
llvm-svn: 323497
Tests were working on my system because the old correct files were left over
and the new bug was that the output files were not being output at all.
Consequently the tests work on my system but fail on any other system.
This reverts commit r323484.
llvm-svn: 323486
Inserting a dbg.value instruction at the start of a basic block with a
landingpad instruction triggers a verifier failure. We should be OK if
we insert the instruction a bit later.
Speculative fix for the bot failure described here:
https://reviews.llvm.org/D42551
llvm-svn: 323482
Move standard forms from a switch statement to the table of forms;
fill in all the missing ones defined in DWARF v5. I'm guessing at
classifications in a couple of cases where v5 forms aren't actually
supported yet, but whoever adds support for the forms can fix the
classifications as needed.
llvm-svn: 323481
While writing code for input and output formats in llvm-objcopy it became
apparent that there was a code health problem. This change attempts to solve
that problem by refactoring the code to use Reader and Writer objects that can
read in different objects in different formats, convert them to a single shared
internal representation, and then write them to any other representation.
New classes:
Reader: the base class used to construct instances of the internal
representation
Writer: the base class used to write out instances of the internal
representation
ELFBuilder: a helper class for ELFWriter that takes an ELFFile and converts it
to an Object
SectionVisitor: it became necessary to remove writeSection from SectionBase
because, under the new Reader/Writer scheme, it's possible to convert between
ELF Types such as ELF32LE and ELF32BE. This isn't possible with writeSection
because it (dynamically) depends on the underlying section type *and*
(statically) depends on the ELF type. Bad things would happen if the underlying
sections for ELF32LE were used for writing to ELF64BE. To avoid this code smell
(which would have compiled, run, and output some nonsense) I decoupled writing
of sections from a class.
SectionWriter: This is just the ELFT templated implementation of
SectionVisitor. Many classes now have this class as a friend so that the
writing methods in this class can write out private data.
ELFWriter: This is the Writer that outputs to ELF
BinaryWriter: This is the Writer that outputs to Binary
ElfType: Because the ELF Type is not a part of the Object anymore we need a way
to construct the correct default Writer based on properties of the Reader. This
enum just keeps track of the ELF type of the input so it can be used as the
default output type as well.
Object has correspondingly undergone some serious changes as well. It now has
more generic methods for building and manipulating ELF binaries. This interface
makes ELFBuilder easy enough to use and will make the BinaryReader/Builder easy
to create as well. Most changes in this diff are cosmetic and deal with the
fact that a method has been moved from one class to another or a change from a
pointer to a reference. Almost no changes should result in a functional
difference (this is after all a refactor). One minor functional change was made
and the result can be seen in remove-shstrtab-error.test. The fact that it
fails hasn't changed but the error message has changed because that failure is
detected at a later point in the code now (because WriteSectionHeaders is a
property of the ELFWriter *not* a property of the Object). I'd say roughly
80-90% of this code is cosmetically different, 10-19% is different but
functionally the same, and 1-5% is functionally different despite not causing a
change in tests.
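The visitor split described above looks roughly like this (a heavily abridged sketch; only the class names mentioned above come from the patch, the member signatures are assumed):

  // Sketch: writing is decoupled from SectionBase so an object read as one
  // ELF type (e.g. ELF32LE) can be written back out as another (e.g. ELF32BE).
  class Section; // some concrete section kind in the internal representation

  class SectionVisitor {
  public:
    virtual ~SectionVisitor() = default;
    virtual void visit(const Section &Sec) = 0;
  };

  template <class ELFT> class SectionWriter : public SectionVisitor {
  public:
    void visit(const Section &Sec) override {
      // write Sec using ELFT's endianness and word size
    }
  };

  class SectionBase {
  public:
    virtual ~SectionBase() = default;
    virtual void accept(SectionVisitor &Visitor) const = 0; // double dispatch
  };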
Differential Revision: https://reviews.llvm.org/D42222
llvm-svn: 323480
This form is like DW_FORM_strp, but points to .debug_line_str instead
of .debug_str as the string section. It's intended to be used from
the line-table header, and allows string-pooling of directory and
filenames across compilation units.
Differential Revision: https://reviews.llvm.org/D42553
llvm-svn: 323476
Summary:
The intent of this is to allow the code to be used with ThinLTO. In
the thin link phase, a traditional CallGraph cannot be computed even though
all the necessary information (nodes and edges of a call graph) is
available. This is due to the fact that CallGraph class is closely tied
to the IR. This patch first extends GraphTraits to add a CallGraphTraits
graph. This is then used to implement a version of counts propagation
on a generic callgraph.
Reviewers: davidxl
Subscribers: mehdi_amini, tejohnson, llvm-commits
Differential Revision: https://reviews.llvm.org/D42311
llvm-svn: 323475
This patch enables aggressive FMA by default on T99, and provides a -mllvm
option to enable the same on other AArch64 micro-arch's (-mllvm
-aarch64-enable-aggressive-fma).
Test case demonstrating the effects on T99 is included.
Patch by: steleman (Stefan Teleman)
Differential Revision: https://reviews.llvm.org/D40696
llvm-svn: 323474
This patch is an enhancement to propagate dbg.value information when
Phis are created on behalf of LCSSA. I noticed a case where a value
carried across a loop was reported as <optimized out>.
Specifically this case:
  int bar(int x, int y) {
    return x + y;
  }

  int foo(int size) {
    int val = 0;
    for (int i = 0; i < size; ++i) {
      val = bar(val, i); // Both val and i are correct
    }
    return val; // <optimized out>
  }
In the above case, after all of the interesting computation completes
our value is reported as "optimized out." This change will add a
dbg.value to correct this.
This patch also moves the dbg.value insertion routine from
LoopRotation.cpp into Local.cpp, so that we can share it in both places
(LoopRotation and LCSSA).
Patch by Matt Davis!
Differential Revision: https://reviews.llvm.org/D42551
llvm-svn: 323472
Right now clang uses "_n" suffix for some user space callbacks and "N" for the matching kernel ones. There's no need for this and it actually breaks kernel build with inline instrumentation. Use the same callback names for user space and the kernel (and also make them consistent with the names GCC uses).
Patch by Andrey Konovalov.
Differential Revision: https://reviews.llvm.org/D42423
llvm-svn: 323470
The asm parser puts the lock prefix in the MCInst flags so we need to check that in addition to TSFlags. This matches what the ATT printer does.
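The check ends up being roughly the following (a sketch; IP_HAS_LOCK comes from the X86 target's MCTargetDesc headers and the surrounding printer code is omitted):

  #include "llvm/MC/MCInst.h"
  #include "llvm/Support/raw_ostream.h"

  // Sketch: also emit "lock" when the asm parser recorded the prefix in the
  // MCInst flags, not only when the opcode's TSFlags already imply it.
  static void printLockPrefixIfNeeded(const llvm::MCInst &MI,
                                      llvm::raw_ostream &OS) {
    if (MI.getFlags() & llvm::X86::IP_HAS_LOCK)
      OS << "\tlock\t";
  }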
llvm-svn: 323469
It was reverted after buildbot regressions.
Original commit message:
This allows relative block frequency of call edges to be passed
to the thinlink stage where it will be used to compute synthetic
entry counts of functions.
llvm-svn: 323460
It looks like this hasn't been updated since bugzilla moved.
Patch by Colden Cullen.
Differential Revision: https://reviews.llvm.org/D42496
llvm-svn: 323457
This brings it in line with std::optional. My recent changes to
make Optional of trivial types trivially copyable introduced
diverging behavior depending on the type, which is bad. Now all
types have the same moving behavior.
llvm-svn: 323445
Summary:
If the same value is going to be vectorized several times in the same
tree entry, this entry is considered to be a gather entry and the cost of
this gather is counted as the cost of an InsertElementInstr for each gathered
value. But we can instead consider these elements as a ShuffleInstr with the
SK_PermuteSingle shuffle kind.
Reviewers: spatel, RKSimon, mkuper, hfinkel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38697
llvm-svn: 323441
This is guarded by shouldChangeType(), so the tests show that
we don't do the fold if the narrower type is not legal. Note
that there is a proposal (D42424) that would change the results
for the specific cases shown in these tests. That difference is
also discussed in PR35792:
https://bugs.llvm.org/show_bug.cgi?id=35792
Alive proofs for the cases handled here as well as the bitwise
logic binops that we should already do better on:
https://rise4fun.com/Alive/c97
https://rise4fun.com/Alive/Lc5E
https://rise4fun.com/Alive/kdf
llvm-svn: 323437
Summary:
If the same value is going to be vectorized several times in the same
tree entry, this entry is considered to be a gather entry and the cost of
this gather is counted as the cost of an InsertElementInstr for each gathered
value. But we can instead consider these elements as a ShuffleInstr with the
SK_PermuteSingle shuffle kind.
Reviewers: spatel, RKSimon, mkuper, hfinkel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38697
llvm-svn: 323430
computeDeadSymbols accessed isLive() which was not public
before. It does not make much sense to keep isLive() private
because flags are available via flags() public member anyways.
llvm-svn: 323415
This patch extends the atom types used by the Apple accelerator tables
with two dsymutil extensions:
- DW_ATOM_type_type_flags
- DW_ATOM_qual_name_hash
llvm-svn: 323414
Summary:
When creating the debug fragments for a SRA'd struct, use the fields'
offsets, taken from the struct layout, as the offsets for the resulting
fragments. This fixes an issue where GlobalOpt would emit fragments with
incorrect offsets for padded fields.
This should solve PR36016.
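In terms of the IR-level APIs, the fragment geometry now comes from the struct layout rather than from summing field sizes, roughly like this (a sketch, not the exact GlobalOpt code):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/DerivedTypes.h"

  // Sketch: a fragment for field Idx of STy starts at the field's layout
  // offset (which accounts for padding), not at the running sum of the
  // preceding fields' sizes.
  static void getFragmentGeometry(const llvm::DataLayout &DL,
                                  llvm::StructType *STy, unsigned Idx,
                                  uint64_t &OffsetInBits,
                                  uint64_t &SizeInBits) {
    const llvm::StructLayout *SL = DL.getStructLayout(STy);
    OffsetInBits = SL->getElementOffsetInBits(Idx);
    SizeInBits = DL.getTypeSizeInBits(STy->getElementType(Idx));
  }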
Patch by David Stenberg.
Reviewers: aprantl
Reviewed By: aprantl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42489
llvm-svn: 323411
The regular expressions and the imul names caused some instructions to be matched by multiple regexs creating unpredictable results.
This changes them all to use explicit instrs instead.
While doing this I also found that some instructions in Skylake were missing load latency so I fixed that too.
llvm-svn: 323406
The IMUL instruction names mixed with the prefix matching of the instregex lead to some strange matches. The worst being that several memory instructions are using the register form latency.
I don't know what the right answer is, so I've left TODOs and will try to work with the AMD folks to get this cleaned up.
llvm-svn: 323405
Set cmake policy CMP0068=NEW, if available, and set
"CMAKE_BUILD_WITH_INSTALL_NAME_DIR=On" globally to
maintain current behavior.
This is needed to suppress warnings on OSX starting with cmake version
3.9.6.
Differential Revision: https://reviews.llvm.org/D42463
llvm-svn: 323404
MMX instructions all start with MMX_ so the 64 isn't needed for disambiguation.
SSE/AVX1 instructions are assumed 128-bit so we don't need to say 128.
AVX2 instructions should use a Y to indicate 256-bits.
llvm-svn: 323402
These were treated as optional suffixes, but the regular expressions are already prefix matches so this is unnecessary. It breaks the binary search optimization in tablegen due to the top level question mark.
llvm-svn: 323401
first argument.
This makes lookupFlags more consistent with lookup (which takes the query as the
first argument) and composes better in practice, since lookups are usually
linearly chained: Each lookupFlags can populate the result map based on the
symbols not found in the previous lookup. (If the maps were returned rather than
passed by reference there would have to be a merge step at the end).
llvm-svn: 323398
https://reviews.llvm.org/D41373
The various components are
GICombinerHelper contains transformations that are common to all
targets. Targets can pick and choose which transformations (at
function/opcode granularity) each pass uses via configuring a
GICombinerInfo.
GICombiner contains some common code and it does the traversal,
driving of combines, worklist management and iterating until
convergence.
GICombinerInfo is an interface with a virtual method called combine.
The combiner info will allow targets to pick and choose (or
implement their own specific combines). CombineInfos can make
use of available combines in GICombineHelper to configure the
transformations for a particular pass. Currently this approach allows
cherry picking transformations from helpers (at function/opcode
granularity) and also allows early returning on specific
transformations. Targets also get to prioritize whether target specific
combines run before/after the opt-in generic combines. Ideally we would
like this part to be configured by both C++ and Tablegen. The
CombinerInfo also has a field which indicates how to deal with
IllegalOps (i.e., should we allow creating them, or legalize them?).
A CombinerPass would configure a CombinerInfo, create the GICombiner
with the Info, and call
GICombiner::combineMachineInstrs(MachineFunction&).
This organization is very similar to the GISelLegalizer.
llvm-svn: 323392
Collected statistics for the number of patterns emitted can be
incorrect because rules can be grouped if OptimizeMatchTable
is enabled. Increase the counter in RuleMatcher::emit(...)
to avoid that.
llvm-svn: 323391
functions/methods that return JITSymbols.
lookupFlagsWithLegacyFn takes a SymbolNameSet and a legacy lookup function and
returns a LookupFlagsResult. It uses the legacy lookup function to search for
each symbol. If found, getFlags is called on the symbol and the flags added to
the SymbolFlags map. If not found, the symbol is added to the SymbolsNotFound
set.
lookupWithLegacyFn takes an AsynchronousSymbolQuery, a SymbolNameSet and a
legacy lookup function. Each symbol in the SymbolNameSet is searched for via the
legacy lookup function. If it is found, its getAddress function is called
(triggering materialization if it has not happened already) and the resulting
mapping stored in the query. If it is not found the symbol is added to the
unresolved symbols set which is returned at the end of the function. If an
error occurs during legacy lookup or materialization it is passed to the
query via setFailed and the function returns immediately.
llvm-svn: 323388
This is a bit of a hack, but removes a cycle that broke modular builds
of LLVM. Of course the cycle is still there in form of a dependency
on the .def file.
llvm-svn: 323383
The only part of the datalayout that should matter for these tests
is the part that specifies the legal int widths ('n*'). But there
was a bug - that part of the string was not correctly separated with
the expected '-' character, so we were testing as if there were no
legal int widths at all. Removed the leading cruft so we have some
legal ints to test with.
I noticed this while testing a potential change to the way we
transform shifts and sexts in D42424.
llvm-svn: 323377
This patch adds a LambdaSymbolResolver convenience utility that can create an
orc::SymbolResolver from a pair of function objects that supply the behavior for
the lookupFlags and lookup methods.
This class plays the same role for orc::SymbolResolver as the legacy
LambdaResolver class plays for LegacyJITSymbolResolver, and will replace the
latter class once all ORC APIs are migrated to orc::SymbolResolver.
This patch also adds some documentation for the orc::SymbolResolver class as
this was left out of the original commit.
llvm-svn: 323375
The code in EmitFunctionEntryCode needs to know the maximum stack
alignment, but it runs very early in the selection process (before
lowering). The final stack alignment may change during lowering, so
the code needs to be moved to where the alignment is known.
llvm-svn: 323374
The tablegen imported patterns for sext(load(a)) don't check for single uses
of the load or delete the original after matching. As a result two loads are
left in the generated code. This particular issue will be fixed by adding
support for a G_SEXTLOAD opcode in future.
There are however other potential issues around this that wouldn't be fixed by
a G_SEXTLOAD, so until we have a proper solution we don't try to handle volatile
loads at all in the AArch64 selector.
Fixes/works around PR36018.
llvm-svn: 323371
Apparently checking the pass structure isn't enough to ensure that we don't fall
back to FastISel, as it's set up as part of the SelectionDAGISel.
llvm-svn: 323369
As discussed in D41484, PMADDWD for 'zero extended' vXi32 is nearly always a better option than PMULLD:
On SNB it will result in code that isn't any faster, but not any slower so we may as well keep it.
On KNL it only has half the throughput, so I've disabled it on there - ideally there'd be a better way than this.
Differential Revision: https://reviews.llvm.org/D42258
llvm-svn: 323367
Summary:
Move reserveRegisterTuples into AMDGPURegisterInfo and use it in
R600RegisterInfo::getReservedRegs and
R600InstrInfo::reserveIndirectRegisters to ensure that all super
registers of reserved registers are also marked as reserved.
Before this change, under certain circumstances, the registers %t1_x and
%t1_xyzw would be marked as reserved, but %t1_xy and %t1_xyz would not
be, leading to the register allocator sometimes assigning a register to
%t1_xy, which is invalid since %t1_x is reserved.
Reviewers: arsenm, tstellar, MatzeB, qcolombet
Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D42448
llvm-svn: 323356
It causes regressions in various OpenGL test suites.
Keep the test cases introduced by r321751 as XFAIL, and add a test case
for the regression.
Change-Id: I90b4cc354f68cebe5fcef1f2422dc8fe1c6d3514
Bugzilla: https://bugs.llvm.org/show_bug.cgi?id=36015
llvm-svn: 323355
Summary: For long shifts, the inlined version takes about 20 instructions on Thumb1. To avoid the code bloat, expand to __aeabi_ calls if target is Thumb1.
Reviewers: samparker
Reviewed By: samparker
Subscribers: samparker, aemerson, javed.absar, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D42401
llvm-svn: 323354
The regexs are treated as a prefix match already so the checking for optional text at the end provides no value. Instead it prevents the binary search optimization in tablegen from kicking in due to the top level question mark.
llvm-svn: 323351
Summary:
This allows relative block frequency of call edges to be passed to the
thinlink stage where it will be used to compute synthetic entry counts
of functions.
Reviewers: tejohnson, pcc
Subscribers: mehdi_amini, llvm-commits, inglorion
Differential Revision: https://reviews.llvm.org/D42212
llvm-svn: 323349
Summary:
If the same value is going to be vectorized several times in the same
tree entry, this entry is considered to be a gather entry and the cost of
this gather is counted as the cost of an InsertElementInstr for each gathered
value. But we can instead consider these elements as a ShuffleInstr with the
SK_PermuteSingle shuffle kind.
Reviewers: spatel, RKSimon, mkuper, hfinkel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38697
llvm-svn: 323348
Summary:
If any vector divisor element is undef, we can arbitrarily choose it to be
zero which would make the div/rem an undef value by definition.
Reviewers: spatel, reames
Reviewed By: spatel
Subscribers: magabari, llvm-commits
Differential Revision: https://reviews.llvm.org/D42485
llvm-svn: 323343
Summary:
`getAction(const InstrAspect &) const` breaks encapsulation by exposing
the smaller components that are used to decide how to legalize an
instruction.
This is a problem because we need to change the implementation of
LegalizerInfo so that it's able to describe particular type combinations
rather than just cartesian products of types.
For example, declaring the following
  setAction({..., 0, s32}, Legal)
  setAction({..., 0, s64}, Legal)
  setAction({..., 1, s32}, Legal)
  setAction({..., 1, s64}, Legal)

currently declares these type combinations as legal:

  {s32, s32}
  {s64, s32}
  {s32, s64}
  {s64, s64}
but we currently have no means to say that, for example, {s64, s32} is
not legal. Some operations such as G_INSERT/G_EXTRACT/G_MERGE_VALUES/
G_UNMERGE_VALUES have relationships between the types that are currently
described incorrectly.
Additionally, G_LOAD/G_STORE currently have no means to legalize non-atomics
differently to atomics. The necessary information is in the MMO but we have no
way to use this in the legalizer. Similarly, there is currently no way for the
register type and the memory type to differ so there is no way to cleanly
represent extending-load/truncating-store in a way that can't be broken by
optimizers (resulting in illegal MIR).
This patch introduces LegalityQuery which provides all the information
needed by the legalizer to make a decision on whether something is legal
and how to legalize it.
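The shape of the query is roughly the following (an abridged sketch; only the Opcode and Types members are shown, and the memory-operand information mentioned above is omitted):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/Support/LowLevelTypeImpl.h"

  // Abridged sketch: rules receive the opcode and all type indices at once,
  // so a predicate can relate type0 and type1 to each other, which the old
  // (opcode, type index, type) triple could not express.
  struct LegalityQuery {
    unsigned Opcode;
    llvm::ArrayRef<llvm::LLT> Types;
    // MMO/atomicity details for G_LOAD/G_STORE are also meant to be exposed here.
  };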
Reviewers: ab, t.p.northover, qcolombet, rovka, aditya_nandakumar, volkan, reames, bogner
Reviewed By: bogner
Subscribers: bogner, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D42244
llvm-svn: 323342
This allows us to specify the magic number for the DJB hash function.
This feature is needed by dsymutil to emit Apple types accelerator
table.
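The interface shape this leads to is essentially a hash function with an explicit running-hash parameter (a sketch of the declaration; the default 5381 is the standard DJB seed):

  #include "llvm/ADT/StringRef.h"
  #include <cstdint>

  // Sketch: callers such as dsymutil can pass the Apple-types magic initial
  // value instead of relying on the default DJB seed.
  uint32_t djbHash(llvm::StringRef Buffer, uint32_t H = 5381);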
llvm-svn: 323341
This is needed in order to use our StringPool entries in the Apple
accelerator tables.
As this is NFC we rely on the existing tests for correctness.
llvm-svn: 323339
We're getting bug reports:
https://bugs.llvm.org/show_bug.cgi?id=35807
https://bugs.llvm.org/show_bug.cgi?id=35840
https://bugs.llvm.org/show_bug.cgi?id=36045
...where we blow up the stack in value tracking because other passes are sending
in selects that have an operand that is itself the select.
We don't currently have a reliable way to avoid analyzing dead code that may take
non-standard forms, so bail out when things go too far.
This mimics the recursion depth limitations in other parts of value tracking.
Unfortunately, this pushes the underlying problems for other passes (jump-threading,
simplifycfg, correlated-propagation) into hiding. If someone wants to uncover those
again, the first draft of this patch on Phab would do that (it would assert rather
than bail out).
Differential Revision: https://reviews.llvm.org/D42442
llvm-svn: 323331
Summary:
Loads/stores of some NEON vector types are promoted to other vector
types with different lane sizes but same vector size. This is not a
problem in little-endian but, when in big-endian, it requires
additional byte reversals required to preserve the lane ordering
while keeping the right endianness of the data inside each lane.
For example:
  %1 = load <4 x half>, <4 x half>* %p
results in the following assembly:
  ld1 { v0.2s }, [x1]
  rev32 v0.4h, v0.4h
This patch changes the promotion of these loads/stores so that the
actual vector load/store (LD1/ST1) takes care of the endianness
correctly and there is no need for further byte reversals. The
previous code now results in the following assembly:
  ld1 { v0.4h }, [x1]
Reviewers: olista01, SjoerdMeijer, efriedma
Reviewed By: efriedma
Subscribers: aemerson, rengolin, javed.absar, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D42235
llvm-svn: 323325
Summary:
This patch implements the codegen of DWARF debug info for non-constant
'count' fields for DISubrange.
This is patch [2/3] in a series to extend LLVM's DISubrange Metadata
node to support debugging of C99 variable length arrays and vectors with
runtime length like the Scalable Vector Extension for AArch64. It is
also a first step towards representing more complex cases like arrays
in Fortran.
Reviewers: echristo, pcc, aprantl, dexonsmith, clayborg, kristof.beyls, dblaikie
Reviewed By: aprantl
Subscribers: fhahn, aemerson, rengolin, JDevlieghere, llvm-commits
Differential Revision: https://reviews.llvm.org/D41696
llvm-svn: 323323
Combine expression patterns to form expressions with fewer, simple instructions.
This pass does not modify the CFG.
For example, this pass reduces the width of expressions post-dominated by a TruncInst
into a smaller width when applicable.
It differs from the instcombine pass in that it contains pattern optimizations that
require higher complexity than O(1); thus, it should run fewer times than
the instcombine pass.
Differential Revision: https://reviews.llvm.org/D38313
llvm-svn: 323321
Summary:
This patch extends the DISubrange 'count' field to take either a
(signed) constant integer value or a reference to a DILocalVariable
or DIGlobalVariable.
This is patch [1/3] in a series to extend LLVM's DISubrange Metadata
node to support debugging of C99 variable length arrays and vectors with
runtime length like the Scalable Vector Extension for AArch64. It is
also a first step towards representing more complex cases like arrays
in Fortran.
Reviewers: echristo, pcc, aprantl, dexonsmith, clayborg, kristof.beyls, dblaikie
Reviewed By: aprantl
Subscribers: rnk, probinson, fhahn, aemerson, rengolin, JDevlieghere, llvm-commits
Differential Revision: https://reviews.llvm.org/D41695
llvm-svn: 323313
For the included test case, the DAG transformation
concat_vectors(scalar, undef) -> scalar_to_vector(sclr)
would attempt to create a v2i32 vector for a v9i8
concat_vector. Bail out to avoid creating a bitcast with
mismatching sizes later on.
Differential Revision: https://reviews.llvm.org/D42379
llvm-svn: 323312
Summary:
This is the first attempt to write down a guideline on adding exception handling support for a target. The content is largely based on the discussion in [1]. If you know of an exception handling expert, please add them as a reviewer. Thanks.
[1] http://lists.llvm.org/pipermail/llvm-dev/2018-January/120405.html
Reviewers: t.p.northover, theraven, nemanjai
Reviewed By: theraven
Subscribers: sdardis, llvm-commits
Differential Revision: https://reviews.llvm.org/D42178
llvm-svn: 323311
This patch removes assert that SCEV is able to prove that a value is
non-negative. In fact, SCEV can sometimes be unable to do this because
its cache does not update properly. The assert will be restored once this
problem is resolved.
llvm-svn: 323309
This matches what MSVC does for alloca() function calls on ARM.
Even if MSVC doesn't support VLAs at the language level, it does
support the alloca function.
On the clang level, both the _alloca() (when emulating MSVC, which is
what the alloca() function expands to) and __builtin_alloca() builtin
functions, and VLAs, map to the same LLVM IR "alloca" function - so
within LLVM they're not distinguishable from each other.
Differential Revision: https://reviews.llvm.org/D42292
llvm-svn: 323308
Merging such globals loses the dllexport attribute. Add a test
to check that normal globals still are merged.
Differential Revision: https://reviews.llvm.org/D42127
llvm-svn: 323307
I think these instructions used to be named differently and the regular expression reflected that. I guess we must have correct itinerary information that made this not matter for the scheduler test?
llvm-svn: 323305
There are a couple tricky things with this patch.
I had to add an override of isVectorLoadExtDesirable to stop DAG combine from combining sign_extend with loads after legalization since we legalize sextload using a load+sign_extend. Overriding this hook actually prevents a lot of sextloads from being created in the first place.
I also had to add isel patterns because DAG combine blindly combines sign_extend+truncate to a smaller sign_extend which defeats what legalization was trying to do.
Differential Revision: https://reviews.llvm.org/D42407
llvm-svn: 323301
Summary:
Currently, there are 2 ways to verify a DomTree:
* `DT.verify()` -- runs full tree verification and checks all the properties and gives a reason why the tree is incorrect. This is run by when EXPENSIVE_CHECKS are enabled or when `-verify-dom-info` flag is set.
* `DT.verifyDominatorTree()` -- constructs a fresh tree and compares it against the old one. This does not check any other tree properties (DFS number, levels), nor ensures that the construction algorithm is correct. Used by some passes inside assertions.
This patch tries to close the gap between the two ways of checking trees by introducing 3 DomTree verification levels:
- Full -- checks all properties, but can be slow (O(N^3)). Used when manually requested (e.g. `assert(DT.verify())`) or when `-verify-dom-info` is set.
- Basic -- checks all properties except the sibling property, and compares the current tree with a freshly constructed one instead. This should catch almost all errors, but does not guarantee that the construction algorithm is correct. Used when EXPENSIVE checks are enabled.
- Fast -- checks only basic properties (reachability, dfs numbers, levels, roots), and compares with a fresh tree. This is meant to replace the legacy `DT.verifyDominatorTree()` and in my tests doesn't cause any noticeable performance impact even in the most pessimistic examples.
When used to verify dom tree wrapper pass analysis on sqlite3, the 3 new levels make `opt -O3` take the following amount of time on my machine:
- no verification: 8.3s
- `DT.verify(VerificationLevel::Fast)`: 10.1s
- `DT.verify(VerificationLevel::Basic)`: 44.8s
- `DT.verify(VerificationLevel::Full)`: 1m 46.2s
(and the previous `DT.verifyDominatorTree()` is within the noise of the Fast level)
This patch makes `DT.verifyDominatorTree()` pick between the 3 verification levels depending on EXPENSIVE_CHECKS and `-verify-dom-info`.
Reviewers: dberlin, brzycki, davide, grosser, dmgreen
Reviewed By: dberlin, brzycki
Subscribers: MatzeB, llvm-commits
Differential Revision: https://reviews.llvm.org/D42337
llvm-svn: 323298
This is already a simplification, and should help with avoiding a plt
reference when calling an intrinsic with -fno-plt.
With this change we return false for null GVs, so the caller only
needs to check the new metadata to decide if it should use foo@plt or
*foo@got.
llvm-svn: 323297
https://reviews.llvm.org/D42402
A lot of these copies are useless (copies b/w VRegs having the same
regclass) and should be cleaned up.
llvm-svn: 323291
Remove FeatureSlowMisaligned128Store from cyclone flags.
This flag causes splitting of 16-byte wide stores into 2 stores of 8
bytes. This was useful on older Apple CPUs which were slow for 16-byte
stores that were not aligned on 16 bytes. As the compiler often cannot
predict the actual alignment, the splitting was chosen.
This has been a topic for a lot of debate as the splitting also
decreases performance for some benchmarks. Measuring the effects on
newer apple chips (rdar://35525421) shows that it harms more cases than
it helps. So it is time to retire this workaround.
llvm-svn: 323289