Bugpoint functionality needs extending due to patch D29185 by bd1976llvm (Allow llvm's build and test systems to support paths with spaces), which requires bugpoint to accept spaces within ‘--compile-command’ tokens.
Details
Bugpoint uses the ‘--compile-command’ argument to pass in a command line as a single string, which the ‘lexCommand’ function tokenizes using spaces as delimiters. Patch D29185 will cause the unit test compile-custom.ll to fail, since spaces are then needed both within tokens and as delimiters. This patch allows the use of escape characters, as below (a small sketch of the resulting tokenization follows these rules):
Two consecutive '\' evaluate to a single '\'.
A space after a '\' evaluates to a space that is not interpreted as a delimiter.
Any other instances of the '\' character are removed.
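A minimal sketch of an escape-aware tokenizer following these rules
(illustrative only, not bugpoint's actual lexCommand implementation):

  #include <string>
  #include <vector>

  // Illustrative sketch only: split on unescaped spaces, honouring the
  // escape rules described above.
  static std::vector<std::string> tokenize(const std::string &Cmd) {
    std::vector<std::string> Tokens;
    std::string Tok;
    for (size_t I = 0; I < Cmd.size(); ++I) {
      char C = Cmd[I];
      if (C == '\\') {
        if (I + 1 < Cmd.size() && (Cmd[I + 1] == '\\' || Cmd[I + 1] == ' '))
          Tok += Cmd[++I]; // "\\" -> '\'; "\ " -> space kept in the token
        // any other backslash is simply dropped
        continue;
      }
      if (C == ' ') {      // an unescaped space ends the current token
        if (!Tok.empty()) {
          Tokens.push_back(Tok);
          Tok.clear();
        }
        continue;
      }
      Tok += C;
    }
    if (!Tok.empty())
      Tokens.push_back(Tok);
    return Tokens;
  }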
Committed on behalf of Owen Reynolds
Differential revision: https://reviews.llvm.org/D29940
llvm-svn: 296763
This re-applies r289696, which had caused a TSan performance regression
that has since been addressed in separate changes (see PR31382 for
details).
llvm-svn: 296759
The CallingConv.td rules allocate 8 bytes for these kinds of arguments
on AAPCS targets, but we were only recording the smaller amount. The
difference is theoretical on AArch64 because we don't actually store
more than the smaller amount, but it's still much better to have these
two components in agreement.
Based on Diana Picus's ARM equivalent patch (where it matters a lot
more).
llvm-svn: 296754
Summary:
When InstCombine is optimizing certain select-cmp-br patterns
it replaces the result of the select in uses outside of the
basic block containing the select. This is only legal if the
path from the select to the outside use is disjoint from all
other paths out from the originating basic block.
The problem found was that InstCombiner::replacedSelectWithOperand
did not consider the case when both edges out from the br pointed
to the same label. In that case the paths aren't disjoint and the
transformation is illegal. This patch avoids the faulty rewrites
by verifying that there is a single flow to the successor where
we want to replace uses.
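A minimal sketch of the kind of check this implies (illustrative, not the
exact code added by the patch):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/CFG.h"

  // Illustrative only: the rewrite is safe only if exactly one edge out
  // of BB leads to Succ; two edges to the same label mean the paths are
  // not disjoint.
  static bool hasSingleFlowTo(llvm::BasicBlock *BB, llvm::BasicBlock *Succ) {
    unsigned Edges = 0;
    for (llvm::BasicBlock *S : llvm::successors(BB))
      if (S == Succ)
        ++Edges;
    return Edges == 1;
  }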
Reviewers: llvm-commits, spatel, majnemer
Differential Revision: https://reviews.llvm.org/D30455
llvm-svn: 296752
After r296750, we're able to match interleaved accesses having types wider than
128 bits. This patch updates the associated TTI costs.
Differential Revision: https://reviews.llvm.org/D29675
llvm-svn: 296751
This patch teaches (ARM|AArch64)ISelLowering.cpp to match illegal vector types
to interleaved access intrinsics as long as the types are multiples of the
vector register width. A "wide" access will now be mapped to multiple
interleave intrinsics similar to the way in which non-interleaved accesses with
illegal types are legalized into multiple accesses. I'll update the associated
TTI costs (in getInterleavedMemoryOpCost) as a follow-on.
Differential Revision: https://reviews.llvm.org/D29466
llvm-svn: 296750
When computing the smallest and largest types for selecting the maximum
vectorization factor, we currently ignore loads and stores of pointer types if
the memory access is non-consecutive. We do this because such accesses must be
scalarized regardless of vectorization factor, and thus shouldn't be considered
when determining the factor. This patch makes this check less aggressive by
also considering non-consecutive accesses that may be vectorized, such as
interleaved accesses. Because we don't know at the time of the check
whether an access will actually be vectorized (this is a cost model
decision given a particular VF), we consider all accesses that can
potentially be vectorized.
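Conceptually, the filter changes roughly as in this sketch (the names are
illustrative, not the actual LoopVectorize API):

  // Illustrative only: a pointer-typed, non-consecutive access is now
  // ignored for min/max type sizing only if it also has no chance of
  // being vectorized, e.g. as part of an interleaved group.
  static bool ignoreForTypeSizing(bool IsPtrTy, bool IsConsecutive,
                                  bool MayBeVectorized) {
    return IsPtrTy && !IsConsecutive && !MayBeVectorized;
  }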
Differential Revision: https://reviews.llvm.org/D30305
llvm-svn: 296747
If the dominator tree is not calculated or has been invalidated, set the
corresponding pointer in the pass state to nullptr. A null pointer
indicates that operations on the dominator tree are not allowed; in
particular, it allows verification to be skipped for such a pass state.
The dominator tree is not calculated if the machine dominator pass was
skipped, which occurs for entities with available_externally linkage.
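The guarded-access pattern this describes, as a hypothetical sketch
('Tree' and 'verifyTree' are illustrative names, not the actual
MachineDominators interface):

  // Hypothetical sketch only.
  struct Tree { /* dominator tree data */ };
  void verifyTree(const Tree &) { /* expensive-checks verification */ }

  void verifyIfAvailable(const Tree *DT) {
    if (!DT)
      return;        // pass was skipped, the tree was never computed
    verifyTree(*DT); // only verify when the tree actually exists
  }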
The change fixes some test failures observed when expensive checks
are enabled.
Differential Revision: https://reviews.llvm.org/D29280
llvm-svn: 296742
Surprisingly, one of the three interference checks in LiveRegMatrix was
using the main live range instead of the appropriate subregister range,
resulting in unnecessarily conservative results.
llvm-svn: 296722
Original commit message:
[ARM] Fix insert point for store rescheduling.
In ARMPreAllocLoadStoreOpt::RescheduleOps, LastOp should be the last
operation which we want to merge. If we break out of the loop because
an operation has the wrong offset, we shouldn't use that operation as
LastOp.
This patch fixes some cases where we would sink stores for no reason.
llvm-svn: 296718
Summary:
This can be used to optimize large multiplications after legalization.
Depends on D29565
Reviewers: mkuper, spatel, RKSimon, zvi, bkramer, aaboud, craig.topper
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29587
llvm-svn: 296711
Until now, we've had to use -global-isel to enable GISel. But using
that on other targets that don't support it will result in an abort, as we
can't build a full pipeline.
Additionally, we want to experiment with enabling GISel by default for
some targets: we can't just enable GISel by default, even among those
targets that do have some support, because the level of support varies.
This first step adds an override for the target to explicitly define its
level of support. For AArch64, do that using
a new command-line option (I know..):
-aarch64-enable-global-isel-at-O=<N>
Where N is the opt-level below which GISel should be used.
Default that to -1, so that we still don't enable GISel anywhere.
We're not there yet!
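The option could be declared roughly like this (a sketch; the exact
declaration in the patch may differ):

  #include "llvm/Support/CommandLine.h"

  // Sketch only; the description text and modifiers are illustrative.
  static llvm::cl::opt<int> EnableGlobalISelAtO(
      "aarch64-enable-global-isel-at-O", llvm::cl::Hidden,
      llvm::cl::desc("Enable GlobalISel at or below an opt level "
                     "(-1 = never)"),
      llvm::cl::init(-1));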
While there, remove a couple of LLVM_UNLIKELYs. Building the pipeline is
such a cold path that in practice this shouldn't matter at all.
llvm-svn: 296710
In ARMPreAllocLoadStoreOpt::RescheduleOps, LastOp should be the last
operation which we want to merge. If we break out of the loop because
an operation has the wrong offset, we shouldn't use that operation as
LastOp.
This patch fixes some cases where we would sink stores for no reason.
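A simplified sketch of the corrected bookkeeping (names such as
'Candidates' and 'Offset' are illustrative, not the actual RescheduleOps
code):

  #include <vector>

  // Illustrative only: an op with the wrong offset ends the run and must
  // not be recorded; only ops we actually merge can become LastOp.
  struct MemOp { int Offset; };

  MemOp *findLastMergedOp(const std::vector<MemOp *> &Candidates,
                          int Expected, int Stride) {
    MemOp *LastOp = nullptr;
    for (MemOp *Op : Candidates) {
      if (Op->Offset != Expected)
        break;         // wrong offset: stop, do NOT take Op as LastOp
      LastOp = Op;     // Op will be merged, so it may serve as LastOp
      Expected += Stride;
    }
    return LastOp;
  }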
Differential Revision: https://reviews.llvm.org/D30124
llvm-svn: 296708
This code starts from the high end of the sorted vector of offsets, and
works backwards: it tries to find contiguous offsets, process them, then
pops them from the end of the vector. Most of the code agrees with this
order of processing, but one loop doesn't: it instead processes elements
from the low end of the vector (which are nodes with unrelated offsets).
Fix that loop to process the correct elements.
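A simplified sketch of walking the group at the high end of the sorted
offsets (illustrative, not the actual code):

  #include <vector>

  // Illustrative only: offsets are sorted ascending and consumed from the
  // back, so a per-group loop must visit the last Count elements, not the
  // first ones.
  void handleOffset(int /*Offset*/) { /* process the corresponding node */ }

  void processTailGroup(std::vector<int> &Offsets, size_t Count) {
    for (size_t I = Offsets.size() - Count; I < Offsets.size(); ++I)
      handleOffset(Offsets[I]);             // contiguous offsets at the end
    Offsets.resize(Offsets.size() - Count); // pop the processed group
  }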
This has a few implications. One, we don't incorrectly return early when
processing multiple groups of offsets in the same block (which allows
rescheduling prera-ldst-insertpt.mir). Two, we pick the correct insert
point for loads, so they're correctly sorted (which affects the
scheduling of vldm-liveness.ll). I think it might also impact some of
the heuristics slightly.
Differential Revision: https://reviews.llvm.org/D30368
llvm-svn: 296701
This is part of the ongoing attempt to improve select codegen for all targets and select
canonicalization in IR (see D24480 for more background). The transform is a subset of what
is done in InstCombine's FoldOpIntoSelect().
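As a C-level analogy of the fold (illustrative; the actual transform
operates on SelectionDAG nodes):

  // Before the fold: a binop applied to a select of two constants.
  int before(bool Cond) { return (Cond ? 2 : 4) << 3; }

  // After the fold: the binop is folded into both arms, leaving a bare
  // select of the folded constants.
  int after(bool Cond) { return Cond ? 16 : 32; }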
I first noticed a regression in the x86 avx512-insert-extract.ll tests with a patch that
hopes to convert more selects to basic math ops. This appears to be a general missing DAG
transform though, so I added tests for all standard binops in rL296621
(PowerPC was chosen semi-randomly; it has scripted FileCheck support, but so do ARM and x86).
The poor output for "sel_constants_shl_constant" is tracked with:
https://bugs.llvm.org/show_bug.cgi?id=32105
Differential Revision: https://reviews.llvm.org/D30502
llvm-svn: 296699
Now that terminators can be EH pads, this code needs to iterate over the
immediate dominators of the EH pad to find a valid insertion point.
Fix for PR32107
Patch by Robert Olliff!
Differential Revision: https://reviews.llvm.org/D30511
llvm-svn: 296698
Take DW_FORM_implicit_const attribute value into account when profiling
DIEAbbrevData.
Currently if we have two similar types with implicit_const attributes and
different values we end up with only one abbrev in .debug_abbrev section.
For example consider two structures: S1 with implicit_const attribute ATTR
and value VAL1 and S2 with implicit_const ATTR and value VAL2.
The .debug_abbrev section will contain only 1 related record:
[N] DW_TAG_structure_type DW_CHILDREN_yes
DW_AT_ATTR DW_FORM_implicit_const VAL1
// ....
This is incorrect, as struct S2 (with VAL2) will use the abbrev record with VAL1.
With this patch we will have two different abbreviations here:
[N] DW_TAG_structure_type DW_CHILDREN_yes
DW_AT_ATTR DW_FORM_implicit_const VAL1
// ....
[M] DW_TAG_structure_type DW_CHILDREN_yes
DW_AT_ATTR DW_FORM_implicit_const VAL2
// ....
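A sketch of the underlying idea, folding the implicit_const value into the
abbreviation's FoldingSet profile (illustrative; these are not the actual
DIEAbbrevData names):

  #include "llvm/ADT/FoldingSet.h"
  #include <cstdint>

  // Illustrative only: include the implicit_const value in the hash so
  // that abbreviations differing only in that value no longer collide.
  static const uint16_t DW_FORM_implicit_const_code = 0x21; // DWARF 5

  void profileAbbrevData(llvm::FoldingSetNodeID &ID, uint16_t Attribute,
                         uint16_t Form, int64_t ImplicitValue) {
    ID.AddInteger(Attribute);
    ID.AddInteger(Form);
    if (Form == DW_FORM_implicit_const_code)
      ID.AddInteger(ImplicitValue); // previously omitted, merging VAL1/VAL2
  }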
llvm-svn: 296691
- We only need the information from the base class, not the additional
details in the LiveInterval class.
- Spread more `const`
- Some code cleanup
llvm-svn: 296684
Summary:
Avoids tons of prologue boilerplate when arguments are passed in memory
and left in memory. This can happen in a debug build or in a release
build when an argument alloca is escaped. This will dramatically affect
the code size of x86 debug builds, because X86 fast isel doesn't handle
arguments passed in memory at all. It only handles the x86_64 case of up
to 6 basic register parameters.
This is implemented by analyzing the entry block before ISel to identify
copy elision candidates. A copy elision candidate is an argument that is
used to fully initialize an alloca before any other possibly escaping
uses of that alloca. If an argument is a copy elision candidate, we set
a flag on the InputArg. If the target generates loads from a fixed
stack object that matches the size and alignment requirements of the
alloca, the SelectionDAG builder will delete the stack object created
for the alloca and replace it with the fixed stack object. The load is
left behind to satisfy any remaining uses of the argument value. The
store is now dead and is therefore elided. The fixed stack object is
also marked as mutable, as it may now be modified by the user, and it
would be invalid to rematerialize the initial load from it.
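A C-level example of the kind of code that benefits (illustrative; whether
an argument is actually passed in memory depends on the target ABI):

  // At -O0 on common x86 ABIs a struct argument like this arrives on the
  // stack and was previously copied into a fresh alloca; with copy elision
  // the alloca can be replaced by the fixed argument stack object.
  struct Big { int Data[16]; };

  int sumBig(Big V) {         // 'V' is passed in memory
    int Sum = 0;
    for (int I = 0; I != 16; ++I)
      Sum += V.Data[I];       // all uses read from V's stack storage
    return Sum;
  }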
Supersedes D28388
Fixes PR26328
Reviewers: chandlerc, MatzeB, qcolombet, inglorion, hans
Subscribers: igorb, llvm-commits
Differential Revision: https://reviews.llvm.org/D29668
llvm-svn: 296683
I am planning to use this tool to find overly noisy (missed) optimization
remarks. Long term, it may actually be better to have another tool that
exports the remarks into an SQLite database and performs queries like this
in SQL.
This splits out the YAML parsing from opt-viewer.py into a new Python module
optrecord.py.
This is the result of the script on the LLVM testsuite:
Total number of remarks 714433
Top 10 remarks by pass:
inline 52%
gvn 24%
licm 13%
loop-vectorize 5%
asm-printer 3%
loop-unroll 1%
regalloc 1%
inline-cost 0%
slp-vectorizer 0%
loop-delete 0%
Top 10 remarks:
gvn/LoadClobbered 20%
inline/Inlined 19%
inline/CanBeInlined 18%
inline/NoDefinition 9%
licm/LoadWithLoopInvariantAddressInvalidated 6%
licm/Hoisted 6%
asm-printer/InstructionCount 3%
inline/TooCostly 3%
gvn/LoadElim 3%
loop-vectorize/MissedDetails 2%
Besides some refactoring, I also changed optrecord not to use the context
to access global data (max_hotness). Because of the separate module this
would have required splitting the context into two. However, it's not
possible to access the optrecord context from the SourceFileRenderer when
calling back to Remark.RelativeHotness.
llvm-svn: 296682
Summary:
This patch moves the clearUnusedBits calls into the two different initialization paths for APInt from a uint64_t. This allows the compiler to better optimize the clearing of the unused bits for the single-word case, and it puts the clearing for the multi-word case into the initSlowCase function to save code. In the common case of initializing with 0, this allows the clearing to be completely optimized out for the single-word case.
On my local x86 build this is showing a ~45kb reduction in the size of the opt binary.
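A simplified stand-in for the restructuring (the class and its members are
illustrative, not APInt's actual internals):

  #include <cstdint>

  // Illustrative only: clearing the unused high bits directly in the
  // single-word init path lets the compiler fold the mask away entirely
  // when the initial value is 0.
  class Int {
    unsigned BitWidth;
    uint64_t Word;             // single-word storage only, for brevity
    void clearUnusedBits() {
      if (unsigned Used = BitWidth % 64)
        Word &= ~uint64_t(0) >> (64 - Used);
    }
  public:
    Int(unsigned Bits, uint64_t Val) : BitWidth(Bits), Word(Val) {
      clearUnusedBits();       // done here rather than in a shared helper
    }
  };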
Reviewers: RKSimon, hans, majnemer, davide, MatzeB
Reviewed By: hans
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D30486
llvm-svn: 296677
This patch adds a MachineSSA pass that coalesces blocks that branch
on the same condition.
Committing on behalf of Lei Huang.
Differential Revision: https://reviews.llvm.org/D28249
llvm-svn: 296670
Add a check that deleted nodes do not get added to the worklist. This can
occur when a node's operand is simplified to an existing node.
This fixes PR32108.
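A generic illustration of the added guard (stand-in types; not the
SelectionDAG API):

  #include <cassert>
  #include <vector>

  // Illustrative only: a node that has already been deleted must never be
  // re-added to the combiner's worklist.
  struct Node { bool Deleted = false; };

  void addToWorklist(std::vector<Node *> &Worklist, Node *N) {
    assert(!N->Deleted && "Deleted node added to the worklist");
    Worklist.push_back(N);
  }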
Reviewers: jyknight, hfinkel, chandlerc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D30506
llvm-svn: 296668
This was due to the test stream choosing an arbitrary partition
index for introducing the discontinuity rather than choosing
an index that would be correctly aligned for the type of data.
Also added an assertion into FixedStreamArray so that this will
be caught on all bots in the future, and not just the UBSan bot.
llvm-svn: 296661