and tremendously less reliant on the optimizer to fix things.
The equality tests in this find implementation always necessarily
examine the entire length of the string, but the code previously
re-checked the size needlessly each time, among other annoyances.
By writing this so simply and directly in terms of memcmp, it is also
about 8x faster in a debug build, which in turn makes FileCheck about 2x
faster in 'ninja check-llvm'. This saves about 8% of the time for
FileCheck-heavy parts of the test suite like the x86 backend tests.
llvm-svn: 247269
Summary:
The BUILD_VECTOR node will truncate its operands to match the
type. We need to take this into account when constant folding -
we need to perform a truncation before constant folding the elements.
This is because the upper bits can change the result, depending on
the operation type (for example, this is the case for min/max).
This change also adds a regression test.
Reviewers: jmolloy
Subscribers: jmolloy, llvm-commits
Differential Revision: http://reviews.llvm.org/D12697
llvm-svn: 247265
GlobalsAA must by definition be preserved in function passes, but the pass manager doesn't know that. Make each pass explicitly preserve GlobalsAA.
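For illustration, a minimal sketch of what this preservation looks like in a legacy pass manager function pass; the wrapper name (GlobalsAAWrapperPass) and header reflect the new AA infrastructure described further down in this log, so adjust to whatever your tree actually spells it:
#include "llvm/Analysis/GlobalsModRef.h"
#include "llvm/IR/Function.h"
#include "llvm/Pass.h"
using namespace llvm;

namespace {
struct ExamplePass : public FunctionPass {
  static char ID;
  ExamplePass() : FunctionPass(ID) {}
  bool runOnFunction(Function &F) override { return false; }
  void getAnalysisUsage(AnalysisUsage &AU) const override {
    // The explicit preservation this change adds to each function pass.
    AU.addPreserved<GlobalsAAWrapperPass>();
  }
};
}
char ExamplePass::ID = 0;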
llvm-svn: 247263
SmallVector to further help debug builds not waste their time calling
one line functions.
To give you an idea of why this is worthwhile, this change alone gets
another >10% reduction in the runtime of TripleTest.Normalization! It's
now under 9 seconds for me. Sadly, this is the end of the easy wins for
that test. Anything further will require some different architecture of
the test itself. Still, I'm pretty happy. 'check-llvm' now is under 35s
for me.
llvm-svn: 247259
These are now quite heavily used in unit tests and the host tools,
making it worth having them be reasonably fast even in an unoptimized
build. This change reduces the total runtime of TripleTest.Normalization
by yet another 10% to 15%. It is now under 10 seconds on my machine, and
the total check-llvm time has dropped from 38s to around 36s.
I experimented with a number of different options, and the code pattern
here consistently seemed to lower the cleanest, likely due to the
significantly simpler CFG and far fewer redundant tests of 'Result'.
llvm-svn: 247257
The tests in isVTRNMask and isVTRN_v_undef_Mask should also check that the elements of the upper and lower half of the vector shuffle occur in the correct order when both halves are used. Without this check the code assumes that it is correct to use vector transpose (vtrn) for the masks <1, 1, 0, 0> and <1, 3, 0, 2>, among others, but the transpose actually incorrectly generates shuffles for <0, 0, 1, 1> and <0, 2, 1, 3> in this case.
Patch by Jeroen Ketema!
llvm-svn: 247254
The logic of this follows something Howard does in libc++ and something
I discussed with Chris eons ago -- for a lot of functions, there is
really no benefit to preserving "debug information" by leaving them
out-of-line even in debug builds. This is especially true as we now do
a very good job of preserving most debug information even in the face of
inlining. There are a bunch of methods in StringRef for which we pay
a completely unacceptable cost in every debug build of every
LLVM developer.
Some day, we should fix Clang/LLVM so that developers can reasonably
use a default of something other than '-O0' and not waste their lives
waiting on *completely* unoptimized code to execute. We should have
a default that doesn't impede debugging while providing at least
plausible performance.
But today is not that day.
So today, I'm applying always_inline to the functions that are really
hurting the critical path for stuff like 'check-llvm'. I'm being very
cautious here, but there are a few other APIs that we really should do
this for as a matter of pragmatism. Hopefully we can rip this out some
day.
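For reference, a sketch of the kind of annotation being applied; LLVM_ATTRIBUTE_ALWAYS_INLINE is the macro from llvm/Support/Compiler.h, while the class and members here are placeholders rather than the actual StringRef diff:
#include "llvm/Support/Compiler.h"
#include <cstddef>

class TinyStringLike {
  const char *Data = nullptr;
  size_t Length = 0;

public:
  // Forced inline so even a -O0 build doesn't pay a call for a one-line accessor.
  LLVM_ATTRIBUTE_ALWAYS_INLINE size_t size() const { return Length; }
  LLVM_ATTRIBUTE_ALWAYS_INLINE bool empty() const { return Length == 0; }
};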
With this change, TripleTest.Normalization runtime decreases by over
10%, and the total 'check-llvm' time on my 48-core box goes from 38s to
just under 37s.
llvm-svn: 247253
'inline' specifier. That specifier may or may not be valid for a given
function, or it may be required for correct linkage even when the
compiler doesn't support the always_inline attribute.
llvm-svn: 247252
re-using the resulting components rather than repeatedly splitting and
re-splitting to compute each component as part of the initializer list.
This is more work on PR23676. Sadly, it doesn't help much. It removes
the constructor from my profile, but doesn't make a sufficient dent in
the total time. But it should play together nicely with subsequent
changes.
llvm-svn: 247250
with the StringRef::split method when used with a MaxSplit argument
other than '-1' (which nobody really does today, but which should
actually work).
The spec claimed both to split up to MaxSplit times, but also to append
<= MaxSplit strings to the vector. One of these doesn't make sense.
Given the name "MaxSplit", let's go with it being a max over how many
*splits* occur, which means the max on how many strings get appended is
MaxSplit+1. I'm not actually sure the implementation correctly provided
this logic either, as it used a really opaque loop structure.
The implementation was also playing weird games with nullptr in the data
field to try to rely on a totally opaque hidden property of the split
method that returns a pair. Nasty IMO.
Replace all of this with what is (IMO) simpler code that doesn't use the
pair returning split method, and instead just finds each separator and
appends directly. I think this is a lot easier to read, and it most
definitely matches the spec. Added some tests that exercise the corner
cases around StringRef() and StringRef("") that all now pass.
I'll start using this in code in the next commit.
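To make the clarified contract concrete, a minimal usage sketch (the input and values are arbitrary): MaxSplit bounds the number of splits, so at most MaxSplit + 1 strings are appended.
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringRef.h"
using namespace llvm;

void splitDemo() {
  SmallVector<StringRef, 4> Parts;
  StringRef("a,b,c,d").split(Parts, ",", /*MaxSplit=*/2);
  // Two splits performed, three strings appended: "a", "b", "c,d".
}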
llvm-svn: 247249
on StringRef. Finding and splitting on a single character is
substantially faster than doing it with even a single-character StringRef
-- we immediately get to a *very* tuned memchr call this way.
Even nicer, we get to this even in a debug build, shaving 18% off the
runtime of TripleTest.Normalization, helping PR23676 some more.
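A minimal sketch of the difference, assuming the single-character find and split overloads on StringRef:
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringRef.h"
using namespace llvm;

void charSplitDemo() {
  StringRef Triple("x86_64-unknown-linux-gnu");
  SmallVector<StringRef, 4> Parts;
  Triple.split(Parts, '-');   // single-char separator: lowers to a tuned memchr
  // Triple.split(Parts, "-") would go through the general StringRef-separator path.
}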
llvm-svn: 247244
CMake.
The Go bindings tests in an unoptimized build take over 30 seconds for
me, making it the slowest test in 'check-llvm' by a factor of two.
I've only rigged this up fully to the CMake build. If someone is
interested in rigging it up to the autoconf build, they're welcome to do
so.
llvm-svn: 247243
Summary:
PR24757 was caused by some incorrect math in
`ScalarEvolution::HowFarToZero` -- the smallest unsigned solution for X
in
2^N * A = 2^N * X
is not necessarily A. (For example, in i8 arithmetic 2 * 130 and 2 * 2
are both 4 modulo 256, so the smallest solution is 2, not 130.)
Reviewers: atrick, majnemer, meheff
Subscribers: llvm-commits, sanjoy
Differential Revision: http://reviews.llvm.org/D12721
llvm-svn: 247242
don't correctly implement the scoping rules of C++11 range-based for
loops. This kind of aliasing isn't a good idea anyway (and wasn't
really intended).
llvm-svn: 247241
manager to avoid a slow linear scan over every immutable pass on every
attempt to find an analysis pass.
This speeds up 'check-llvm' on an unoptimized build for me by 15%, YMMV.
It should also help (a tiny bit) other folks that are really
bottlenecked on repeated runs of tiny pass pipelines across small IR
files.
llvm-svn: 247240
The changes in this patch are as follows:
1. Modify the emitPrologue and emitEpilogue methods to work properly when the prologue and epilogue blocks are not the first/last blocks in the function
2. Fix a bug in PPCEarlyReturn optimization caused by an empty entry block in the function
3. Override the runShrinkWrap PredicateFtor (defined in TargetMachine) to check whether shrink wrapping should run:
Shrink wrapping will run on PPC64 (Little Endian and Big Endian) unless -enable-shrink-wrap=false is specified on the command line.
A new test case, ppc-shrink-wrapping.ll, was created based on the existing shrink wrapping tests for x86, arm, and arm64.
Phabricator review: http://reviews.llvm.org/D11817
llvm-svn: 247237
First, we need to teach isFrameOffsetLegal about STNP.
It already knew about the STP/LDP variants, but those were probably
never exercised, because it's only the load/store optimizer that
generates STP/LDP, and the only user of the method is frame lowering,
which runs earlier.
The STP/LDP cases were wrong: they didn't take into account the fact
that they return two results, not one, so the immediate offset will be
the 4th operand, not the 3rd.
Follow-up to r247234.
llvm-svn: 247236
This sort-of deprecates macho-dump. It may still take a little while
to garbage collect it, but at least there's no real usage of it in
the tree anymore. New tests should always rely on llvm-readobj or
llvm-objdump.
llvm-svn: 247235
We could go through the load/store optimizer and match STNP where
we would have matched a nontemporal-annotated STP, but that's not
reliable enough, as an opportunistic optimization.
Instead, we can guarantee emitting STNP by matching them at ISel.
Since there are no single-input nontemporal stores, we have to
resort to some high-bits-extracting trickery to generate an STNP
from a plain store.
Also, we need to support another, LDP/STP-specific addressing mode,
base + signed scaled 7-bit immediate offset.
For now, only match the base. Let's make it smart separately.
Part of PR24086.
llvm-svn: 247231
The assertion was weaker than it should be and gave the impression we're growing the number of base defining values being considered during the fixed point iteration. That's not true. The tighter form of the assert is useful documentation.
llvm-svn: 247221
All of the complexity is in cleanupret, and it mostly follows the same
codepaths as catchret, except it doesn't take a return value in RAX.
This small example now compiles and executes successfully on win32:
extern "C" int printf(const char *, ...) noexcept;
struct Dtor {
  ~Dtor() { printf("~Dtor\n"); }
};
void has_cleanup() {
  Dtor o;
  throw 42;
}
int main() {
  try {
    has_cleanup();
  } catch (int) {
    printf("caught it\n");
  }
}
Don't try to put the cleanup in the same function as the catch, or Bad
Things will happen.
llvm-svn: 247219
This reapplies commit r247178 after post-commit review from D. Blaikie,
in a way that makes it compatible with the existing API.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 247215
The purpose is to allow a templated wrapper to work with either an
ArrayRef or any convertible container:
template<typename Container>
void wrapper(const Container &Arr) {
  impl(makeArrayRef(Arr));
}
with Container being a std::vector, a SmallVector, or an ArrayRef.
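A brief, self-contained usage sketch of that pattern (the element type and values are only illustrative, and the wrapper is repeated from above):
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"
#include <vector>
using namespace llvm;

static void impl(ArrayRef<int> Arr) { (void)Arr; }

template <typename Container>
void wrapper(const Container &Arr) {
  impl(makeArrayRef(Arr));
}

void caller() {
  std::vector<int> V = {1, 2, 3};
  SmallVector<int, 4> SV;
  SV.push_back(4);
  ArrayRef<int> AR = V;
  wrapper(V);   // via makeArrayRef(const std::vector<T>&)
  wrapper(SV);  // via makeArrayRef(const SmallVectorImpl<T>&)
  wrapper(AR);  // via the no-op makeArrayRef(ArrayRef<T>) overload this commit adds
}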
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 247214
This change simply enhances the existing inference algorithm to handle insertelement instructions by conservatively inserting a new instruction to propagate the vector of associated base pointers. In the process, I'm ripping out the peephole optimizations which mostly helped cover the fact that this hadn't been done.
Note that most of the newly inserted nodes will be nearly immediately removed by the post-insertion optimization pass introduced in r246718. Arguably, we should be trying harder to avoid the malloc traffic here, but I'd rather get the code correct first and then worry about compile time.
Unlike previous extensions of the algorithm to handle more cases, I discovered the existing code was causing miscompiles in some cases. In particular, we had an implicit assumption that the peephole covered *all* insertelement instructions, so if we had a value directly based on an insertelement the peephole didn't cover, we proceeded as if it were a base anyway. Not good. I believe we had the same issue with shufflevector, which is why I adjusted the predicate for them as well.
Differential Revision: http://reviews.llvm.org/D12583
llvm-svn: 247210
Previously, the base pointer algorithm wasn't deterministic. The core fixed point was (of course) deterministic, but we were inserting new nodes and optimizing them in an order which was unspecified and variable. We'd somewhat hacked around this for testing by sorting by value name, but that doesn't solve the general determinism problem.
Instead, we can use the order of traversal over the def/use graph to give us a single consistent ordering. Today, this is a DFS order, but the exact order doesn't matter provided it's deterministic for a given input.
(Q: It is safe to rely on a deterministic order of operands, right?)
Note that this only fixes the determinism within a single inference step. The inference step is currently invoked many times in a non-deterministic order. That's a future change in the sequence. :)
Differential Revision: http://reviews.llvm.org/D12640
llvm-svn: 247208
Visit disjoint sets in a deterministic order based on the maximum BitSetNM
index; otherwise the order in which we visit them will depend on pointer
comparisons. This was being exposed by MSan.
llvm-svn: 247201
The 32-bit tables don't actually contain PC range data, so emitting them
is incredibly simple.
The 64-bit tables, on the other hand, use the same table for state
numbering as well as label ranges. This makes things more difficult, so
it will be implemented later.
llvm-svn: 247192
This change enables EmitRecord to pass the supplied record Code to
EmitRecordWithAbbrevImpl, rather than insert it into the Vals array.
It is an enabler for changing EmitRecord to take an ArrayRef<uintty> instead
of a SmallVectorImpl<uintty>&.
Patch suggested by Duncan P. N. Exon Smith, modified a bit by me to get
correct assertion checking.
llvm-svn: 247186
With subregister liveness enabled we can detect the case where only
parts of a register are live in; this is expressed as a 32-bit lanemask.
The current code only keeps registers in the live-in list and therefore
enumerates all subregisters affected by the lanemask. This turned out to
be too conservative as the subregister may also cover additional parts
of the lanemask which are not live. Expressing a given lanemask by
enumerating a minimum set of subregisters is computationally expensive
so the best solution is to simply change the live-in list to store the
lanemasks as well. This will reduce memory usage for targets using
subregister liveness and slightly increase it for other targets.
Differential Revision: http://reviews.llvm.org/D12442
llvm-svn: 247171
Now that we have an explicit iterator over the idx2MBBMap in SlotIndexes
we can use the fact that segments and the idx2MBBMap are sorted by
SlotIndex position, so we can advance both simultaneously instead of
starting from the beginning for each segment.
This complicates the code for the subregister case somewhat but should
be more efficient and has the advantage that we get the final lanemask
for each block immediately which will be important for a subsequent
change.
Removes the now unused SlotIndexes::findMBBLiveIns function.
Differential Revision: http://reviews.llvm.org/D12443
llvm-svn: 247170
with the new pass manager, and no longer relying on analysis groups.
This builds essentially a ground-up new AA infrastructure stack for
LLVM. The core ideas are the same that are used throughout the new pass
manager: type erased polymorphism and direct composition. The design is
as follows:
- FunctionAAResults is a type-erasing alias analysis results aggregation
interface to walk a single query across a range of results from
different alias analyses. Currently this is function-specific as we
always assume that aliasing queries are *within* a function.
- AAResultBase is a CRTP utility providing stub implementations of
various parts of the alias analysis result concept, notably in several
cases in terms of other more general parts of the interface. This can
be used to implement only a narrow part of the interface rather than
the entire interface. This isn't really ideal; this logic should be
hoisted into FunctionAAResults, as currently it will cause
a significant amount of redundant work, but it faithfully models the
behavior of the prior infrastructure.
- All the alias analysis passes are ported to be wrapper passes for the
legacy PM and new-style analysis passes for the new PM with a shared
result object. In some cases (most notably CFL), this is an extremely
naive approach that we should revisit when we can specialize for the
new pass manager.
- BasicAA has been restructured to reflect that it is much more
fundamentally a function analysis because it uses dominator trees and
loop info that need to be constructed for each function.
All of the references to getting alias analysis results have been
updated to use the new aggregation interface. All the preservation and
other pass management code has been updated accordingly.
The way the FunctionAAResultsWrapperPass works is to detect the
available alias analyses when run, and add them to the results object.
This means that we should be able to continue to respect when various
passes are added to the pipeline; for example, adding CFL or TBAA
passes should just cause their results to be available and to get folded
into this. The exception to this rule is BasicAA which really needs to
be a function pass due to using dominator trees and loop info. As
a consequence, the FunctionAAResultsWrapperPass directly depends on
BasicAA and always includes it in the aggregation.
This has significant implications for preserving analyses. Generally,
most passes shouldn't bother preserving FunctionAAResultsWrapperPass
because rebuilding the results just updates the set of known AA passes.
The exception to this rule are LoopPass instances which need to preserve
all the function analyses that the loop pass manager will end up
needing. This means preserving both BasicAAWrapperPass and the
aggregating FunctionAAResultsWrapperPass.
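A sketch of that preservation rule for a legacy-PM loop pass; the class names follow the text above (notably FunctionAAResultsWrapperPass) and may be spelled differently in the actual tree:
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Analysis/BasicAliasAnalysis.h"
#include "llvm/Analysis/LoopPass.h"
using namespace llvm;

namespace {
struct ExampleLoopPass : public LoopPass {
  static char ID;
  ExampleLoopPass() : LoopPass(ID) {}
  bool runOnLoop(Loop *L, LPPassManager &LPM) override { return false; }
  void getAnalysisUsage(AnalysisUsage &AU) const override {
    // Keep both the function-level BasicAA result and the aggregation alive
    // for the rest of the loop pass manager's pipeline.
    AU.addPreserved<BasicAAWrapperPass>();
    AU.addPreserved<FunctionAAResultsWrapperPass>(); // spelling taken from the text above
  }
};
}
char ExampleLoopPass::ID = 0;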
Now, when preserving an alias analysis, you do so by directly preserving
that analysis. This is only necessary for non-immutable-pass-provided
alias analyses though, and there are only three of interest: BasicAA,
GlobalsAA (formerly GlobalsModRef), and SCEVAA. Usually BasicAA is
preserved when needed because it (like DominatorTree and LoopInfo) is
marked as a CFG-only pass. I've expanded GlobalsAA into the preserved
set everywhere we previously were preserving all of AliasAnalysis, and
I've added SCEVAA in the intersection of that with where we preserve
SCEV itself.
One significant challenge to all of this is that the CGSCC passes were
actually using the alias analysis implementations by taking advantage of
a pretty amazing set of loopholes in the old pass manager's analysis
management code which allowed analysis groups to slide through in many
cases. Moving away from analysis groups makes this problem much more
obvious. To fix it, I've leveraged the flexibility the design of the new
PM components provides to just directly construct the relevant alias
analyses for the relevant functions in the IPO passes that need them.
This is a bit hacky, but should go away with the new pass manager, and
is already in many ways cleaner than the prior state.
Another significant challenge is that various facilities of the old
alias analysis infrastructure just don't fit any more. The most
significant of these is the alias analysis 'counter' pass. That pass
relied on the ability to snoop on AA queries at different points in the
analysis group chain. Instead, I'm planning to build printing
functionality directly into the aggregation layer. I've not included
that in this patch merely to keep it smaller.
Note that all of this needs a nearly complete rewrite of the AA
documentation. I'm planning to do that, but I'd like to make sure the
new design settles, and to flesh out a bit more of what it looks like in
the new pass manager first.
Differential Revision: http://reviews.llvm.org/D12080
llvm-svn: 247167
This introduces a check that the MBBIndexList is sorted as proposed in
http://reviews.llvm.org/D12443 but split up into a separate commit.
llvm-svn: 247166
Instead of extracting both 32-bit components from the 128-bit
register. This produces fewer copies and is easier for
the copy peephole optimizer to understand and see the actual uses
as extracts from a reg_sequence.
This avoids needing to handle subregister composing in the
PeepholeOptimizer's ValueTracker for this case.
llvm-svn: 247162
Summary:
This can be used for distinguishing between cmake and autoconf builds.
Users may need this in order to handle inconsistencies between the
outputs of the two build systems.
Reviewers: echristo, chandlerc, beanz
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11838
llvm-svn: 247159
Summary:
This helps mostly when we use add instructions for address calculations
that contain immediates.
Reviewers: arsenm
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D12256
llvm-svn: 247157
Summary:
We are not scalarizing the wide selects in codegen for i16 and i32 and
therefore we can remove the amortization factor. We still have issues
with i64 vectors in codegen though.
Reviewers: mcrosier
Subscribers: mcrosier, aemerson, llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D12724
llvm-svn: 247156
Summary:
Cross-compilation uses recursive cmake invocations to build native host
tools. These recursive invocations only forward a fixed set of
variables/options, since the native environment is generally the default.
This change adds -DLLVM_TARGET_IS_CROSSCOMPILE_HOST=TRUE to the recursive
cmake invocations, so that cmake files can distinguish these recursive
invocations from top-level ones, which can explain why expected options
are unset.
LLILC will use this to avoid trying to generate its build rules in the
crosscompile native host target (where it is not needed), which would fail
if attempted because LLILC requires a cmake variable passed on the command
line, which is not forwarded in the recursive invocation.
Reviewers: rnk, beanz
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D12679
llvm-svn: 247151
Predicating stores requires creating extra blocks. It's much cleaner if we do this in one pass instead of mutating the CFG while writing vector instructions.
Besides which we can make use of helper functions to update domtree for us, reducing the work we need to do.
llvm-svn: 247139
Removed "cortex-r5f" and "cortex-m4f" from Target Parser, sinced they are
unknown cpu names for llvm and clang. Also updated default FPUs for R5 and M4
accordingly.
Differential Revision: http://reviews.llvm.org/D12692
Change-Id: Ib81c7216521a361d8ee1296e4b6a2aa00bd479c5
llvm-svn: 247136
Summary:
One of the vector splitting paths for extract_vector_elt tries to lower:
define i1 @via_stack_bug(i8 signext %idx) {
  %1 = extractelement <2 x i1> <i1 false, i1 true>, i8 %idx
  ret i1 %1
}
to:
define i1 @via_stack_bug(i8 signext %idx) {
  %base = alloca <2 x i1>
  store <2 x i1> <i1 false, i1 true>, <2 x i1>* %base
  %2 = getelementptr <2 x i1>, <2 x i1>* %base, i32 %idx
  %3 = load i1, i1* %2
  ret i1 %3
}
However, the elements of <2 x i1> are not byte-addressable. The result of this
is that the getelementptr expands to '%base + %idx * (1 / 8)', which simplifies
to '%base + %idx * 0', and then simply '%base', causing all values of %idx to
extract element zero.
This commit fixes this by promoting vector elements smaller than 8 bits to i8
before splitting the vector.
This fixes a number of test failures in pocl.
Reviewers: pekka.jaaskelainen
Subscribers: pekka.jaaskelainen, llvm-commits
Differential Revision: http://reviews.llvm.org/D12591
llvm-svn: 247128
Currently this hits an assert that extload should
always be supported, which assumes integer extloads.
This moves a hack out of SI's argument lowering and
is covered by existing tests.
llvm-svn: 247113
Change `EmitRecordWithAbbrev()` and friends to take an `ArrayRef<T>`
instead of requiring a `SmallVectorImpl<T>`. No functionality change
intended.
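A rough sketch of what the relaxed interface allows at a call site; the surrounding function, abbreviation, and operand values are hypothetical:
#include "llvm/ADT/SmallVector.h"
#include "llvm/Bitcode/BitstreamWriter.h"
using namespace llvm;

void emitExampleRecord(BitstreamWriter &Stream, unsigned Abbrev) {
  // Any contiguous sequence viewable as an ArrayRef now works; no mutable
  // SmallVectorImpl is required. The operand values here are placeholders.
  uint64_t Vals[] = {1, 2, 3};
  Stream.EmitRecordWithAbbrev(Abbrev, makeArrayRef(Vals));
}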
llvm-svn: 247107
Example output:
Linker Options {
  Size: 32
  Count: 2
  Strings [
    Value: -framework
    Value: Cocoa
  ]
}
There were only two tests using this -- so I converted them as part of
this commit rather than separately.
Differential Revision: http://reviews.llvm.org/D12702
llvm-svn: 247106
Typically these are catchpads, which hold data used to decide whether to
catch the exception or continue unwinding. We also shouldn't create MBBs
for catchendpads, cleanupendpads, or terminatepads, since no real code
can live in them.
This fixes a problem where MI passes (like the register allocator) would
try to put code into catchpad blocks, which are not executed by the
runtime. In the new world, blocks ending in invokes now have many
possible successors.
llvm-svn: 247102
Summary:
32-bit funclets have short prologues that allocate enough stack for the
largest call in the whole function. The runtime saves CSRs for the
funclet. It doesn't restore CSRs after we finally transfer control back
to the parent function via a CATCHRET, but that's a separate issue.
32-bit funclets also have to adjust the incoming EBP value, which is
what llvm.x86.seh.recoverframe does in the old model.
64-bit funclets need to spill CSRs as normal. For simplicity, this just
spills the same set of CSRs as the parent function, rather than trying
to compute different CSR sets for the parent function and each funclet.
64-bit funclets also allocate enough stack space for the largest
outgoing call frame, like 32-bit.
Reviewers: majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D12546
llvm-svn: 247092
This change extends the bitset lowering pass to support bitsets that may
contain either functions or global variables. A function bitset is lowered to
a jump table that is laid out before one of the functions in the bitset.
Also add support for non-string bitset identifier names. This allows for
distinct metadata nodes to stand in for names with internal linkage,
as done in D11857.
Differential Revision: http://reviews.llvm.org/D11856
llvm-svn: 247080
Summary:
Add a test for data followed by a 4-byte hash value.
I use a slightly modified Jenkins hash function,
as described in https://en.wikipedia.org/wiki/Jenkins_hash_function
The modification is to ensure that hash(zeros) != 0.
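For reference, a sketch of the one-at-a-time Jenkins hash from the cited page; the non-zero seed below is just one assumed way to get hash(zeros) != 0, not necessarily the modification actually used in the test:
#include <cstddef>
#include <cstdint>

uint32_t jenkinsOneAtATime(const uint8_t *Data, size_t Len) {
  uint32_t Hash = 0x9e3779b9; // assumed non-zero seed so an all-zero input doesn't hash to 0
  for (size_t I = 0; I < Len; ++I) {
    Hash += Data[I];
    Hash += Hash << 10;
    Hash ^= Hash >> 6;
  }
  Hash += Hash << 3;
  Hash ^= Hash >> 11;
  Hash += Hash << 15;
  return Hash;
}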
Reviewers: kcc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D12648
llvm-svn: 247076
Adds vcc to output string input for e32. Allows option
of using e64 encoding with assembler.
Also fixes these instructions not implicitly reading exec.
llvm-svn: 247074
The pass is needed to remove __nvvm_reflect calls when we link in
libdevice bitcode that comes with CUDA.
Differential Revision: http://reviews.llvm.org/D11663
llvm-svn: 247072
Summary: This patch modifies X86TargetLowering::LowerVASTART so that
struct va_list is initialized with 32-bit pointers in x32. It also
includes tests that call @llvm.va_start() for x32.
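For context, the kind of source the new lowering and tests exercise; nothing here is specific to the patch itself:
#include <cstdarg>

// va_start here is what X86TargetLowering::LowerVASTART lowers; on x32 the
// pointers stashed in the va_list must be 32-bit.
int sum(int Count, ...) {
  va_list Args;
  va_start(Args, Count);
  int Total = 0;
  for (int I = 0; I < Count; ++I)
    Total += va_arg(Args, int);
  va_end(Args);
  return Total;
}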
Patch by João Porto
Subscribers: llvm-commits, hjl.tools
Differential Revision: http://reviews.llvm.org/D12346
llvm-svn: 247069
This allows backends which don't use a traditional register allocator,
but do need PHI lowering and other passes, to use the default
TargetPassConfig::addFastRegAlloc and
TargetPassConfig::addOptimizedRegAlloc implementations.
Differential Revision: http://reviews.llvm.org/D12691
llvm-svn: 247065
These were marked as WriteSALU, which is low latency.
I'm guessing at the value to use, but it should probably
be considered the highest latency instruction.
I'm not sure this has any actual effect, since hasSideEffects is
probably preventing these from being moved.
llvm-svn: 247060
The old implementation assumed LP64, which is broken for x32. Specifically, the
MOV8rm_NOREX and MOV8mr_NOREX instructions, when selected, would cause a 'Cannot
emit physreg copy instruction' error message to be reported.
This patch also enables the h-register*.ll tests for x32.
Differential Revision: http://reviews.llvm.org/D12336
Patch by João Porto
llvm-svn: 247058
sub C, x -> add (sub 0, x), C for DS offsets.
This is mostly to fix regressions that show up when
SeparateConstOffsetFromGEP is enabled.
llvm-svn: 247054
This corner case happens when we have an irreducible SCC that is
deeply nested. As we work down the tree, the backedge masses start
getting smaller and smaller until we reach one that is down to 0.
Since we distribute the incoming mass using the backedge masses as
weight, the distributor does not allow zero weights. So, we simply
ignore them (which will just use the weights of the non-zero nodes).
llvm-svn: 247050
.align directive refuses alignment 0 -- a comment in the code hints that this is
done for GNU as compatibility, but it seems GNU as accepts .align 0
(and silently rounds up alignment to 1).
Differential Revision: http://reviews.llvm.org/D12682
llvm-svn: 247048
- Move tests only exercising instsimplify to instsimplify's apint-or.ll
- Actually test the CHECK lines in instsimplify's apint-or.ll
- Merge the remaining tests in apint-or1.ll and apint-or2.ll, use FileCheck
llvm-svn: 247045
removes cast by performing the lshr on smaller types. However, currently there
is no trunc(lshr (sext A), Cst) variant.
This patch adds such an optimization by transforming trunc(lshr (sext A), Cst)
to ashr A, Cst.
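A small self-contained check of the identity behind this transform, written at the source level for an 8-bit value widened to 32 bits and relying on the usual two's-complement behaviour of signed right shift and narrowing conversions:
#include <cassert>
#include <cstdint>

int main() {
  for (int V = -128; V <= 127; ++V) {
    int8_t A = static_cast<int8_t>(V);
    for (unsigned C = 0; C < 8; ++C) {
      uint32_t Wide = static_cast<uint32_t>(static_cast<int32_t>(A)); // sext A to i32
      int8_t ViaTrunc = static_cast<int8_t>(Wide >> C);               // trunc(lshr(sext A), C)
      int8_t ViaAshr = static_cast<int8_t>(A >> C);                   // ashr A, C
      assert(ViaTrunc == ViaAshr);
    }
  }
  return 0;
}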
Differential Revision: http://reviews.llvm.org/D12520
llvm-svn: 246997
Summary: And define them to have noop casts with address spaces 0-255.
Reviewers: pekka.jaaskelainen
Subscribers: pekka.jaaskelainen, llvm-commits
Differential Revision: http://reviews.llvm.org/D12678
llvm-svn: 246990
SelectT2ShifterOperandReg has identical behaviour to SelectImmShifterOperand,
so get rid of it and use SelectImmShifterOperand instead.
Differential Revision: http://reviews.llvm.org/D12195
llvm-svn: 246962
Trivial multiplication by zero may survive the worklist. We tried to
reassociate the multiplication with a division instruction, causing us
to divide by zero; bail out instead.
This fixes PR24726.
llvm-svn: 246939
In searching for a fix for the underlying code-quality bug highlighted by
r246937 (that SDAG simplification can lead to us generating an ISD::OR node
with a constant zero LHS), I ran across this:
We generically canonicalize commutative binary-operation nodes in SDAG getNode
so that, if only one operand is a constant, it will be on the RHS. However, we
were doing this only after a bunch of constant-based simplification checks that
all assume this canonical form (that any constant will be on the RHS). Moving
the operand-swapping canonicalization prior to these checks seems like the
right thing to do (and, as it turns out, causes SDAG to completely fold away the
computation in test/CodeGen/ARM/2012-11-14-subs_carry.ll, just like InstCombine
would do).
llvm-svn: 246938
To commute a trivial rlwimi instruction (meaning one with a full mask and zero
shift), we'd need the ability to form an all-zero mask (instead of an all-one
mask) using rlwimi. We can't represent this, however, and we'll miscompile code
if we try.
The code quality problem that this highlights (that SDAG simplification can
lead to us generating an ISD::OR node with a constant zero LHS) will be fixed
as a follow-up.
Fixes PR24719.
llvm-svn: 246937
Summary:
Update the Go bindings to DIBuilder to match
the split of creating local variables into
auto and parameter variables.
Reviewers: pcc
Subscribers: llvm-commits, axw
Differential Revision: http://reviews.llvm.org/D11864
llvm-svn: 246935
Summary:
This diff attempts to address the concerns raised in
http://reviews.llvm.org/D12488.
We introduce a new USE_SHARED option to llvm_config,
which, if set, causes the target to be linked against
libLLVM.
add_llvm_utility now uniformly disables linking against
libLLVM. These utilities are not intended for distribution,
and this keeps the option handling more centralised.
llvm-shlib is now processed before any other "tools"
subdirectories, ensuring the libLLVM target is defined
before its dependents.
One main difference from what was requested: llvm_config
does not prune LLVM_DYLIB_COMPONENTS from the components
passed into explicit_llvm_config. This is because the "all"
component does something special, adding additional
libraries (namely libLTO). Adding the component libraries
after libLLVM should not be a problem, as symbols will be
resolved in libLLVM first.
Finally, I'm not really happy with the
DISABLE_LLVM_LINK_LLVM option, but I'm not sure of a
better way to get the following:
- link all tools and shared libraries to libLLVM if
LLVM_LINK_LLVM_DYLIB is set
- some way of explicitly *not* doing so for utilities
and libLLVM itself
Suggestions for improvement here are particularly welcome.
Reviewers: beanz
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D12590
llvm-svn: 246918
This commit accomplishes two goals:
1) It's a step forward in deprecating macho-dump; now fewer than 40 tests
rely on it.
2) It tests all the MachO-specific features introduced in llvm-readobj in
the following commits: r246789, r246665, r246474.
While the conversion is mostly mechanical (I double-checked all the
tests' output one by one, but still), a post-commit review is greatly
appreciated.
llvm-svn: 246904
PPCISelDAGToDAG has a transformation that generates a rlwimi instruction from
an input pattern that looks like this:
and(or(x, c1), c2)
but the associated logic does not work if there are bits that are 1 in c1 but 0
in c2 (these are normally canonicalized away, but that can't happen if the 'or'
has other users). Make sure we abort the transformation if such bits are
discovered.
Fixes PR24704.
llvm-svn: 246900
The variable is actually called ANDROID_SERIAL.
This was not exercised on the bots until today.
Should fix the sanitizer-x86_64-linux failures.
llvm-svn: 246898
Casting to unsigned long can cause the time to get truncated to 32 bits,
making it appear to be a valid timestamp. Just use isUInt<32> instead.
llvm-svn: 246840
This adds a basic cost model for interleaved-access vectorization (and a better
default for shuffles), and enables interleaved-access vectorization by default.
The relevant difference from the default cost model for interleaved-access
vectorization, is that on PPC, the shuffles that end up being used are *much*
cheaper than modeling the process with insert/extract pairs (which are
quite expensive, especially on older cores).
llvm-svn: 246824
On the A2, with an eye toward QPX unaligned-load merging, we should always use
aggressive interleaving. It is generally superior to only using concatenation
unrolling.
llvm-svn: 246819
When forming permutation-based unaligned vector loads, we need to know whether
it is valid to read ahead of the requested address by a full vector length.
Doing so is more efficient (and allows for more CSE with later loads), but
could trigger a page fault if invalid. To determine validity, we look for other
loads in the same block that access the relevant address range.
The relevant point here is that we need to do this as part of the process of
forming permutation-based vector loads, and this happens quite early in the
SDAG pipeline - specifically before many of the address calculations are fully
canonicalized. As a result, we need to try harder to recognize base+offset
address computations, because they still might appear as a chain of adds
(base+offset+offset, for example). To account for this, we'll look through
chains of adds, accumulating the constant offsets.
llvm-svn: 246813
As a first step towards a new implementation of the base pointer inference algorithm, introduce an abstraction for BDVs, strengthen the assertions around them, and rewrite the BDV relation code in terms of the abstraction, which includes an explicit notion of whether the BDV is also a base. The latter is motivated by the fact we had a bug where insertelement was always assumed to be a base pointer even though the BDV code knew it wasn't. The strengthened assertions in this patch would have caught that bug.
The next step will be to separate the DefiningValueMap into a BDV use list cache (entirely within findBasePointers) and a base pointer cache. Having the former will allow me to use a deterministic visit order when visiting BDVs in the inference algorithm and remove a bunch of ordering related hacks. Before actually doing the last step, I'm likely going to extend the lattice with a 'BaseN' (seen only base inputs) state so that I can kill the post process optimization step.
Phabricator Revision: http://reviews.llvm.org/D12608
llvm-svn: 246809
Pre-P8, when we generate code for unaligned vector loads (for Altivec and QPX
types), even when accounting for the combining that takes place for multiple
consecutive such loads, there is at least one load instruction and one
permutation for each load. Make sure the cost reported reflects the cost of the
permutes as well.
llvm-svn: 246807
If you compute the MMO offset using unsigned arithmetic, you end up with a
large positive offset instead of a small negative one. In theory, this could
cause bad instruction-scheduling decisions later.
I noticed this by inspection from the debug output, and using that for the
regression test is the best I can do right now.
llvm-svn: 246805
The visit order being used in the base pointer inference algorithm is currently non-deterministic. When working on http://reviews.llvm.org/D12583, I discovered that we were relying on a peephole optimization to get deterministic ordering in one of the test cases.
This change is intended to let me test and land http://reviews.llvm.org/D12583. The current code will not be long lived. I'm starting to investigate a rewrite of the algorithm which will combine the post-process step into the initial algorithm and make the visit order deterministic. Before doing that, I wanted to make sure the existing code was complete and the tests were stable. Hopefully, patches should be up for review for the new algorithm this week or early next.
llvm-svn: 246801
* ``the value cannot fit within the destination type`` is ambiguous.
It could mean overflow, underflow (not in the IEEE-754 sense) or a
result that cannot be exactly represented and requires rounding or it
could mean some combination of these. The semantics now state it means
overflow **only**.
* Using "truncation" in the semantics is very misleading given that it
doesn't necessarily truncate (i.e. round to zero). For example on
x86_64 with SSE2 this is currently mapped to the cvtsd2ss instruction,
whose rounding behaviour is dependent on the MXCSR register, which
is usually set to round to nearest even by default. The semantics
now state that the rounding mode is undefined.
llvm-svn: 246792
Summary:
This fixes bugzilla bug 24656. Fixes the case where there is a forward
reference to a global variable using an ID (i.e. @0). It does this by
passing the address space of the initializer pointer for which the
forward referenced global is used.
llvm-svn: 246788