properly account for the *global* probability of the edge being taken.
This manifested as a very large number of unconditional branches to
blocks being merged against the CFG even though they weren't
particularly hot within the CFG.
The fix is to check whether the edge being merged is both locally hot
relative to other successors for the source block, and globally hot
compared to other (unmerged) predecessors of the destination block.
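As a rough sketch of the new check (illustrative only, with made-up weight
variables rather than the pass's real probability types):

#include <cstdint>

bool shouldMergeEdge(uint32_t EdgeWeight, uint32_t MaxOtherSuccWeight,
                     uint32_t MaxOtherPredWeight) {
  // Locally hot: the edge dominates the other successors of the source block.
  bool LocallyHot = EdgeWeight > MaxOtherSuccWeight;
  // Globally hot: it also dominates the other (unmerged) predecessors of the
  // destination block, so it accounts for the *global* probability as well.
  bool GloballyHot = EdgeWeight > MaxOtherPredWeight;
  return LocallyHot && GloballyHot;
}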
This introduces a new crasher on GCC single-source, but it's currently
behind a flag, and Ben has offered to work on the reduction. =]
llvm-svn: 145010
is actually being tested. Also add some FileCheck goodness to much more
carefully ensure that the result is the desired result. Before this test
would only have failed through an assert failure if the underlying fix
were reverted.
Also, add some weight metadata and a comment explaining exactly what is
going on to a tricky section of the test case. Originally, we were
getting very unlucky and trying to form a block chain that isn't
actually profitable. I'm working on a fix to avoid forming these
unprofitable chains, and that would also have masked any failure from
this test case. The easy solution is to add some metadata that makes it
*really* profitable to form the bad chain here.
llvm-svn: 145006
formation phase and into the initial walk of the basic blocks. We
essentially pre-merge all blocks where unanalyzable fallthrough exists,
as we won't be able to update the terminators effectively after any
reorderings. This is quite a bit more principled as there may be CFGs
where the second half of the unanalyzable pair has some analyzable
predecessor that gets placed first. Then it may get placed next,
implicitly breaking the unanalyzable branch even though we never even
looked at the part that isn't analyzable. I've included a test case that
triggers this (thanks Benjamin yet again!), and I'm hoping to synthesize
some more general ones as I dig into related issues.
Also, to make this new scheme work we have to be able to handle branches
into the middle of a chain, so add this check. We always fall back on the
incoming ordering.
Finally, this starts to really underscore a known limitation of the
current implementation -- we don't consider broken predecessors when
merging successors. This can cause major missed opportunities, and is
something I'm planning on looking at next (modulo more bug reports).
llvm-svn: 144994
The loop tree's inclusive block lists are painful and expensive to
update. (I have no idea why they're inclusive). The design was
supposed to handle this case but the implementation missed it and my
unit tests weren't thorough enough.
Fixes PR11335: loop unroll update.
llvm-svn: 144970
The right way to check for a binary operation is
cast<BinaryOperator>. The original check: cast<Instruction> &&
numOperands() == 2 would match phi "instructions", leading to an
infinite loop in an extreme corner case: a useless phi with operands
[self, constant] that prior optimization passes failed to remove,
being used in the loop by another useless phi, in turn being used by an
lshr or udiv.
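A minimal sketch of the safer check using LLVM's casting utilities
(illustrative, not the exact patch):

#include "llvm/Instructions.h"  // header path as of this vintage of the tree
using namespace llvm;

// A useless PHI has two operands, but it is not a binary operator, so testing
// the operand count on an Instruction is not enough.
static bool isRealBinaryOp(const Value *V) {
  return isa<BinaryOperator>(V);  // false for a PHINode, true for lshr, udiv, ...
}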
Fixes PR11350: runaway iteration assertion.
llvm-svn: 144935
ADDs. MaxOffs is used as a threshold to limit the size of the offset. The
tradeoffs are: (1) If we can't materialize the large constant then we'll cause
fast-isel to bail. (2) Too large of an offset can't be directly encoded in the
ADD, resulting in a MOV+ADD. Generally not a bad thing because otherwise we
would have had ADD+ADD, but on Thumb this turns into a MOVS+MOVT+ADD. Working
on a fix for that. (3) Conversely, with too low a threshold we'll miss
opportunities to coalesce ADDs.
rdar://10412592
llvm-svn: 144886
We don't (yet) have the granularity in the fixups to be specific about which
bitranges are affected. That's a future cleanup, but we're not there yet.
llvm-svn: 144852
For example,
vld1.f64 {d2-d5}, [r2,:128]!
Should be equivalent to:
vld1.f64 {d2,d3,d4,d5}, [r2,:128]!
It's not documented syntax in the ARM ARM, but it is consistent with what's
accepted for VLDM/VSTM and is unambiguous in meaning, so it's a good thing to
support.
rdar://10451128
llvm-svn: 144727
When the 3rd operand is not a low-register, and the first two operands are
the same low register, the parser was incorrectly trying to use the 16-bit
instruction encoding.
rdar://10449281
llvm-svn: 144679
has a reference to it. Unfortunately, that doesn't work for codegen passes
since we don't get notified of MBBs being deleted (the original BB stays).
Use that fact to our advantage and after printing a function, check if
any of the IL BBs corresponds to a symbol that was not printed. This fixes
pr11202.
llvm-svn: 144674
block sequence when recovering from unanalyzable control flow
constructs, *always* use the function sequence. I'm not sure why I ever
went down the path of trying to use the loop sequence; it is
fundamentally not the correct sequence to use. We're trying to preserve
the incoming layout in the cases of unreasonable control flow, and that
is only encoded at the function level. We already have a filter to
select *exactly* the sub-set of blocks within the function that we're
trying to form into a chain.
The resulting code layout is also significantly better because of this.
In several places we were ending up with completely unreasonable control
flow constructs due to the ordering chosen by the loop structure for its
internal storage. This change removes a completely wasteful vector of
basic blocks, saving memory allocation in the common case even though it
costs us CPU in the fairly rare case of unnatural loops. Finally, it
fixes the latest crasher reduced out of GCC's single source. Thanks
again to Benjamin Kramer for the reduction, my bugpoint skills failed at
it.
llvm-svn: 144627
Two new TargetInstrInfo hooks let the target tell ExecutionDepsFix
about instructions with partial register updates causing false unwanted
dependencies.
The ExecutionDepsFix pass will break the false dependencies if the
updated register was written in the previous N instructions.
The small loop added to sse-domains.ll runs twice as fast with
dependency-breaking instructions inserted.
llvm-svn: 144602
and stores capture) to permit the caller to see each capture point and decide
whether to continue looking.
Use this inside memdep to do an analysis that basicaa won't do. This lets us
solve another devirtualization case, fixing PR8908!
llvm-svn: 144580
instructions of the two-address operands) in order to avoid inserting copies.
This fixes the few regressions introduced when the two-address hack was
disabled (without regressing the improvements).
rdar://10422688
llvm-svn: 144559
the sum of the edge weights not overflowing uint32, and crashed when
they did. This is generally safe as BranchProbabilityInfo tries to
provide this guarantee. However, the CFG can get modified during codegen
in a way that grows the *sum* of the edge weights. This doesn't seem
unreasonable (imagine just adding more blocks all with the default
weight of 16), but it is hard to come up with a case that actually
triggers 32-bit overflow. Fortunately, the single-source GCC build is
good at this. The solution isn't very pretty, but it's no worse than the
previous code. We're already summing all of the edge weights on each
query, we can sum them, check for an overflow, compute a scale, and sum
them again.
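Concretely, the query now does something like this sketch (hypothetical
helper, not the actual code):

#include <cstdint>
#include <limits>
#include <vector>

static uint32_t sumEdgeWeights(const std::vector<uint32_t> &Weights) {
  // First pass: sum in 64 bits and check whether the total fits in uint32.
  uint64_t Sum = 0;
  for (uint32_t W : Weights)
    Sum += W;
  uint64_t Max = std::numeric_limits<uint32_t>::max();
  uint32_t Scale = Sum > Max ? uint32_t(Sum / Max + 1) : 1;
  // Second pass: sum again with every weight scaled down to avoid overflow.
  uint32_t Scaled = 0;
  for (uint32_t W : Weights)
    Scaled += W / Scale;
  return Scaled;
}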
I've included a *greatly* reduced test case out of the GCC source that
triggers it. It's a pretty lame test, as it clearly is just barely
triggering the overflow. I'd like to have something that is much more
definitive, but I don't understand the fundamental pattern that triggers
an explosion in the edge weight sums.
The buggy code is duplicated within this file. I'll collapse them into
a single implementation in a subsequent commit.
llvm-svn: 144526
get loop info structures associated with them, and so we need some way
to make forward progress selecting and placing basic blocks. The
technique used here is pretty brutal -- it just scans the list of blocks
looking for the first unplaced candidate. It keeps placing blocks like
this until the CFG becomes tractable.
The cost is somewhat unfortunate; it requires allocating a vector of all
basic block pointers eagerly. I have some ideas about how to simplify
and optimize this, but I'm trying to get the logic correct first.
Thanks to Benjamin Kramer for the reduced test case out of GCC. Sadly
there are other bugs that GCC is tickling that I'm reducing and working
on now.
llvm-svn: 144516
second algorithm, but only loosely. It is more heavily based on the last
discussion I had with Andy. It continues to walk from the inner-most
loop outward, but there is a key difference. With this algorithm we
ensure that as we visit each loop, the entire loop is merged into
a single chain. At the end, the entire function is treated as a "loop",
and merged into a single chain. This chain forms the desired sequence of
blocks within the function. Switching to a single algorithm removes my
biggest problem with the previous approaches -- they had different
behavior depending on which system triggered the layout. Now there is
exactly one algorithm and one basis for the decision making.
The other key difference is how the chain is formed. This is based
heavily on the idea Andy mentioned of keeping a worklist of blocks that
are viable layout successors based on the CFG. Having this set allows us
to consistently select the best layout successor for each block. It is
expensive though.
The code here remains very rough. There is a lot that needs to be done
to clean up the code, and to make the runtime cost of this pass much
lower. Very much WIP, but this was a giant chunk of code and I'd rather
folks see it sooner than later. Everything remains behind a flag of
course.
I've added a couple of tests to exercise the issues that this iteration
was motivated by: loop structure preservation. I've also fixed one test
that was exhibiting the broken behavior of the previous version.
llvm-svn: 144495
SimplifyAddress to handle either a 12-bit unsigned offset or the ARM +/-imm8
offsets (addressing mode 3). This enables a load followed by an integer
extend to be folded into a single load.
For example:
ldrb r1, [r0]        ldrb r1, [r0]
uxtb r2, r1     =>
mov  r3, r2          mov  r3, r1
llvm-svn: 144488
It was off by default.
The new register allocators don't have the problems that made it
necessary to reallocate registers during stack slot coloring.
llvm-svn: 144481
instance and a concrete inlined instance are the use of DW_TAG_subprogram
instead of DW_TAG_inlined_subroutine and who owns the tree.
We were also omitting DW_AT_inline from the abstract roots. To fix this,
make sure we mark abstract instance roots with DW_AT_inline even when
we have only out-of-line instances referring to them with DW_AT_abstract_origin.
FileCheck is not a very good tool for tests like this; maybe we should add
a -verify mode to llvm-dwarfdump.
llvm-svn: 144441
lead to it trying to re-mark a value marked as a constant with a different value. It also appears to trigger very rarely.
Fixes PR11357.
llvm-svn: 144352
Get the source register that isn't tied to the destination register correct,
even when the assembly source operand order is backwards.
rdar://10428630
llvm-svn: 144322
instruction lower optimization" in the pre-RA scheduler.
The optimization, or rather the hack, was done before the MI use-list was
available. Now we should be able to implement it in a better way, perhaps in
the two-address pass until a MI scheduler is available.
Now that the scheduler has to backtrack to handle call sequences, adding
artificial scheduling constraints is just not safe. Furthermore, the hack
does not take all the other scheduling decisions into consideration, so it's
just as likely to pessimize code. So I view disabling this optimization as
goodness regardless of PR11314.
llvm-svn: 144267
The TII.foldMemoryOperand hook preserves implicit operands from the
original instruction. This is not what we want when those implicit
operands refer to the register being spilled.
Implicit operands referring to other registers are preserved.
This fixes PR11347.
llvm-svn: 144247
Currently this checks alignment and killing stores on a power-of-2 boundary, as this is likely
to trim the size of the earlier store without breaking large vector stores into scalar ones.
Fixes <rdar://problem/10140300>
llvm-svn: 144239
dragonegg self-host buildbot will recover (it is complaining about object
files differing between different build stages). Original commit message:
Add a hack to the scheduler to disable pseudo-two-address dependencies in
basic blocks containing calls. This works around a problem in which
these artificial dependencies can get tied up in calling sequence
scheduling in a way that makes the graph unschedulable with the current
approach of using artificial physical register dependencies for calling
sequences. This fixes PR11314.
llvm-svn: 144188
During the initial RPO traversal of the basic blocks, remember the ones
that are incomplete because of back-edges from predecessors that haven't
been visited yet.
After the initial RPO, revisit all those loop headers so the incoming
DomainValues on the back-edges can be properly collapsed.
This will properly fix execution domains on software pipelined code,
like the included test case.
llvm-svn: 144151
basic blocks containing calls. This works around a problem in which
these artificial dependencies can get tied up in calling sequence
scheduling in a way that makes the graph unschedulable with the current
approach of using artificial physical register dependencies for calling
sequences. This fixes PR11314.
llvm-svn: 144124
Add support for trimming constants to GetDemandedBits. This fixes some funky
constant generation that occurs when stores are expanded for targets that don't
support unaligned stores natively.
llvm-svn: 144102
callee's responsibility to sign or zero-extend the return value. The additional
test case just checks to make sure the calls are selected (i.e., -fast-isel-abort
doesn't assert).
llvm-svn: 144047
DomainValues that are only used by "don't care" instructions are now
collapsed to the first possible execution domain after all basic blocks
have been processed. This typically means the PS domain on x86.
For example, the vsel_i64 and vsel_double functions in sse2-blend.ll are
completely collapsed to the PS domain instead of containing a mix of
execution domains created by isel.
llvm-svn: 144037
The xorps instruction is smaller than pxor, so prefer that encoding.
The ExecutionDepsFix pass will switch the encoding to pxor and xorpd
when appropriate.
llvm-svn: 143996
zero-extend the constant integer encoding. Test case provides testing for
both call parameters and materialization of i1, i8, and i16 types.
llvm-svn: 143821
This is currently only done if the later store is writing to a power-of-2 address or
has the same alignment as the earlier store, as then it's likely not to break up
large stores into smaller ones.
Fixes <rdar://problem/10140300>
llvm-svn: 143630
We've been hitting asserts in this code due to the many supported
combinations of modes (iv-rewrite/no-iv-rewrite) and IV types. This
second rewrite of the code attempts to deal with these cases systematically.
llvm-svn: 143546
it is separating the directory part from the basename of the FileName. Noticed
that this:
.file 1 "dir/foo"
when assembled got the two parts switched. Using the Mac OS X dwarfdump tool
it can be seen easily:
% dwarfdump -a a.out
include_directories[ 1] = 'foo'
Dir Mod Time File Len File Name
---- ---------- ---------- ---------------------------
file_names[ 1] 1 0x00000000 0x00000000 dir
...
Which should be:
...
include_directories[ 1] = 'dir'
Dir Mod Time File Len File Name
---- ---------- ---------- ---------------------------
file_names[ 1] 1 0x00000000 0x00000000 foo
llvm-svn: 143521
with the given predicate, it matches any condition and returns the
predicate - d'oh! Original commit message:
The expression icmp eq (select (icmp eq x, 0), 1, x), 0 folds to false.
Spotted by my super-optimizer in 186.crafty and 450.soplex. We really
need a proper infrastructure for handling generalizations of this kind
of thing (which occur a lot), however this case is so simple that I decided
to go ahead and implement it directly.
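In C++ terms the folded expression is the following (hypothetical function,
just to show why the fold is valid):

static bool f(int x) {
  // select (icmp eq x, 0), 1, x  is  (x == 0 ? 1 : x).  The true arm is the
  // constant 1 and the false arm is x, which is only selected when x != 0,
  // so the result is never 0 and the outer icmp eq ..., 0 is always false.
  return (x == 0 ? 1 : x) == 0;  // always false
}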
llvm-svn: 143318
On x86: (shl V, 1) -> add V,V
Hardware support for vector-shift is sparse and in many cases we scalarize the
result. Additionally, on Sandy Bridge padd is faster than shl.
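For a single lane the transform is just the identity below; the win is that
padd on vectors is cheap while a general vector shift often is not
(illustrative scalar code only):

static unsigned shiftLeftByOne(unsigned V) {
  return V + V;  // same value as V << 1
}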
llvm-svn: 143311
Spotted by my super-optimizer in 186.crafty and 450.soplex. We really
need a proper infrastructure for handling generalizations of this kind
of thing (which occur a lot), however this case is so simple that I decided
to go ahead and implement it directly.
llvm-svn: 143214
fixes: Use a separate register, instead of SP, as the
calling-convention resource, to avoid spurious conflicts with
actual uses of SP. Also, fix unscheduling of calling sequences,
which can be triggered by pseudo-two-address dependencies.
llvm-svn: 143206
Outside an IT block, "add r3, #2" should select a 32-bit wide encoding
rather than generating an error indicating the 16-bit encoding is only
legal in an IT block (outside, the 'S' suffix is required for the 16-bit
encoding).
rdar://10348481
llvm-svn: 143201
Don't assume APInt::getRawData() holds target-aware or host-specific endianness; getRawData()[0] holds the lowest (least-significant) i64, even on a big-endian host.
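A minimal illustration of that invariant (hypothetical helper, not part of
the patch):

#include "llvm/ADT/APInt.h"
#include <cstdint>

// getRawData()[0] is the least-significant 64 bits of the value, regardless
// of whether the host is little- or big-endian.
static uint64_t lowestWord(const llvm::APInt &V) {
  return V.getRawData()[0];
}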
FIXME: Add a testcase for big endian target.
FIXME: Ditto on CompileUnit::addConstantFPValue() ?
llvm-svn: 143194
it fixes the dragonegg self-host (it looks like gcc is miscompiled).
Original commit messages:
Eliminate LegalizeOps' LegalizedNodes map and have it just call RAUW
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.
Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.
Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.
Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.
This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.
Delete #if 0 code accidentally left in.
llvm-svn: 143188
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.
Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.
Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.
Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.
This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.
llvm-svn: 143177
using BinaryOperator (which only works for instructions) when it should have
been a cast to OverflowingBinaryOperator (which also works for constants).
While there, correct a few other dubious looking uses of BinaryOperator.
Thanks to Chad Rosier for the testcase. Original commit message:
My super-optimizer noticed that we weren't folding this expression to
true: (x *nsw x) sgt 0, where x = (y | 1). This occurs in 464.h264ref.
llvm-svn: 143125
not depend on In32BitMode. Use the sysexitq mnemonic for the version with the
REX.W prefix and allow it only In64BitMode. rdar://9738584
llvm-svn: 143112
We were parsing label references to the i12 encoding, which isn't right.
They need to go to the pci variant instead.
More of rdar://10348687
llvm-svn: 143068
MORESTACK_RET_RESTORE_R10; which are lowered to a RET and a RET
followed by a MOV respectively. Having a fake instruction prevents
the verifier from seeing a MachineBasicBlock end with a
non-terminator (MOV). It also prevents the rather eccentric case of a
MachineBasicBlock ending with RET but having successors nevertheless.
Patch by Sanjoy Das.
llvm-svn: 143062
LLVM 2.9. My understanding is that we plan to maintain compatibility with 2.9
until the 3.1 release. At that time we can generate new test cases using LLVM
3.0.
llvm-svn: 142958
bots. Original commit messages:
- Reapply r142781 with fix. Original message:
Enhance SCEV's brute force loop analysis to handle multiple PHI nodes in the
loop header when computing the trip count.
With this, we now constant evaluate:
struct ListNode { const struct ListNode *next; int i; };
static const struct ListNode node1 = {0, 1};
static const struct ListNode node2 = {&node1, 2};
static const struct ListNode node3 = {&node2, 3};
int test() {
  int sum = 0;
  for (const struct ListNode *n = &node3; n != 0; n = n->next)
    sum += n->i;
  return sum;
}
- Now that we look at all the header PHIs, we need to consider all the header PHIs
when deciding that the loop has stopped evolving. Fixes miscompile in the gcc
torture testsuite!
llvm-svn: 142919
classifying many edges as exiting which were in fact not. These mainly
formed edges into sub-loops. It was also not correctly classifying all
returning edges out of loops as leaving the loop. With this match most
of the loop heuristics are more rational.
Several serious regressions on loop-intensive benchmarks like perlbench's
loop tests when built with -enable-block-placement are fixed by these
updated heuristics. Unfortunately they in turn uncover some other
regressions. There are still several improvements that should be made to
loop heuristics including trip-count, and early back-edge management.
llvm-svn: 142917
the dragonegg and llvm-gcc self-host buildbots. Original commit
messages:
- Reapply r142781 with fix. Original message:
Enhance SCEV's brute force loop analysis to handle multiple PHI nodes in the
loop header when computing the trip count.
With this, we now constant evaluate:
struct ListNode { const struct ListNode *next; int i; };
static const struct ListNode node1 = {0, 1};
static const struct ListNode node2 = {&node1, 2};
static const struct ListNode node3 = {&node2, 3};
int test() {
  int sum = 0;
  for (const struct ListNode *n = &node3; n != 0; n = n->next)
    sum += n->i;
  return sum;
}
- Now that we look at all the header PHIs, we need to consider all the header PHIs
when deciding that the loop has stopped evolving. Fixes miscompile in the gcc
torture testsuite!
llvm-svn: 142916
introduce no-return or unreachable heuristics.
The return heuristics from the Ball and Larus paper don't work well in
practice as they pessimize early return paths. The only good hitrate
return heuristics are those for:
- NULL return
- Constant return
- negative integer return
Only the last of these three can possibly require significant code for
the returning block, and even the last is fairly rare and usually also
a constant. As a consequence, even for the cold return paths, there is
little code on that return path, and so little code density to be gained
by sinking it. The places where sinking these blocks is valuable (inner
loops) will already be weighted appropriately as the edge is a loop-exit
branch.
All of this aside, early returns are nearly as common as all three of
these return categories, and should actually be predicted as taken!
Rather than muddy the waters of the static predictions, just remain
silent on returns and let the CFG itself dictate any layout or other
issues.
However, the return heuristic was flagging one very important case:
unreachable. Unfortunately it still gave a 1/4 chance of the
branch-to-unreachable occurring. It also didn't do a rigorous job of
finding those blocks which post-dominate an unreachable block.
This patch builds a more powerful analysis that should flag all branches
to blocks known to then reach unreachable. It also has better worst-case
runtime complexity by not looping through successors for each block. The
previous code would perform an N^2 walk in the event of a single entry
block branching to N successors with a switch where each successor falls
through to the next and they finally fall through to a return.
Test case added for noreturn heuristics. Also doxygen comments improved
along the way.
llvm-svn: 142793
instructions.
This doesn't introduce any optimizations we weren't doing before (except
potentially due to pass ordering issues); now passes will eliminate them sooner
as part of their own cleanups.
llvm-svn: 142787
to bring it under direct test instead of merely indirectly testing it in
the BlockFrequencyInfo pass.
The next step is to start adding tests for the various heuristics
employed, and to start fixing those heuristics once they're under test.
llvm-svn: 142778
discussions with Andy. Fundamentally, the previous algorithm is
counterproductive on several fronts and prioritizes things which
aren't necessarily the most important: static branch prediction.
The new algorithm uses the existing loop CFG structure information to
walk through the CFG itself to layout blocks. It coalesces adjacent
blocks within the loop where the CFG allows based on the most likely
path taken. Finally, it topologically orders the block chains that have
been formed. This allows it to choose a (mostly) topologically valid
ordering which still prioritizes fallthrough within the structural
constraints.
As a final twist in the algorithm, it does violate the CFG when it
discovers a "hot" edge, that is an edge that is more than 4x hotter than
the competing edges in the CFG. These are forcibly merged into
a fallthrough chain.
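Roughly, the hot-edge test looks like this sketch (plain integer weights are
assumed here, not the pass's real probability types):

#include <cstdint>

// An edge is "hot" when it is more than 4x hotter than the best competing
// edge out of the same block; such an edge may be merged into a fallthrough
// chain even when that violates the topological order.
static bool isHotEdge(uint64_t EdgeWeight, uint64_t BestCompetingWeight) {
  return EdgeWeight > 4 * BestCompetingWeight;
}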
Future transformations that need to be added are rotation of loop exit
conditions to be fallthrough, and better isolation of cold block chains.
I'm also planning on adding statistics to model how well the algorithm
does at laying out blocks based on the probabilities it receives.
The old tests mostly still pass, and I have some new tests to add, but
the nested loops are still behaving very strangely. This almost seems
like working-as-intended as it rotated the exit branch to be
fallthrough, but I'm not convinced this is actually the best layout. It
is well supported by the probabilities for loops we currently get, but
those are pretty broken for nested loops, so this may change later.
llvm-svn: 142743
element types, even though the element extraction code does. It is surprising
that this bug has been here for so long. Fixes <rdar://problem/10318778>.
llvm-svn: 142740
able to constant fold load instructions where the argument is a constant.
Second, we should be able to watch multiple PHI nodes through the loop; this
patch only supports PHIs in loop headers, more can be done here.
With this patch, we now constant evaluate:
static const int arr[] = {1, 2, 3, 4, 5};
int test() {
  int sum = 0;
  for (int i = 0; i < 5; ++i) sum += arr[i];
  return sum;
}
llvm-svn: 142731
Next step in the ongoing saga of NEON load/store assembly parsing. Handle
VLD1 instructions that take a two-register register list.
Adjust the instruction definitions to only have the single encoded register
as an operand. The super-register from the pseudo is kept as an implicit def,
so passes which come after pseudo-expansion still know that the instruction
defines the other subregs.
llvm-svn: 142670
ZExtPromotedInteger and SExtPromotedInteger based on the operation we legalize.
SetCC return type needs to be legalized via PromoteTargetBoolean.
llvm-svn: 142660
it's a bit more plausible to use this instead of CodePlacementOpt. The
code for this was shamelessly stolen from CodePlacementOpt, and then
trimmed down a bit. There doesn't seem to be much utility in returning
true/false from this pass as we may or may not have rewritten all of the
blocks. Also, the statistic of counting how many loops were aligned
doesn't seem terribly important so I removed it. If folks would like it
to be included, I'm happy to add it back.
This was probably the most egregious of the missing features, and now
I'm going to start gathering some performance numbers and looking at
specific loop structures that have different layout between the two.
Test is updated to include both basic loop alignment and nested loop
alignment.
llvm-svn: 142645
canonical example I used when developing it, and is one of the primary
motivating real-world use cases for __builtin_expect (when buried under
a macro).
I'm working on more test cases here, but I'm trying to make sure both
that the pass is doing the right thing with the test cases and that they
aren't too brittle to changes elsewhere in the code generation pipeline.
Feedback and/or suggestions on how to test this are very welcome.
Especially feedback on whether testing the block comments is a good
strategy; I couldn't find any good examples to steal from but all the
other ideas I had were a lot uglier or more fragile.
llvm-svn: 142644
When checking the availability of instructions using the TLI, a 'promoted'
instruction IS available. It means that the value is bitcasted to another type
for which there is an operation. The correct check for the availability of an
instruction is to check if it should be expanded.
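Something along these lines, using the standard TargetLowering query (a
sketch with assumed names, not copied from the patch):

#include "llvm/Target/TargetLowering.h"  // header path as of this vintage

// An operation is genuinely unavailable only when the target asks for it to
// be expanded; Promote still means the value can be bitcast to a type for
// which the operation exists.
static bool isUnavailable(const llvm::TargetLowering &TLI, unsigned Op,
                          llvm::EVT VT) {
  return TLI.getOperationAction(Op, VT) == llvm::TargetLowering::Expand;
}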
llvm-svn: 142542
On spec/gcc, this caused a codesize improvement of ~1.9% for ARM mode and ~4.9% for Thumb(2) mode. This is
codesize including literal pools.
The pools themselves doubled in size for ARM mode and quintupled for Thumb mode, suggesting that there
is perhaps still redundancy in LLVM's use of constant pools that could be decreased by sharing entries.
Fixes PR11087.
llvm-svn: 142530
Add a Value named "NAME" to each Record. This will be set to the def or defm
name when instantiating multiclasses. This will replace the #NAME# processing
hack once paste functionality is in place.
llvm-svn: 142518
and switches, with arbitrary numbers of successors. Still optimized for
the common case of 2 successors for a conditional branch.
Add a test case for switch metadata showing up in the BlockFrequencyInfo pass.
llvm-svn: 142493
encoding of probabilities. In the absence of metadata, it continues to
fall back on static heuristics.
This allows __builtin_expect, after lowering through llvm.expect to
a branch instruction's metadata, to actually enter the branch
probability model. This is one component of resolving PR2577.
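For reference, the source-level hint that feeds this path looks like the
following (illustrative; handle_error is a made-up cold function):

extern void handle_error(int err);

void check(int err) {
  // __builtin_expect lowers through llvm.expect to branch-weight metadata on
  // the branch, which the branch probability model can now consume.
  if (__builtin_expect(err != 0, 0))  // hint: the error path is cold
    handle_error(err);
}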
llvm-svn: 142492
layer already had support for printing the results of this analysis, but
the wiring was missing.
Now that printing the analysis works, actually bring some of this
analysis, and the BranchProbabilityInfo analysis that it wraps, under
test! I'm planning on fixing some bugs and doing other work here, so
having a nice place to add regression tests and a way to observe the
results is really useful.
llvm-svn: 142491
svn r139159 caused SelectionDAG::getConstant() to promote BUILD_VECTOR operands
with illegal types, even before type legalization. For this testcase, that led
to one BUILD_VECTOR with i16 operands and another with promoted i32 operands,
which triggered the assertion.
llvm-svn: 142370
.file filenumber "directory" "filename"
This removes one join+split of the directory+filename in MC internals. Because
bitcode files have independent fields for directory and filenames in debug info,
this patch may change the .o files written by existing .bc files.
llvm-svn: 142300
NEON immediates are "interesting". Start of the work to handle parsing them
in an 'as' compatible manner. Getting the matcher to play nicely with
these and the floating point immediates from VFP is an extra fun wrinkle.
llvm-svn: 142293
combining of the landingpad instruction. The ObjC personality function acts
almost identically to the C++ personality function. In particular, it uses
"null" as a "catch-all" value.
llvm-svn: 142256
Some code wants to check that *any* call within a function has the 'returns
twice' attribute, not just that the current function has one.
llvm-svn: 142221
profile metadata at the same time. Use it to preserve metadata attached
to a branch when re-writing it in InstCombine.
Add metadata to the canonicalize_branch InstCombine test, and check that
it is transformed correctly.
Reviewed by Nick Lewycky!
llvm-svn: 142168
The decision was to pack the bits. No codegen currently supports this;
at the moment, all of the bits in the vector are saved into the same address
in memory.
llvm-svn: 142149
the X86 asmparser to produce ranges in the one case that was annoying me, for example:
test.s:10:15: error: invalid operand for instruction
movl 0(%rax), 0(%edx)
              ^~~~~~~
It should be straightforward to enhance filecheck, tblgen, and/or the .ll parser to use
ranges where appropriate if someone is interested.
llvm-svn: 142106
Just because we're dealing with a GEP doesn't mean we can assert the
SCEV has a pointer type. The fix is simply to ignore the SCEV pointer
type, which we really didn't need.
Fixes PR11138 webkit crash.
llvm-svn: 142058