All libm floating-point rounding functions, except for round(), had their own
ISD nodes. Recent PowerPC cores have an instruction for round(), and so here I'm
adding ISD::FROUND so that round() can be custom lowered as well.
For the most part, this is straightforward. I've added an intrinsic
and a matching ISD node just like those for nearbyint() and friends. I've named
the SelectionDAG pattern frnd (because ISD::FP_ROUND has already claimed
fround).
This will be used by the PowerPC backend in a follow-up commit.
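As a minimal illustration (hypothetical function name, not taken from the patch), this is the kind of source that ends up as the @llvm.round intrinsic and then as an ISD::FROUND node, which a target with a native rounding instruction can match directly:

  #include <cmath>

  // Hypothetical example: a plain libm round() call; with the new intrinsic
  // and ISD node, a backend can lower it to a single instruction.
  double round_it(double x) {
    return std::round(x);  // -> @llvm.round.f64 -> ISD::FROUND in the DAG
  }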
llvm-svn: 187926
This fix is very lightweight. The same fix already existed for AddRec
but was missing for NAry expressions.
This is obviously an improvement, and I'm unsure how to write a test for
compile-time problems.
Patch by Xiaoyi Guo!
llvm-svn: 187475
Call into ComputeMaskedBits to figure out which bits are set on both add
operands and determine whether the value is a power-of-two-or-zero.
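A hand-worked instance of the reasoning this enables (illustrative only, plain integers rather than the LLVM API): if the known bits of both add operands show that every bit except, say, bit 3 is known zero, then each operand is either 0 or 8, so the sum is 0, 8, or 16, always a power of two or zero.

  #include <cassert>

  int main() {
    // Both operands constrained to {0, 8}: only bit 3 can possibly be set.
    unsigned vals[2] = {0u, 8u};
    for (int i = 0; i < 2; ++i)
      for (int j = 0; j < 2; ++j) {
        unsigned s = vals[i] + vals[j];
        assert((s & (s - 1)) == 0);  // at most one bit set: power of two or zero
      }
    return 0;
  }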
llvm-svn: 187445
Split BasicBlockUtils into an analysis-half and a transforms-half, and put the
analysis bits into a new Analysis/CFG.{h,cpp}. Promote isPotentiallyReachable
into llvm::isPotentiallyReachable and move it into Analysis/CFG. Also add unit
tests for it.
llvm-svn: 187283
Merge consecutive if-regions if they contain identical statements.
Both transformations reduce the number of branches. The transformation is
guarded by a target hook and is currently enabled only for the R600 target,
but its correctness has been tested on the X86 target using a variety of
CPU benchmarks.
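A hedged illustration of the general idea (not taken from the patch; the exact pattern it matches may differ): two consecutive if-regions whose bodies are the identical statement can be merged under a combined condition, trading two branches for one. The body here is a store of the same value, so executing it once or twice is equivalent (assuming p does not alias a or b).

  void before(int a, int b, int *p, int v) {
    if (a) { *p = v; }
    if (b) { *p = v; }
  }

  // After merging the two if-regions:
  void after(int a, int b, int *p, int v) {
    if (a || b) { *p = v; }
  }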
Patch by: Mei Ye
llvm-svn: 187278
The great thing about the SCEVAddRec No-Wrap flag (unlike nsw/nuw) is
that it can be preserved while normalizing (reassociating and
factoring).
The bad thing is that it can't be transferred back to IR, which is one
of the reasons I don't like the concept of SCEVExpander.
Sorry, I can't think of a direct way to test this, which is why these
were FIXMEs for so long. I just think it's a good time to finally
clean it up.
llvm-svn: 186273
Address calculation for gather/scatter in vectorized code can incur a
significant cost, making vectorization unprofitable. Add infrastructure to
model this cost.
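The kind of loop this is aimed at (hypothetical example, not from the patch): the indexed load a[idx[i]] becomes a gather when vectorized, and every vector lane needs its own address computation, which on some targets makes the vector form a net loss.

  // Each lane of the vectorized loop must compute &a[idx[i]] separately
  // before the gather load can be issued.
  void f(float *out, const float *a, const int *idx, int n) {
    for (int i = 0; i < n; ++i)
      out[i] = a[idx[i]];
  }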
Tests and cost model for targets will be in follow-up commits.
radar://14351991
llvm-svn: 186187
ScalarEvolution::getSignedRange uses ComputeNumSignBits from ValueTracking on
ashr instructions. ComputeNumSignBits can return zero, but this case was not
handled correctly by the code in getSignedRange which was calling:
APInt::getSignedMinValue(BitWidth).ashr(NS - 1)
with NS = 0, resulting in an assertion failure in APInt::ashr.
Now, we just return the conservative result (as with NS == 1).
Another bug found by llvm-stress.
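A minimal sketch of the failure mode (assumes the LLVM headers; this is not the in-tree test case): with NS == 0, the unsigned expression NS - 1 wraps around to a huge shift amount, which used to trip the assertion inside APInt::ashr.

  #include "llvm/ADT/APInt.h"
  using llvm::APInt;

  // Hypothetical repro: calling this with NS == 0 reaches the problematic
  // expression and asserts inside ashr().
  APInt demo(unsigned BitWidth, unsigned NS) {
    return APInt::getSignedMinValue(BitWidth).ashr(NS - 1);
  }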
llvm-svn: 185955
(add nsw x, (and x, y)) isn't a power of two if x is zero; it's zero.
(add nsw x, (xor x, y)) isn't a power of two if y has bits set that aren't set in x.
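Spelling these out with plain integers (a hand-checked illustration, not from the test suite): for the first case, x = 0 gives x + (x & y) = 0, and zero is not a power of two; for the second, with x = 4 and y = 3, y has bits set that are not in x, and x + (x ^ y) = 4 + 7 = 11.

  #include <cassert>

  int main() {
    unsigned y = 3;
    unsigned x0 = 0;
    assert(x0 + (x0 & y) == 0u);    // zero is not a power of two
    unsigned x = 4;
    assert(x + (x ^ y) == 11u);     // 11 is not a power of two
    return 0;
  }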
llvm-svn: 185954
The symptom is a segfault, and the root cause is that a SCEV contains a SCEVUnknown
which holds a null pointer to an llvm::Value.
This is how the problem takes place:
===================================
1) In the pristine input IR, there are two relevant instructions, Op1 and Op2:
Op1's corresponding SCEV (denoted as SCEV(Op1)) is a SCEVUnknown, and
SCEV(Op2) contains SCEV(Op1). Neither of these instructions is dead.
Op1 : V1 = ...
...
Op2 : V2 = ... // directly or indirectly (data-flow) depends on Op1
2) Optimizer (LSR in my case) generates an instruction holding the equivalent
value of Op1, making Op1 dead.
Op1': V1' = ...
Op1 : V1 = ... // now dead
Op2 : V2 = ... // now depends on Op1', but SCEV(Op2) still contains SCEV(Op1)
3) Op1 is deleted, and the callback function is called to reset
SCEV(Op1) to indicate it is invalid. However, SCEV(Op2) is not
invalidated as well.
4) A following pass gets the cached, invalid SCEV(Op2), tries to manipulate it,
and causes the segfault.
The fix:
========
It seems there is no clean yet inexpensive fix. I wrote to the dev list
soliciting a good solution, but unfortunately got no response. So I decided to
fix this problem in a brute-force way:
When ScalarEvolution::getSCEV is called, check whether the cached SCEV
contains an invalid SCEVUnknown; if so, remove the cached SCEV and
re-evaluate the SCEV from scratch.
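A rough sketch of that brute-force approach (names simplified: lookupCachedSCEV, eraseCachedSCEV, and containsInvalidSCEVUnknown are hypothetical stand-ins for the real cache plumbing; createSCEV is the usual re-derivation path; headers and class context omitted):

  const SCEV *ScalarEvolution::getSCEV(Value *V) {
    const SCEV *S = lookupCachedSCEV(V);        // hypothetical cache lookup
    if (S && containsInvalidSCEVUnknown(S)) {   // a SCEVUnknown holding a null Value*
      eraseCachedSCEV(V);                       // drop the stale entry
      S = nullptr;
    }
    if (!S)
      S = createSCEV(V);                        // re-evaluate from the IR
    return S;
  }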
I compiled a bunch of big *.c and *.cpp files and, fortunately, saw no increase
in compile time.
Misc:
=====
The reduced test case has 2357 lines of code plus other stuff, too big to commit.
rdar://14283433
llvm-svn: 185843
The Builtin attribute is an attribute that can be placed on a function call site
to signal that, even though the called function is declared as not being a builtin
(i.e. it carries the nobuiltin attribute on its declaration), the call at this
site should nevertheless be treated as a call to a builtin.
rdar://problem/13727199
llvm-svn: 185049
This is a band-aid to fix the most severe regressions we're seeing from basing
spill decisions on block frequencies, until we have a better solution.
llvm-svn: 184835
Fixes rdar:14036816, PR16130.
There is an opportunity to compute precise trip counts for 'or'
expressions and multi-exit loops.
rdar:14038809: Optimize trip count computation for multi-exit loops.
To do this we need to record the fact that ExitLimit assumes NSW. When
it does not, we can safely assume that the loop trip count is the
minimum ExitLimit across all subexpressions and loop exits.
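An illustrative shape (hypothetical function, not a test from the patch): the backedge is controlled by an 'or' of two exit conditions, so it can be analyzed much like a loop with two separate exits.

  // The loop exits when i == n || i == m, so the trip count is bounded by the
  // smaller of the two individual exit limits.
  void f(int *a, int n, int m) {
    for (int i = 0; i != n && i != m; ++i)
      a[i] = 0;
  }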
llvm-svn: 183060
Account for the cost of the scaling factor in Loop Strength Reduce when rating
the formulae. This uses a target hook.
The default implementation of the hook is: if the addressing mode is legal, the
scaling factor is free.
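A minimal sketch of that default policy (the exact hook name and signature in the target interfaces may differ; treat this as an assumption rather than the in-tree code): a legal addressing mode folds the scale into the address for free, anything else is reported as costly.

  // Sketch only: a negative return value means the addressing mode (and hence
  // the scale) is not supported; 0 means the scaling factor is free.
  int getScalingFactorCost(const AddrMode &AM, Type *Ty) const {
    if (isLegalAddressingMode(AM, Ty))
      return 0;   // legal addressing mode: scaling is free
    return -1;    // not legal: using this scale has a cost
  }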
<rdar://problem/13806271>
llvm-svn: 183045
Fixes PR16130 - clang produces incorrect code with loop/expression at -O2.
This is a 2+ year old bug that's now holding up the release. It's a
case where we knowingly made aggressive assumptions about undefined
behavior. These assumptions are wrong when SCEV is computing a
subexpression that does not directly control the branch. With this
fix, we avoid making assumptions in those cases but still optimize the
common case. SCEV's trip count computation for exits controlled by
'or' expressions is now analogous to the trip count computation for
loops with multiple exits. I had already fixed the multiple exit case
to be conservative.
llvm-svn: 182989
- llvm.loop.parallel metadata has been renamed to llvm.loop to be more generic
by making it the root of additional loop metadata.
- Loop::isAnnotatedParallel now looks for llvm.loop and associated
llvm.mem.parallel_loop_access
- document llvm.loop and update llvm.mem.parallel_loop_access
- add support for llvm.vectorizer.width and llvm.vectorizer.unroll
- document llvm.vectorizer.* metadata
- add utility class LoopVectorizerHints for getting/setting loop metadata
- use llvm.vectorizer.width=1 to indicate already vectorized instead of
already_vectorized
- update existing tests that used llvm.loop.parallel and
llvm.vectorizer.already_vectorized
Reviewed by: Nadav Rotem
llvm-svn: 182802
Other than recognizing the new 'cold' attribute, the patch does little else.
It changes the branch probability analyzer so that edges into
blocks postdominated by a cold function are given low weight.
Added analysis and code generation tests. Added documentation for the
new attribute.
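A small C-level illustration (hypothetical functions; __attribute__((cold)) is the Clang/GCC spelling that maps to the IR-level attribute): the branch into a block that ends in a call to a cold function is treated as unlikely.

  __attribute__((cold)) void report_failure(const char *msg);

  void check(int ok) {
    if (!ok)                                // this edge gets a low weight
      report_failure("invariant violated"); // block calls a cold function
  }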
llvm-svn: 182638