Convert the internal representation of the Attributes class into a pointer to an
opaque object that's uniqued by and stored in the LLVMContext object. The
Attributes class then becomes a thin wrapper around this opaque
object. Eventually, the internal representation will be expanded to include
attributes that represent code generation options, etc.
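As a rough illustration of the uniquing scheme described above (class and
member names in this sketch are made up for illustration, not the actual LLVM
internals):
#include <cstdint>
#include <map>
class AttributesImpl {                 // opaque object, owned by the context
  uint64_t Bits;
public:
  explicit AttributesImpl(uint64_t B) : Bits(B) {}
  uint64_t getBits() const { return Bits; }
};
class LLVMContextLike {                // stand-in for LLVMContext
  std::map<uint64_t, AttributesImpl> Uniqued;
public:
  AttributesImpl *getOrCreate(uint64_t Bits) {
    return &Uniqued.try_emplace(Bits, Bits).first->second;
  }
};
class Attributes {                     // thin wrapper: just a pointer
  AttributesImpl *Impl;
  explicit Attributes(AttributesImpl *I) : Impl(I) {}
public:
  static Attributes get(LLVMContextLike &C, uint64_t Bits) {
    return Attributes(C.getOrCreate(Bits));
  }
  bool operator==(Attributes O) const { return Impl == O.Impl; } // uniqued
};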
llvm-svn: 165917
This patch migrates the strcmp and strncmp optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.
llvm-svn: 165915
Erasing from the beginning or middle of a vector is expensive; remove_if can
do it in linear time, even though it's a bit ugly without lambdas.
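For reference, a minimal sketch of the remove_if idiom being described
(illustrative code, not the patch itself):
#include <algorithm>
#include <vector>
static bool isDead(int V) { return V == 0; }   // stand-in predicate
void pruneDead(std::vector<int> &Vals) {
  // remove_if shuffles the survivors to the front in a single pass; the one
  // erase() call then drops the tail, giving linear time overall.
  Vals.erase(std::remove_if(Vals.begin(), Vals.end(), isDead), Vals.end());
}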
No functionality change.
llvm-svn: 165903
The new coalescer can merge a dead def into an unused lane of an
otherwise live vector register.
Clear the <dead> flag when that happens since the flag refers to the
full virtual register which is still live after the partial dead def.
This fixes PR14079.
llvm-svn: 165877
This patch migrates the strchr and strrchr optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.
llvm-svn: 165875
This patch migrates the strcat and strncat optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.
llvm-svn: 165874
This patch implements the new LibCallSimplifier class as outlined in [1].
In addition to providing the new base library simplification infrastructure,
all the fortified library call simplifications were moved over to the new
infrastructure. The rest of the library simplification optimizations will
be moved over with follow up patches.
NOTE: The original fortified library call simplifier located in the
SimplifyFortifiedLibCalls class was not removed because it is still
used by CodeGenPrepare. This class will eventually go away too.
[1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-August/052283.html
llvm-svn: 165873
It is possible that the live range of the value being pruned loops back
into the kill MBB where the search started. When that happens, make sure
that the beginning of KillMBB is also pruned.
Instead of starting a DFS at KillMBB and skipping the root of the
search, start a DFS at each KillMBB successor, and allow the search to
loop back to KillMBB.
This fixes PR14078.
llvm-svn: 165872
type coercion code, especially when targeting ARM. Things like [1
x i32] instead of i32 are very common there.
The goal of this logic is to ensure that when we are picking an alloca
type, we look through such wrapper aggregates and across any zero-length
aggregate elements to find the simplest type possible to form a type
partition.
This logic should (generally speaking) rarely fire. It only ends up
kicking in when an alloca is accessed using two different types (for
instance, i32 and float), and the underlying alloca type has wrapper
aggregates around it. I noticed a significant amount of this occurring when
looking at the code generated for stepanov_abstraction on ARM, and suspect it
happens elsewhere as well.
Note that this doesn't yet address the truly heinous IR productions that
PR14059 is concerned with. Those result in mismatched *sizes* of types in
addition to mismatched access and alloca types.
llvm-svn: 165870
help the dragonegg builders, and no test case at this point, but this
was one dimly plausible case I spotted by inspection. Hopefully I'll get
a testcase from those bots soon-ish, and will tidy this up with proper
testing.
llvm-svn: 165869
X86 doesn't have i8 cmovs so isel would emit a branch. Emitting branches at this
level is often not a good idea because it's too late for many optimizations to
kick in. This solution doesn't add any extensions (truncs are free) and tries
to avoid introducing partial register stalls by filtering direct copyfromregs.
I'm seeing a ~10% speedup on reading a random .png file with libpng15 via
graphicsmagick on x86_64/westmere, but YMMV depending on the microarchitecture.
llvm-svn: 165868
are single value types, the load and store should be directly based upon
the alloca and then bitcasting can fix the type as needed afterward.
This might in theory improve some of the IR coming out of SROA, but
I don't expect big changes yet and don't have any test cases on hand.
This is really just a cleanup/refactoring patch. The next patch will
cause this code path to be hit a lot more, actually get SROA to promote
more allocas and include several more test cases.
llvm-svn: 165864
the interface between the front-end and the MC layer when parsing inline
assembly. Unfortunately, this is too deep into the parsing stack. Specifically,
we're unable to handle target-independent assembly (i.e., assembly directives,
labels, etc.). Note that MatchAndEmitInstruction() isn't the correct
abstraction either. I'll be exposing target-independent hooks shortly, so this
is really just a cleanup.
llvm-svn: 165858
local frame causes problems.
For example:
void f(StructToPass s) {
  g(&s, sizeof(s));
}
will cause problems with the tail call, since part of s is passed via registers
and saved in f's local frame. When g tries to access s, part of s may be
corrupted because f's local frame is popped before the tail call.
The current fix is to disable the tail call if getVarArgsRegSaveSize is not 0
for the caller. This is a conservative approach; if we can prove that the
address of s, or of part of s, is not taken and passed to g, it should be okay
to perform the tail call.
rdar://12442472
llvm-svn: 165853
The backend already pattern matches to form VBSL when it can. We may want to
teach it to use the vbsl intrinsics at some point to prevent machine licm from
mucking with this, but using Expand here is completely correct.
http://llvm.org/bugs/show_bug.cgi?id=13831
http://llvm.org/bugs/show_bug.cgi?id=13961
Patch by Peter Couperus <peter.couperus@st.com>.
llvm-svn: 165845
Completely update one interval at a time instead of collecting live
range fragments to be updated. This avoids building data structures,
except for a single SmallPtrSet of updated intervals.
Also share code between handleMove() and handleMoveIntoBundle().
Add support for moving dead defs across other live values in the
interval. The MI scheduler can do that.
llvm-svn: 165824
PHIElimination inserts IMPLICIT_DEF instructions to guarantee that all
PHI predecessors have a live-out value. These IMPLICIT_DEF values are
not considered to be real interference when coalescing virtual
registers:
%vreg1 = IMPLICIT_DEF
%vreg2 = MOV32r0
When joining %vreg1 and %vreg2, the IMPLICIT_DEF instruction and its
value number should simply be erased since the %vreg2 value number now
provides a live-out value for the PHI predecessor block.
llvm-svn: 165813
This is a temporary hack until Bill's project to record command line options
in the LLVM IR is ready. Clang currently sets a default CPU but that isn't
recorded anywhere and it doesn't get used in the final LTO compilation.
llvm-svn: 165809
On PowerPC, a bitcast of <16 x i8> to i128 may run through a code
path in ExpandRes_BITCAST that attempts to do an intermediate
bitcast to a <4 x i32> vector, and then construct the Hi and Lo parts
of the resulting i128 by pairing up two of those i32 vector elements
each. The code already recognizes that on a big-endian system, the
first two vector elements form the Hi part, and the final two vector
elements form the Lo part (vice-versa from the little-endian situation).
However, we also need to take endianness into account when forming each
of those separate pairs: on a big-endian system, vector element 0 is
the *high* part of the pair making up the Hi part of the result, and
vector element 1 is the low part of the pair. The code currently always
uses vector element 0 as the low part and vector element 1 as the high
part, as is appropriate for little-endian platforms only.
This patch fixes the problem by swapping the vector elements as they are
paired up, as appropriate for the target's endianness.
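The pairing can be illustrated with the following standalone sketch (plain
C++, not the legalizer code), where E[0..3] are the four 32-bit lanes in
vector order:
#include <cstdint>
struct Parts { uint64_t Hi, Lo; };
Parts pairUp(const uint32_t E[4], bool BigEndian) {
  Parts P;
  if (BigEndian) {
    // Lanes 0-1 form Hi and lanes 2-3 form Lo; within each pair the
    // lower-numbered lane supplies the *high* 32 bits.
    P.Hi = (uint64_t(E[0]) << 32) | E[1];
    P.Lo = (uint64_t(E[2]) << 32) | E[3];
  } else {
    // Little-endian: lanes 0-1 form Lo and lanes 2-3 form Hi; the
    // lower-numbered lane supplies the *low* 32 bits.
    P.Lo = (uint64_t(E[1]) << 32) | E[0];
    P.Hi = (uint64_t(E[3]) << 32) | E[2];
  }
  return P;
}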
llvm-svn: 165802
DependenceAnalysis.cpp:1164:32: warning: implicit truncation from 'int' to bitfield changes value from -5 to 3
[-Wconstant-conversion]
Result.DV[Level].Direction &= ~Dependence::DVEntry::GT;
^ ~~~~~~~~~~~~~~~~~~~~~~~~
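A minimal illustration of why this warning fires and one way to silence it
(illustrative only, not necessarily what the patch itself does):
enum { LT = 1, EQ = 2, GT = 4, ALL = 7 };
struct Entry { unsigned Direction : 3; };     // 3-bit bitfield, values 0..7
void clearGT(Entry &E) {
  // ~GT is the int -5; storing it into a 3-bit field implicitly truncates it
  // to 3, which is what -Wconstant-conversion warns about:
  //   E.Direction &= ~GT;
  // Masking the constant to the field's width keeps the same result (3) and
  // avoids the warning:
  E.Direction &= ~GT & ALL;
}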
Patch from Preston Briggs <preston.briggs@gmail.com>.
llvm-svn: 165784
not legal. However, it should use a div instruction + mul + sub if divide is
legal. The rem legalization code was missing a check and incorrectly uses a
divrem libcall even when div is legal.
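The expansion in question is just the identity below (a plain C++ sketch of
the arithmetic, not the legalizer code):
int expandRem(int X, int Y) {
  int Q = X / Y;       // single legal div instruction
  return X - Q * Y;    // rem = X - (X / Y) * Y, i.e. div + mul + sub
}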
rdar://12481395
llvm-svn: 165778
The intent of this document is to be the go-to document for anybody who
wants to write new documentation, but isn't familiar with Sphinx.
llvm-svn: 165775
* Fix confusing explanation regarding abstract classes.
* Clarify auto-upcasting and why `Shape` doesn't need a `classof()`.
* Add section `Rules of Thumb` with some quick summary tips.
llvm-svn: 165768
Recent changes to isa<>/dyn_cast<> have made unnecessary those classof()
of the form:
static bool classof(const Foo *) { return true; }
Accordingly, remove mention of such classof() from the documentation.
llvm-svn: 165766
Additionally, all such cases are handled with no dynamic check.
All `classof()` of the form
class Foo {
  [...]
  static bool classof(const Bar *) { return true; }
  [...]
};
where Foo is an ancestor of Bar are no longer necessary.
Don't write them!
Note: The exact test is `is_base_of<Foo, Bar>`, which is non-strict, so
that Foo is considered an ancestor of itself.
This leads to the following rule of thumb for LLVM-style RTTI:
The argument type of `classof()` should be a strict ancestor of the class in
which it is declared.
For more information about implementing LLVM-style RTTI, see
docs/HowToSetUpLLVMStyleRTTI.rst
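As a small sketch of that rule of thumb (using the Shape/Circle example from
that doc; illustrative only):
class Shape {
public:
  enum ShapeKind { SK_Square, SK_Circle };
  Shape(ShapeKind K) : Kind(K) {}
  ShapeKind getKind() const { return Kind; }
private:
  const ShapeKind Kind;
};
class Circle : public Shape {
public:
  Circle() : Shape(SK_Circle) {}
  // The argument type (Shape) is a strict ancestor of Circle, and the body
  // performs a real dynamic check. A classof(const Circle *) returning true
  // would add nothing: isa<Circle> on a Circle* is now resolved statically.
  static bool classof(const Shape *S) { return S->getKind() == SK_Circle; }
};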
llvm-svn: 165765
This classof() is effectively saying that a MachineCodeEmitter "is-a"
JITEmitter, but JITEmitter is in fact a descendant of
MachineCodeEmitter, so this is not semantically correct. Consequently,
none of the assertions that rely on these classof() actually check
anything.
Remove the RTTI (which didn't actually check anything) and use
static_cast<> instead.
Post-Mortem Bug Analysis
========================
Cause of the bug
----------------
r55022 appears to be the source of the classof() and assertions removed
by this commit. It aimed at removing some dynamic_cast<> that were
solely in the assertions. A typical diff hunk from that commit looked
like:
- assert(dynamic_cast<JITEmitter*>(MCE) && "Unexpected MCE?");
- JITEmitter *JE = static_cast<JITEmitter*>(getCodeEmitter());
+ assert(isa<JITEmitter>(MCE) && "Unexpected MCE?");
+ JITEmitter *JE = cast<JITEmitter>(getCodeEmitter());
Hence, the source of the bug then seems to be an attempt to replace
dynamic_cast<> with LLVM-style RTTI without properly setting up the
class hierarchy for LLVM-style RTTI. The bug therefore appears to be
simply a "thinko".
What initially indicated the presence of the bug
------------------------------------------------
After implementing automatic upcasting for isa<>, classof() functions of
the form
static bool classof(const Foo *) { return true; }
were removed, since they only serve the purpose of optimizing
statically-OK upcasts. A subsequent recompilation triggered a build
failure on the isa<> tests within the removed asserts, since the
automatic upcasting (correctly) failed to substitute this classof().
Key to pinning down the root cause of the bug
---------------------------------------------
After being alerted to the presence of the bug, some thought about the
semantics which were being asserted by the buggy classof() revealed that
it was incorrect.
How the bug could have been prevented
-------------------------------------
This bug could have been prevented by better documentation for how to
set up LLVM-style RTTI. This should be solved by the recently added
documentation HowToSetUpLLVMStyleRTTI. However, this bug suggests that
the documentation should clearly explain the contract that classof()
must fulfill. The HowToSetUpLLVMStyleRTTI already explains this
contract, but it is a little tucked away. A future patch will expand
that explanation and make it more prominent.
There does not appear to be a simple way to have the compiler prevent
this bug, since fundamentally it boiled down to a spurious classof()
where the programmer made an erroneous statement about the conversion.
This suggests that perhaps the classof() interface to LLVM-style RTTI
is not the best. There is already some evidence for this, since in a
number of places Clang has classof() forward to classofKind(Kind K)
which evaluates the cast in terms of just the Kind. This could probably
be generalized to simply a `static const Kind MyKind;` field in leaf
classes and `static const Kind firstMyKind, lastMyKind;` for non-leaf
classes, and have the rest of the work be done inside Casting.h,
assuming that the Kind enum is laid out in a preorder traversal of the
inheritance tree.
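A rough sketch of what that generalization might look like (everything here is
hypothetical; the range check would live in Casting.h, the Kind enum is
assumed to be laid out in preorder, and a leaf's single MyKind is modeled as a
degenerate range):
enum Kind { K_Expr, K_CallExpr, K_CastExpr, K_ReturnStmt };
struct Expr {                    // non-leaf: covers its whole preorder subtree
  static const Kind firstKind = K_Expr;
  static const Kind lastKind  = K_CastExpr;
};
struct CallExpr {                // leaf: the range collapses to a single Kind
  static const Kind firstKind = K_CallExpr;
  static const Kind lastKind  = K_CallExpr;
};
template <typename T> bool kindIsA(Kind K) {
  // The generic check Casting.h could perform instead of per-class classof().
  return T::firstKind <= K && K <= T::lastKind;
}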
llvm-svn: 165764
When all cases of a switch statement are dead, the weights vector only has one
element, and we will get an assertion failure when calling createBranchWeights.
llvm-svn: 165759
to the instruction position. The old encoding would give an absolute
ID which counts up within a function, and only resets at the next function.
I.e., instead of having:
... = icmp eq i32 n-1, n-2
br i1 ..., label %bb1, label %bb2
it will now be roughly:
... = icmp eq i32 1, 2
br i1 1, label %bb1, label %bb2
This makes it so that IDs remain relatively small and can be encoded
in fewer bits.
With this encoding, forward reference operands will be given
negative-valued IDs. Use signed VBRs for the most common case
of forward references, which is phi instructions.
To retain backward compatibility we bump the bitcode version
from 0 to 1 to distinguish between the different encodings.
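The signed-VBR trick amounts to folding the sign into the low bit before the
usual variable-width emission, so small negative (forward-reference) IDs stay
small. A standalone sketch of one such mapping (not the bitcode writer itself):
#include <cstdint>
uint64_t encodeSignedVBRValue(int64_t RelativeID) {
  // Non-negative IDs map to even values, negative IDs to odd values, so the
  // magnitude (and thus the number of VBR chunks) stays proportional to the
  // distance from the current instruction.
  return RelativeID >= 0 ? uint64_t(RelativeID) << 1
                         : (uint64_t(-RelativeID) << 1) | 1;
}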
llvm-svn: 165739
Not all instructions define a virtual register in their first operand.
Specifically, INLINEASM has a different format.
<rdar://problem/12472811>
llvm-svn: 165721
For function calls on the 64-bit PowerPC SVR4 target, each parameter
is mapped to as many doublewords in the parameter save area as
necessary to hold the parameter. The first 13 non-varargs
floating-point values are passed in registers; any additional
floating-point parameters are passed in the parameter save area. A
single-precision floating-point parameter (32 bits) must be mapped to
the second (rightmost, low-order) word of its assigned doubleword
slot.
Currently LLVM violates this ABI requirement by mapping such a
parameter to the first (leftmost, high-order) word of its assigned
doubleword slot. This is internally self-consistent but will not
interoperate correctly with libraries compiled with an ABI-compliant
compiler.
This patch corrects the problem by adjusting the parameter addressing
on both sides of the calling convention.
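The addressing fix boils down to a small offset adjustment; a hedged sketch of
the arithmetic (not the actual lowering code):
unsigned inSlotOffset(unsigned ParamSize, unsigned SlotSize = 8) {
  // On a big-endian target a value narrower than its doubleword slot must sit
  // in the low-order (rightmost) bytes, i.e. at the end of the slot.
  return ParamSize < SlotSize ? SlotSize - ParamSize : 0;
}
// A 4-byte float in an 8-byte slot therefore lands at byte offset 4, the
// second word, rather than at offset 0.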
llvm-svn: 165714
Note: [D]M{T,F}CP2 is just a recommended encoding. Vendors often provide a
custom CP2 that interprets instructions differently and may wish to add their
own instructions that use this opcode. We should ensure that this is easy to
do. I will probably add a 'has custom CP{0-3}' subtarget flag to make this
easy: We want to avoid the GCC situation where every MIPS vendor makes a custom
fork that breaks every other MIPS CPU and so can't be merged upstream.
llvm-svn: 165711
Patch from Preston Briggs <preston.briggs@gmail.com>.
This is an updated version of the dependence-analysis patch, including an MIV
test based on Banerjee's inequalities.
It's a fairly complete implementation of the paper
Practical Dependence Testing
Gina Goff, Ken Kennedy, and Chau-Wen Tseng
PLDI 1991
It cannot yet propagate constraints between coupled RDIV subscripts (discussed
in Section 5.3.2 of the paper).
It's organized as a FunctionPass with a single entry point that supports testing
for dependence between two instructions in a function. If there's no dependence,
it returns null. If there's a dependence, it returns a pointer to a Dependence
which can be queried about details (what kind of dependence, is it loop
independent, direction and distance vector entries, etc). I haven't included
every imaginable feature, but there's a good selection that should be adequate
for supporting many loop transformations. Of course, it can be extended as
necessary.
Included in the patch file are many test cases, commented with C code showing
the loops and array references.
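A rough sketch of how a client pass might query the interface described above
(method names follow this description; the exact signatures, headers, and
ownership rules in the tree may differ):
#include "llvm/Analysis/DependenceAnalysis.h"
using namespace llvm;
void checkPair(DependenceAnalysis &DA, Instruction *Src, Instruction *Dst) {
  Dependence *D = DA.depends(Src, Dst, /*PossiblyLoopIndependent=*/true);
  if (!D)
    return;                              // null means no dependence
  if (!D->isLoopIndependent())
    for (unsigned Level = 1, E = D->getLevels(); Level <= E; ++Level)
      (void)D->getDirection(Level);      // per-level direction vector entry
  delete D;                              // assumed caller-owned in this sketch
}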
llvm-svn: 165708
value but later turns out to be a function.
Unfortunately, we can't fold tests into a single file because we only get one
error out of llvm-as.
llvm-svn: 165680
Original message:
The attached is the fix for radar://11663049. The optimization can be outlined
by the following rules:
(select (x != c), e, c) -> (select (x != c), e, x)
(select (x == c), c, e) -> (select (x == c), x, e)
where <c> is an integer constant.
The reason for this change is that, on x86, a conditional move from a constant
needs two instructions, whereas a conditional move from a register needs only
one instruction.
While LowerSELECT() sounds like the most convenient place for this
optimization, it turns out to be a bad place. The reason is that by replacing
the constant <c> with a symbolic value, it obscures some instruction-combining
opportunities which would otherwise be very easy to spot. For that reason, I
have to postpone the change to the last instruction-combining phase.
The change passes the tests of "make check-all -C <build-root>/test" and
"make -C project/test-suite/SingleSource".
llvm-svn: 165661
the compiler makes use of GPR0. However, there are two flavors of
GPR0 defined by the target: the 32-bit GPR0 (R0) and the 64-bit GPR0
(X0). The spill/reload code makes use of R0 regardless of whether we
are generating 32- or 64-bit code.
This patch corrects the problem in the obvious manner, using X0 and
ADDI8 for 64-bit and R0 and ADDI for 32-bit.
llvm-svn: 165658
the Altivec extensions were introduced. Its use is optional, and
allows the compiler to communicate to the operating system which
vector registers should be saved and restored during a context switch.
In practice, this information is ignored by the various operating
systems using the SVR4 ABI; the kernel saves and restores the entire
register state. Setting the VRSAVE register is no longer performed by
the AIX XL compilers, the IBM i compilers, or by GCC on Power Linux
systems. It seems best to avoid this logic within LLVM as well.
This patch avoids generating code to update and restore VRSAVE for the
PowerPC SVR4 ABIs (32- and 64-bit). The code remains in place for the
Darwin ABI.
llvm-svn: 165656
The minimum set of required instructions is ISD::AND, ISD::OR, ISD::SETO (or ISD::SETOEQ) and ISD::SETUO (or ISD::SETUNE). Everything is expanded into one of two patterns:
Pattern 1: (LHS CC1 RHS) Opc (LHS CC2 RHS)
Pattern 2: (LHS CC1 LHS) Opc (RHS CC2 RHS)
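For instance, an "ordered and not equal" (SETONE) compare fits Pattern 1; a
plain C++ analogue of that expansion:
#include <cmath>
bool setONE(float LHS, float RHS) {
  bool Ordered = !std::isnan(LHS) && !std::isnan(RHS); // LHS SETO   RHS
  bool UnordNE = LHS != RHS;                           // LHS SETUNE RHS
  return Ordered && UnordNE;                           // combined with ISD::AND
}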
llvm-svn: 165655
Some of these dyn_cast<>'s would be better phrased as isa<> or cast<>.
That will happen in a future patch.
There are also two dyn_cast_or_null<>'s slipped in instead of
dyn_cast<>'s, since they were causing crashes with just dyn_cast<>.
llvm-svn: 165646
Hypothesis 1: use of `.. code::` directive instead of `.. code-block::`
is causing Sphinx to discard the block. On my machine, `.. code::`
renders fine. However, I don't think that `.. code::` is actually a
legit Sphinx directive. I believe that on my machine Sphinx is falling
back to just displaying it in monospace with no syntax highlighting, whereas
llvm.org's Sphinx is just discarding it.
This is truly "remote debugging" since I can't reproduce this on my
machine. It would be helpful to be able to see the llvm.org Sphinx
build logs; if that's possible please let me know.
llvm-svn: 165632
- Due to the current constraints in ISD::FP_ROUND on matching vector
element counts, rounding from v2f64 to v4f32 (after legalization from
v2f32) is scalarized. Add a customized v2f32 widening to convert it
into a target-specific X86ISD::VFPROUND to work around these
constraints.
llvm-svn: 165631
- Due to the current constraints in ISD::FP_EXTEND on matching vector element
counts, extending from v2f32 to v2f64 is scalarized. Add a customized v2f32
widening to convert it into a target-specific X86ISD::VFPEXT to work around
these constraints. This patch also reverts a previous attempt to fix this issue by
recovering the scalarized ISD::FP_EXTEND pattern and thus significantly
reduces the overhead of supporting non-power-2 vector FP extend.
llvm-svn: 165625
The SDNode for LDRB_POST_IMM is invalid: the number of operands added to the
SDNode is fewer than described in the .td file.
7 operands are needed, but an SDNode with only 6 is created.
In more detail:
In ARMInstrInfo.td, in multiclass AI2_ldridx, in the _POST_IMM definition, the
offset operand is defined as am2offset_imm. am2offset_imm is a complex operand
type, and it actually consists of a dummy register and the immediate itself. As
I understand it, the trick with the dummy register was made for the AsmParser.
In ARMISelLowering.cpp, this dummy register was not added to the SDNode, which
caused a crash in the Peephole Optimizer pass.
The problem is fixed by adding the extra dummy register when emitting the
LDRB_POST_IMM instruction.
llvm-svn: 165617
ScheduleDAGInstrs::buildSchedGraph ignores dependencies between FixedStack
objects and byval parameters. So a load of a byval parameter from the stack may
be inserted *before* the corresponding store, since these operations are
treated as independent.
Fix:
Currently ARMTargetLowering::LowerFormalArguments saves byval registers with
FixedStack MachinePointerInfo. To fix the problem we need to store byval
registers with MachinePointerInfo referencing the first "byval" parameter.
This commit also adds two new fields to the InputArg structure: the function
argument's index and the InputArg's part offset in bytes relative to the start
of the function argument. E.g., if a function argument is 128 bits wide and is
split into 32-bit registers, we get 4 InputArg structs with the same argument
index but different offset values.
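Illustration of the two new fields for the example above (field names follow
this description and should be treated as assumptions):
struct InputArgInfo {
  unsigned OrigArgIndex;  // index of the original function argument
  unsigned PartOffset;    // byte offset of this part within that argument
};
// A 128-bit argument split into four 32-bit pieces yields four entries with
// the same index but different offsets:
InputArgInfo Parts[4] = { {0, 0}, {0, 4}, {0, 8}, {0, 12} };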
llvm-svn: 165616
Allows the new machine model to be used for NumMicroOps and OutputLatency.
Allows the HazardRecognizer to be disabled along with itineraries.
llvm-svn: 165603