Use the new parameter on Function::getIntrinsicID to identify cases where
a function is being called with an "llvm." name but it isn't actually an
intrinsic. In such cases, generate an error.
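A minimal sketch of the kind of check this enables (the noAssert flag name and
the checkIntrinsicCallee helper are assumptions for illustration, not the
actual parser code):

#include "llvm/Function.h"
#include <string>
using namespace llvm;

// Hypothetical helper: returns an error message for a call to a bogus
// "llvm."-prefixed function, or an empty string if the callee is fine.
static std::string checkIntrinsicCallee(const Function *F) {
  // Ask for the intrinsic ID without asserting on unrecognized "llvm." names.
  if (F->getName().substr(0, 5) == "llvm." &&
      F->getIntrinsicID(/*noAssert=*/true) == 0)
    return "Call to invalid intrinsic function '" + F->getName() + "'";
  return std::string();
}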
llvm-svn: 36121
Don't assert every time an intrinsic name isn't recognized. Instead, make
the assert optional when calling getIntrinsicID(). This allows the assembler
to handle invalid intrinsic names gracefully.
llvm-svn: 36120
This sinks the two stores in this example into a single store in cond_next. In this
case, it allows elimination of the load as well:
  store double 0.000000e+00, double* @s.3060
  %tmp3 = fcmp ogt double %tmp1, 5.000000e-01   ; <i1> [#uses=1]
  br i1 %tmp3, label %cond_true, label %cond_next
cond_true:                                      ; preds = %entry
  store double 1.000000e+00, double* @s.3060
  br label %cond_next
cond_next:                                      ; preds = %entry, %cond_true
  %tmp6 = load double* @s.3060                  ; <double> [#uses=1]
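After the transform, both stores are sunk into cond_next and merged through a
phi, and the load can then be replaced by that phi. Roughly (the %storemerge
name is just illustrative):

cond_next:                                      ; preds = %entry, %cond_true
  %storemerge = phi double [ 1.000000e+00, %cond_true ], [ 0.000000e+00, %entry ]
  store double %storemerge, double* @s.3060
  ; %tmp6 now simplifies to %storemerge, so the load goes away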
This implements Transforms/InstCombine/store-merge.ll:test2
llvm-svn: 36040
out to do! :)
This fixes a problem where LSR would insert a bunch of code into each MBB
that uses a particular subexpression (e.g. IV+base+C). The problem is that
this code cannot be CSE'd back together if inserted into different blocks.
This patch changes LSR to attempt to insert a single copy of this code and
share it, allowing codegenprepare to duplicate the code if it can be sunk
into various addressing modes. On CodeGen/ARM/lsr-code-insertion.ll,
for example, this gives us code like:
        add r8, r0, r5
        str r6, [r8, #+4]
        ..
        ble LBB1_4 @cond_next
LBB1_3: @cond_true
        str r10, [r8, #+4]
LBB1_4: @cond_next
        ...
LBB1_5: @cond_true55
        ldr r6, LCPI1_1
        str r6, [r8, #+4]
instead of:
        add r10, r0, r6
        str r8, [r10, #+4]
        ...
        ble LBB1_4 @cond_next
LBB1_3: @cond_true
        add r8, r0, r6
        str r10, [r8, #+4]
LBB1_4: @cond_next
        ...
LBB1_5: @cond_true55
        add r8, r0, r6
        ldr r10, LCPI1_1
        str r10, [r8, #+4]
Besides being smaller and more efficient, this makes it immediately
obvious that it is profitable to predicate LBB1_3 now :)
llvm-svn: 35972
This fixes problems where codegenprepare would sink expressions into loads/stores
where they are not valid, and fixes cases where it would miss important valid ones.
This fixes several serious code-size and performance issues, particularly on targets
with complex addressing modes like ARM and x86. For example, now we compile
CodeGen/X86/isel-sink.ll to:
_test:
        movl 8(%esp), %eax
        movl 4(%esp), %ecx
        cmpl $1233, %eax
        ja LBB1_2 #F
LBB1_1: #T
        movl $4, (%ecx,%eax,4)
        movl $141, %eax
        ret
LBB1_2: #F
        movl (%ecx,%eax,4), %eax
        ret
instead of:
_test:
        movl 8(%esp), %eax
        leal (,%eax,4), %ecx
        addl 4(%esp), %ecx
        cmpl $1233, %eax
        ja LBB1_2 #F
LBB1_1: #T
        movl $4, (%ecx)
        movl $141, %eax
        ret
LBB1_2: #F
        movl (%ecx), %eax
        ret
llvm-svn: 35970
This can happen for intrinsics that are overloaded. In such cases, it is
necessary to emit a function prototype before the body of the function
that calls the intrinsic, and to ensure we don't emit it multiple times.
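The bookkeeping this needs is just an emit-once set keyed by the intrinsic's
name; a minimal sketch (the names below are illustrative, not the actual
CWriter code):

#include <iostream>
#include <set>
#include <string>

// Hypothetical helper: print the C prototype for an overloaded intrinsic the
// first time it is referenced (before the body of the calling function), and
// do nothing on later references so the prototype is never emitted twice.
static std::set<std::string> IntrinsicPrototypesPrinted;

static void printIntrinsicPrototypeOnce(const std::string &Name,
                                        const std::string &CPrototype) {
  if (IntrinsicPrototypesPrinted.insert(Name).second)
    std::cout << CPrototype << ";\n";
}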
llvm-svn: 35954
class supports. In the case of vectors, this means we often get the wrong
type (e.g. we get v4f32 instead of v8i16). Make sure to convert the vector
result to the right type. This fixes CodeGen/X86/2007-04-11-InlineAsmVectorResult.ll
llvm-svn: 35944