forked from OSchip/llvm-project
ce5f841bb5
they are produced by calls (which are known exact) and by cross block copies
which are known to be produced by extends.

This improves:

  define double @test2() {
          %tmp85 = call double asm sideeffect "fld0", "={st(0)}"()
          ret double %tmp85
  }

from:

  _test2:
          subl $20, %esp
          # InlineAsm Start
          fld0
          # InlineAsm End
          fstpl 8(%esp)
          movsd 8(%esp), %xmm0
          movsd %xmm0, (%esp)
          fldl (%esp)
          addl $20, %esp
          #FP_REG_KILL
          ret

to:

  _test2:
          # InlineAsm Start
          fld0
          # InlineAsm End
          #FP_REG_KILL
          ret

by avoiding a f64 <-> f80 trip

llvm-svn: 48108
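One way to reproduce the before/after output above is to save the @test2 IR from the commit message into a file (named test2.ll here purely for illustration) and run LLVM's llc on it. This is a hedged sketch: the exact flag spelling and inline-asm constraint syntax vary across LLVM releases.

  # Emit 32-bit x86 assembly for the inline-asm test case to stdout.
  # Older llc accepts -march=x86; newer releases drop -march in favor of -mtriple.
  llc -march=x86 test2.ll -o -

  # Equivalent on newer LLVM (assumed target triple for an x86-32 environment):
  llc -mtriple=i386-unknown-linux-gnu test2.ll -o -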
Analysis
Archive
AsmParser
Bitcode
CodeGen
Debugger
ExecutionEngine
Linker
Support
System
Target
Transforms
VMCore
Makefile