Commit Graph

7 Commits

Author SHA1 Message Date
Evan Cheng ebe47c872f Avoid using f64 to lower memcpy from a constant string. It's cheaper to use i32 stores of immediates.
llvm-svn: 100751
2010-04-08 07:37:57 +00:00
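
A hedged sketch, in current LLVM IR syntax rather than the 2010 syntax of the original test, of the kind of input this change is about: a short, fixed-length memcpy from a constant string. The function and string names are made up for illustration.

    ; Illustrative reduced test input; names and sizes are not from the real test.
    @.str = private unnamed_addr constant [9 x i8] c"abcdefgh\00"

    define void @copy_from_const(ptr %dst) {
      ; A small copy with a known length and a constant source.  The x86
      ; backend can expand this inline as i32 stores of immediate values
      ; (e.g. the first four bytes become a store of 0x64636261), instead
      ; of bouncing the string data through f64 loads and stores.
      call void @llvm.memcpy.p0.p0.i32(ptr %dst, ptr @.str, i32 9, i1 false)
      ret void
    }

    declare void @llvm.memcpy.p0.p0.i32(ptr writeonly, ptr readonly, i32, i1 immarg)
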
Evan Cheng f997c31598 In 64-bit mode, use i64 to lower memcpy / memset instead of f64.
llvm-svn: 100137
2010-04-01 20:27:45 +00:00
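
In the same spirit, a hedged sketch (made-up names and sizes) of a memset that, on x86-64 after this change, can be expanded with i64 stores of zero rather than f64 stores:

    define void @zero_block(ptr %dst) {
      ; 32 bytes of zeroing with a known length; in 64-bit mode this can be
      ; expanded inline as four i64 stores of 0.
      call void @llvm.memset.p0.i64(ptr %dst, i8 0, i64 32, i1 false)
      ret void
    }

    declare void @llvm.memset.p0.i64(ptr writeonly, i8, i64, i1 immarg)
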
Evan Cheng 1e8ee79957 Add -mcpu to memcpy / memset tests to ensure they behave the same on all hosts / targets.
llvm-svn: 100101
2010-04-01 08:25:26 +00:00
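
The point of pinning -mcpu is that the chosen expansion strategy, and therefore the instruction sequences the tests count, depends on the subtarget llc picks by default, which can vary with the host, as the message notes. An illustrative RUN line, not the literal one from these tests:

    ; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mcpu=core2 | FileCheck %s
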
Evan Cheng 43cd9e3845 Fix sdisel memcpy, memset, memmove lowering:
1. Make it possible to lower with floating point loads and stores.
2. Avoid unaligned loads / stores unless they are fast.
3. Fix a memcpy lowering logic bug related to when a load from a
   constant string can be optimized into a constant.
4. Adjust the x86 memcpy lowering threshold to make it more sane.
5. Fix the x86 target hook so it uses vector and floating point memory
   ops more effectively.
rdar://7774704

llvm-svn: 100090
2010-04-01 06:04:33 +00:00
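
For items 2 and 3 above, a hedged LLVM IR sketch (modern syntax, made-up names) of the two situations the expander has to distinguish: a copy where both sides are known to be well aligned, and one where no alignment is known, which should only use wide unaligned accesses if the subtarget says they are fast.

    @src = private unnamed_addr constant [16 x i8] c"0123456789abcdef"

    define void @copy_aligned(ptr align 16 %dst) {
      ; 16-byte alignment on both sides: a single wide (vector or f64)
      ; load/store pair is a reasonable expansion.
      call void @llvm.memcpy.p0.p0.i64(ptr align 16 %dst, ptr align 16 @src, i64 16, i1 false)
      ret void
    }

    define void @copy_unaligned(ptr %dst) {
      ; No useful alignment: unless unaligned accesses are fast on the
      ; target, the expansion should fall back to smaller, naturally
      ; aligned stores.  Because @src is a constant string, the data can
      ; also be folded into store immediates rather than loaded (item 3).
      call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dst, ptr align 1 @src, i64 16, i1 false)
      ret void
    }

    declare void @llvm.memcpy.p0.p0.i64(ptr writeonly, ptr readonly, i64, i1 immarg)
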
Dan Gohman 40503396da Eliminate more uses of llvm-as and llvm-dis.
llvm-svn: 81290
2009-09-08 23:54:48 +00:00
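
The cleanup itself is mechanical: llc reads textual IR directly, so the extra assembly step through llvm-as can be dropped from the RUN lines. An illustrative before/after, not the literal lines from these tests:

    ; Before: assemble to bitcode first, then hand it to llc.
    ; RUN: llvm-as < %s | llc -mtriple=i686-apple-darwin8.8.0 | grep mov
    ; After: feed the .ll file straight to llc.
    ; RUN: llc < %s -mtriple=i686-apple-darwin8.8.0 | grep mov
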
Nick Lewycky 213e114a2c The Linux ABI emits an extra "movl %esp, %ebp" in the function prologue
and sometimes a "mov %ebp, %esp" in the epilogue.

Force these tests, which rely on counting 'mov' instructions, to use
i686-apple-darwin8.8.0, the triple they were written against.

llvm-svn: 51568
2008-05-26 20:18:56 +00:00
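
The fix is simply to pin the triple in the RUN line so the prologue/epilogue shape, and hence the number of movs, is stable. An illustrative line under that assumption (the count is made up):

    ; RUN: llc < %s -mtriple=i686-apple-darwin8.8.0 | grep mov | count 3
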
Evan Cheng a1100782d5 Add a couple of test cases.
llvm-svn: 51441
2008-05-22 21:19:19 +00:00