Eli Friedman
02e737b08e
Move "atomic" and "volatile" designations on instructions after the opcode
...
of the instruction.
Note that this change affects the existing non-atomic load and store
instructions; the parser now accepts both forms, and the change is noted
in the release notes.
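For illustration, a sketch of the two spellings (assuming a hypothetical i32 pointer %p; the first line is the old form):
volatile load i32* %p                  ; old: qualifier before the opcode
load volatile i32* %p                  ; new: qualifier after the opcode
load atomic i32* %p seq_cst, align 4   ; atomic likewise follows the opcode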
llvm-svn: 137527
2011-08-12 22:50:01 +00:00
Chris Lattner
5756c16cdf
make the asmparser reject function and type redefinitions. 'Merging' hasn't been needed since llvm-gcc 3.4 days.
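A minimal sketch of what now gets rejected (hypothetical names):
define void @f() { ret void }
define void @f() { ret void }   ; second definition: now a parse error rather than a merge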
llvm-svn: 133248
2011-06-17 07:06:44 +00:00
Benjamin Kramer
fda5dc4968
Revert "InstCombine: Turn mul.with.overflow(X, 2) into the cheaper add.with.overflow(X, X)"
It's better to do this in codegen; mul.with.overflow(X, 2) is more canonical because it has only one use of "X".
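For illustration, a sketch of the canonicalization argument (assuming the unsigned variant; %x is a hypothetical value):
%m = call { i32, i1 } @llvm.umul.with.overflow.i32(i32 %x, i32 2)   ; canonical: one use of %x
%a = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %x, i32 %x)  ; reverted form: two uses of %x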
llvm-svn: 131798
2011-05-21 18:31:42 +00:00
Benjamin Kramer
691731eb9c
InstCombine: Turn mul.with.overflow(X, 2) into the cheaper add.with.overflow(X, X)
llvm-svn: 131789
2011-05-21 09:22:06 +00:00
Eli Friedman
49346010f8
More instcombine cleanup aimed towards improving debug line info.
llvm-svn: 131559
2011-05-18 19:57:14 +00:00
Benjamin Kramer
b49b964b98
InstCombine: Turn umul_with_overflow into mul nuw if we can prove that it cannot overflow.
This happens a lot in clang-compiled C++ code because it adds overflow checks to operator new[]:
unsigned *foo(unsigned n) { return new unsigned[n]; }
We can optimize away the overflow check on 64-bit targets because (uint64_t)n*4 cannot overflow.
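Roughly, the check clang emits boils down to IR like this sketch (64-bit case; names are hypothetical):
%size = zext i32 %n to i64
%res = call { i64, i1 } @llvm.umul.with.overflow.i64(i64 %size, i64 4)
; %size < 2^32, so %size * 4 < 2^34 and can never overflow an i64; instcombine can emit:
%mul = mul nuw i64 %size, 4   ; the i1 overflow result is known false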
llvm-svn: 127418
2011-03-10 18:40:14 +00:00
Chris Lattner
1e8c032a6e
X86 supports i8/i16 overflow ops (except i8 multiplies); we should generate them.
Now we compile:
define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind ssp {
entry:
%0 = tail call { i8, i1 } @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
%cmp = extractvalue { i8, i1 } %0, 1
br i1 %cmp, label %if.then, label %if.end
into:
_X: ## @X
## BB#0: ## %entry
subl $12, %esp
movb 16(%esp), %al
addb 20(%esp), %al
jo LBB0_2
Before we were generating:
_X: ## @X
## BB#0: ## %entry
pushl %ebp
movl %esp, %ebp
subl $8, %esp
movb 12(%ebp), %al
testb %al, %al
setge %cl
movb 8(%ebp), %dl
testb %dl, %dl
setge %ah
cmpb %cl, %ah
sete %cl
addb %al, %dl
testb %dl, %dl
setge %al
cmpb %al, %ah
setne %al
andb %cl, %al
testb %al, %al
jne LBB0_2
llvm-svn: 122186
2010-12-19 20:03:11 +00:00
Chris Lattner
33dc3f0cfa
optimize uadd(x, cst) into a comparison when the normal result is dead. This is required for my next patch to not regress the testsuite.
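A sketch of the fold (hypothetical %x; only the overflow bit of the call is used):
%r = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %x, i32 5)
%ov = extractvalue { i32, i1 } %r, 1
; x + 5 wraps iff x > 0xFFFFFFFA, so the dead-result case becomes a plain compare:
%ov2 = icmp ugt i32 %x, -6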
llvm-svn: 122181
2010-12-19 19:35:32 +00:00
Eli Friedman
f99e7e6643
PR7853: fix a silly mistake introduced in r101899, and add a test to make sure it doesn't regress again.
llvm-svn: 110597
2010-08-09 20:49:43 +00:00
Chris Lattner
249da5cb73
implement a simple instcombine xform that has been in the readme forever.
llvm-svn: 94318
2010-01-23 18:49:30 +00:00
Chris Lattner
54f4e39956
optimize comparisons against cttz/ctlz/ctpop, patch by Alastair Lynn!
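One plausible instance of this kind of fold (a sketch, not necessarily the exact cases in the patch):
%pop = call i32 @llvm.ctpop.i32(i32 %x)
%cmp = icmp eq i32 %pop, 0
; ctpop(x) == 0 iff x == 0, so the comparison can fold to:
%cmp2 = icmp eq i32 %x, 0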
llvm-svn: 92745
2010-01-05 18:09:56 +00:00
Chris Lattner
9da1cb243b
optimize cttz and ctlz when we can prove something about the leading/trailing bits. Patch by Alastair Lynn!
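For example (a sketch using the single-operand cttz of that era; %x is hypothetical):
%y = or i32 %x, 1
%tz = call i32 @llvm.cttz.i32(i32 %y)
; bit 0 of %y is known set, so %tz is provably 0 and the call folds to a constant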
llvm-svn: 92706
2010-01-05 07:23:56 +00:00
Chris Lattner
8330daf733
add a few trivial instcombines for llvm.powi.
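Plausible examples of such trivial folds (a sketch; the exact set in the patch may differ):
%a = call double @llvm.powi.f64(double %x, i32 0)   ; powi(x, 0) -> 1.0
%b = call double @llvm.powi.f64(double %x, i32 1)   ; powi(x, 1) -> x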
llvm-svn: 92383
2010-01-01 01:52:15 +00:00
Chris Lattner
1cc4cca193
add testcases for the foo_with_overflow op xforms added recently and fix bugs exposed by the tests. Testcases from Alastair Lynn!
llvm-svn: 90056
2009-11-29 02:57:29 +00:00
Chris Lattner
39c07b2eef
if a 'with overflow' intrinsic just has the normal result used, simplify it to a normal binop. Patch by Alastair Lynn, testcase by me.
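A sketch of the simplification (hypothetical values; the i1 overflow result has no uses):
%r = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
%v = extractvalue { i32, i1 } %r, 0
; with the overflow bit dead, this simplifies to the normal binop:
%v2 = add i32 %a, %b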
llvm-svn: 86524
2009-11-09 07:07:56 +00:00