Commit Graph

291 Commits

Author SHA1 Message Date
Benjamin Kramer 7fa8c430f7 InstCombine: fold (A << C) == (B << C) --> ((A^B) & (~0U >> C)) == 0
ANDing and comparing with zero can be done in a single instruction on
most architectures, so this is a bit cheaper.
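
In C terms the equivalence looks roughly like this (a minimal sketch for
32-bit unsigned values with 0 <= c < 32, not the actual InstCombine code):

  // Shifting left by c discards the top c bits of both sides, so the
  // comparison only depends on the low (32 - c) bits of a and b.
  int before(unsigned a, unsigned b, unsigned c) {
    return (a << c) == (b << c);
  }
  int after(unsigned a, unsigned b, unsigned c) {
    return ((a ^ b) & (~0U >> c)) == 0;
  }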

llvm-svn: 233291
2015-03-26 17:12:06 +00:00
Benjamin Kramer 7857d723f1 [SimplifyLibCalls] Turn memchr(const, C, const) into a bitfield check.
strchr("123!", C) != nullptr is a common pattern to check if C is one
of 1, 2, 3 or !. If the largest element of the string is smaller than
the target's register size we can easily create a bitfield and just
do a simple test for set membership.

int foo(char C) { return strchr("123!", C) != nullptr; } now becomes

	cmpl	$64, %edi ## range check
	sbbb	%al, %al
	movabsq	$0xE000200000001, %rcx
	btq	%rdi, %rcx ## bit test
	sbbb	%cl, %cl
	andb	%al, %cl ## and the two conditions
	andb	$1, %cl
	movzbl	%cl, %eax ## returning an int
	ret

(imho the backend should expand this into a series of branches, but
that's a different story)

The code is currently limited to bit fields that fit in a register, so
usually 64 or 32 bits. Sadly, this misses anything using alphabetic
characters or {}, whose character codes fall outside the low 64 values.
This could be fixed by emitting an i128 bit field, but that can generate
really ugly code, so we have to find a better way. To some degree this
also recreates switch lowering logic, but we can't simply emit a switch
instruction and thus change the CFG from within instcombine.
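
Rough sketch of the idea in C (not the actual SimplifyLibCalls code):
build a 64-bit mask with one bit per matching character, then membership
is the range check plus the single bit test shown above.

  int foo_expanded(unsigned char c) {
    // One bit per character of "123!", plus bit 0 because strchr also
    // finds the terminating NUL; this is the 0xE000200000001 constant
    // in the assembly above.
    const unsigned long long mask = (1ULL << '1') | (1ULL << '2') |
                                    (1ULL << '3') | (1ULL << '!') | 1ULL;
    return c < 64 && ((mask >> c) & 1);
  }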

llvm-svn: 232902
2015-03-21 21:09:33 +00:00
Rafael Espindola 11aaaeebe0 Delete -std-compile-opts.
These days -std-compile-opts was just a silly alias for -O3.

llvm-svn: 219951
2014-10-16 20:00:02 +00:00
Erik Verbruggen 2b98bd2a80 Reassociate x + -0.1234 * y into x - 0.1234 * y
This does not require -ffast-math, and it gives CSE/GVN more options to
eliminate duplicate expressions in, e.g.:

  return ((x + 0.1234 * y) * (x - 0.1234 * y));
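
One way this helps the example above (a hedged sketch): if the second
factor has been rewritten as x + (-0.1234) * y, its multiply uses a
different constant than the first factor's 0.1234 * y and nothing can be
shared; reassociating it back to x - 0.1234 * y lets CSE/GVN compute the
product once. Roughly:

  double before(double x, double y) {
    double t1 = 0.1234 * y;    // one multiply...
    double t2 = -0.1234 * y;   // ...and a second one, different constant
    return (x + t1) * (x + t2);
  }
  double after(double x, double y) {
    double t = 0.1234 * y;     // a single multiply, reused by both factors
    return (x + t) * (x - t);  // exact even without -ffast-math
  }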

Differential Revision: http://reviews.llvm.org/D4904

llvm-svn: 216169
2014-08-21 10:45:30 +00:00
Erik Verbruggen ccc517c47f Remove testcase from README that we previously didn't get; we do get it now.
llvm-svn: 215699
2014-08-15 10:33:03 +00:00
Erik Verbruggen 5e1bac3a38 Revert "InstCombine: merge constants in both operands of icmp."
This reverts commit r204912, and follow-up commit r204948.

This introduced a performance regression, and the fix is not completely
clear yet.

llvm-svn: 205010
2014-03-28 14:50:57 +00:00
Erik Verbruggen 59a1219846 InstCombine: merge constants in both operands of icmp.
Transform:
    icmp X+Cst2, Cst
into:
    icmp X, Cst-Cst2
when Cst-Cst2 does not overflow, and the add has nsw.
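
For example, with Cst2 = 10 and Cst = 30 (a sketch in C; the nsw flag is
what guarantees the addition cannot wrap):

  // If x + 10 cannot overflow, comparing the sum against 30 is the same
  // as comparing x against 30 - 10 = 20.
  int before(int x) { return (x + 10) < 30; }
  int after(int x)  { return x < 20; }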

llvm-svn: 204912
2014-03-27 11:16:05 +00:00
Shuxin Yang 97e07bf211 Remove two popcount patterns which we are already able to recognize.
llvm-svn: 170158
2012-12-13 23:16:19 +00:00
Micah Villmow cdfe20b97f Move TargetData to DataLayout.
llvm-svn: 165402
2012-10-08 16:38:25 +00:00
Benjamin Kramer fd4fe7061c Fabs folding is implemented.
llvm-svn: 162186
2012-08-19 09:51:44 +00:00
Micah Villmow 9eedce1e7c Test revert of test changes.
llvm-svn: 160632
2012-07-23 16:42:45 +00:00
Micah Villmow 780c24f19c Test commit.
llvm-svn: 160631
2012-07-23 16:37:24 +00:00
Benjamin Kramer 53ffe55a66 Add a microoptimization note.
llvm-svn: 159082
2012-06-23 15:19:31 +00:00
Benjamin Kramer df4477c506 Remove README entry obsoleted by register masks.
llvm-svn: 154588
2012-04-12 12:47:29 +00:00
Benjamin Kramer 20b32d2da6 Add another note about a missed compare with nsw arithmetic instcombine.
llvm-svn: 153574
2012-03-28 10:50:18 +00:00
Benjamin Kramer 2735c01906 Add a note about a cute little fabs optimization.
llvm-svn: 153543
2012-03-27 22:42:42 +00:00
Benjamin Kramer f0901459b9 Add two missed instcombines related to compares with nsw arithmetic.
llvm-svn: 153542
2012-03-27 22:03:19 +00:00
Benjamin Kramer 2e63f6eac0 Add two notes for correlated-expression optimizations.
llvm-svn: 139263
2011-09-07 22:49:26 +00:00
Benjamin Kramer 15cd5a3f12 Don't emit a bit test if there is only one case in which the test can yield false. A simple SETNE is sufficient.
llvm-svn: 135126
2011-07-14 01:38:42 +00:00
Benjamin Kramer c970849ea0 InstCombine: Fold A-b == C --> b == A-C if A and C are constants.
The backend already knew this trick.
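
For example, with A = 10 and C = 3 (a minimal C sketch):

  // 10 - b == 3 holds exactly when b == 10 - 3 == 7; with unsigned
  // wrap-around this is true for every possible b, so no preconditions
  // are needed.
  int before(unsigned b) { return 10 - b == 3; }
  int after(unsigned b)  { return b == 7; }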

llvm-svn: 132915
2011-06-13 15:24:24 +00:00
Chris Lattner d71ed9431a clarify this, apparently it is confusing :)
llvm-svn: 131916
2011-05-23 20:17:44 +00:00
Chris Lattner bfe2c24c80 add a note.
llvm-svn: 131863
2011-05-22 18:28:46 +00:00
Chris Lattner bbc40ac474 move PR9408 here.
llvm-svn: 131841
2011-05-22 05:45:06 +00:00
Chris Lattner 1b06c71668 Transform: "icmp eq (trunc (lshr X, cst1)), cst" to "icmp eq (and X, mask), cst"
when X has multiple uses.  This is useful for exposing secondary optimizations,
but the X86 backend isn't ready for this when X has a single use.  For example,
this can disable load folding.
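
In C terms the transform looks roughly like this (a sketch; the mask and
the compared constant are both shifted to cover the extracted field):

  // Extracting bits [15:8] of x and comparing them to 5 is the same as
  // masking those bits in place and comparing against 5 shifted back up.
  int before(unsigned x) { return (unsigned char)(x >> 8) == 5; }
  int after(unsigned x)  { return (x & 0xFF00) == (5u << 8); }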

This is inching towards resolving PR6627.

llvm-svn: 130238
2011-04-26 20:18:20 +00:00
Chris Lattner 6e29892430 add a missed bitfield instcombine.
llvm-svn: 130137
2011-04-25 18:44:26 +00:00
Benjamin Kramer 341c11da3b DAGCombine: fold "(zext x) == C" into "x == (trunc C)" if the trunc is lossless.
On x86 this allows folding a load into the cmp, greatly reducing register pressure.
  movzbl	(%rdi), %eax
  cmpl	$47, %eax
->
  cmpb	$47, (%rdi)

This shaves 8k off gcc.o on i386. I'll leave applying the patch in README.txt to Chris :)
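
The fold is only valid when truncating C loses no bits; a rough C model
of that legality check (a hypothetical helper, not the actual DAGCombiner
code):

  // C can be compared in the narrow type only if its value survives the
  // truncation; otherwise (zext x) == C is simply always false.
  int trunc_is_lossless(unsigned long long c, unsigned narrow_bits) {
    unsigned long long mask =
        narrow_bits >= 64 ? ~0ULL : (1ULL << narrow_bits) - 1;
    return (c & ~mask) == 0;   // no bits set above the narrow width
  }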

llvm-svn: 130005
2011-04-22 18:47:44 +00:00
Chris Lattner 1d313c6f6d add a minor missed dag combine that is blocking mid-level optimization
improvements, that will lead to fixing PR6627.

llvm-svn: 129504
2011-04-14 04:21:42 +00:00
Benjamin Kramer dc0082b087 Add a note.
llvm-svn: 128286
2011-03-25 17:32:40 +00:00
Eli Friedman 822e7bc061 A bit more analysis of a memset-related README entry.
llvm-svn: 128107
2011-03-22 20:49:53 +00:00
Eli Friedman 8e15a661bf This README entry was fixed recently.
llvm-svn: 127982
2011-03-21 01:33:03 +00:00
Chris Lattner 0c6cb46ac1 add a note
llvm-svn: 126719
2011-03-01 00:24:51 +00:00
Benjamin Kramer 26691d9660 Add some DAGCombines for (adde 0, 0, glue), which are useful for optimizing legalized code for large integer arithmetic.
1. Inform users of an ADDE with two 0 operands that it never sets carry
2. Fold other ADDs or ADDCs into the ADDE if possible

It would be neat if we could do the same thing for SETCC+ADD eventually, but we can't do that in target independent code.
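
A typical place the (adde 0, 0, glue) pattern shows up (a hedged example,
assuming a 64-bit target and the __int128 extension, where the addition
is legalized into two 64-bit operations):

  // The low 64 bits become an addc; the high 64 bits are 0 + 0 + carry,
  // i.e. exactly the (adde 0, 0, glue) node the combines above target.
  unsigned __int128 widening_add(unsigned long long a, unsigned long long b) {
    return (unsigned __int128)a + b;
  }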

llvm-svn: 126557
2011-02-26 22:48:07 +00:00
Chris Lattner e9cba7bd34 add a missed loop deletion case.
llvm-svn: 126103
2011-02-21 02:13:39 +00:00
Chris Lattner 659c793a4e add an idiom that loop idiom could theoretically catch.
llvm-svn: 126101
2011-02-21 01:33:38 +00:00
Duncan Sands 491eb276a7 This has been implemented.
llvm-svn: 125738
2011-02-17 08:16:56 +00:00
Chris Lattner 727eebee58 add some notes on compares + binops. Remove redundant entries.
llvm-svn: 125702
2011-02-17 01:43:46 +00:00
Chris Lattner 28bf91f78e Add a few missed xforms from GCC PR14753
llvm-svn: 125681
2011-02-16 19:16:34 +00:00
Eli Friedman c8fb2557b9 Remove outdated README entry.
llvm-svn: 125660
2011-02-16 07:41:19 +00:00
Eli Friedman 0254c4c01b Remove outdated README entry.
llvm-svn: 125659
2011-02-16 07:18:18 +00:00
Eli Friedman 5f75515e5d Update README entry.
llvm-svn: 125658
2011-02-16 07:17:44 +00:00
Anders Carlsson 49d81a3d7e Remove a virtual inheritance case that clang can devirtualize fully now.
llvm-svn: 124989
2011-02-06 20:16:49 +00:00
Benjamin Kramer f4ea1d5f79 SimplifyCFG: Turn switches into sub+icmp+branch if possible.
This makes the job of the later optimization passes easier, allowing the vast
number of icmp transforms to chew on it.

We transform 840 switches in gcc.c, leading to a 16k byte shrink of the resulting
binary on i386-linux.
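
A representative example of the kind of switch this targets (a guess at
the shape of the README testcase, not necessarily the exact one):

  // Cases 1, 2 and 3 form a contiguous range, so the switch collapses
  // into a subtract plus one unsigned compare -- the decl/cmpl/sbbl
  // sequence below.
  int before(unsigned x) {
    switch (x) { case 1: case 2: case 3: return 1; default: return 0; }
  }
  int after(unsigned x) { return (x - 1) < 3u; }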

The testcase from README.txt now compiles into
  decl  %edi
  cmpl  $3, %edi
  sbbl  %eax, %eax
  andl  $1, %eax
  ret

llvm-svn: 124724
2011-02-02 15:56:22 +00:00
Chris Lattner 865fe3b283 add a note, progress unblocked by PR8575 being fixed.
llvm-svn: 124599
2011-01-31 20:23:28 +00:00
Benjamin Kramer 946e1522b6 Teach DAGCombine to fold (sra (trunc (srl x, c1)), c2) -> (trunc (sra x, c1+c2)) when c1 equals the number of bits that are truncated off.
This happens all the time when a smul is promoted to a larger type.
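
Stated in C for a 64-bit value (a sketch assuming the usual
two's-complement, arithmetic-shift behaviour; here c1 = 32 equals the 32
bits dropped by the truncation, so the combined shift is c1 + c2 = 34):

  // Taking the high 32 bits of x and then shifting that 32-bit value
  // right by 2 is the same as shifting all of x right by 34 and
  // truncating.
  int before(long long x) { return (int)((unsigned long long)x >> 32) >> 2; }
  int after(long long x)  { return (int)(x >> 34); }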

On x86-64 we now compile "int test(int x) { return x/10; }" into
  movslq  %edi, %rax
  imulq $1717986919, %rax, %rax
  movq  %rax, %rcx
  shrq  $63, %rcx
  sarq  $34, %rax <- used to be "shrq $32, %rax; sarl $2, %eax"
  addl  %ecx, %eax

This fires 96 times in gcc.c on x86-64.

llvm-svn: 124559
2011-01-30 16:38:43 +00:00
Chris Lattner 9685603260 this isn't a memset; we do convert dest[i] to one though :)
llvm-svn: 124097
2011-01-24 02:32:00 +00:00
Chris Lattner b830ee5250 with recent work, we now optimize this into:
define i32 @foo(i32 %x) nounwind readnone ssp {
entry:
  %tobool = icmp eq i32 %x, 0
  %tmp5 = select i1 %tobool, i32 2, i32 1
  ret i32 %tmp5
}

llvm-svn: 124091
2011-01-24 01:12:18 +00:00
Anders Carlsson 773bc67eff Add a memset loop that LoopIdiomRecognize doesn't recognize.
llvm-svn: 124082
2011-01-23 20:31:00 +00:00
Chris Lattner a56c8279e8 add a note
llvm-svn: 123752
2011-01-18 07:47:48 +00:00
Anders Carlsson 6a5171ba68 Update README.txt to remove the DAE enhancement.
llvm-svn: 123597
2011-01-16 21:26:15 +00:00
Chris Lattner c326ebd118 add some commentary
llvm-svn: 123572
2011-01-16 06:39:44 +00:00