Commit Graph

115 Commits

Author SHA1 Message Date
Peter Collingbourne 10c500ddc0 opt: Rename -default-data-layout flag to -data-layout and make it always override the layout.
There isn't much point in a flag that only works if the data layout is empty.

Differential Revision: https://reviews.llvm.org/D30014

llvm-svn: 295468
2017-02-17 17:36:52 +00:00
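
A hedged sketch of how the renamed flag is exercised (the RUN line and layout strings are illustrative, not taken from an actual test):

```llvm
; RUN: opt < %s -S -data-layout="e-p:32:32"
; the -data-layout value now always overrides this module-level layout,
; rather than applying only when the module has no layout of its own
target datalayout = "e-p:64:64"
```
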
Bryant Wong b5e03b61e2 [InstCombiner] Simplify lib calls to `round{,f}`
Differential Revision: https://reviews.llvm.org/D28110

llvm-svn: 290542
2016-12-26 14:29:29 +00:00
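
A minimal sketch of the kind of fold this enables, assuming the target library provides round/roundf (function name and constant are illustrative):

```llvm
declare double @round(double)

define double @fold_round() {
  ; with a constant argument the libcall can be folded away:
  ; round(2.5) evaluates to 3.0
  %r = call double @round(double 2.500000e+00)
  ret double %r
}
```
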
David Majnemer 522a91181a Don't remove side effecting instructions due to ConstantFoldInstruction
Just because we can constant fold the result of an instruction does not
imply that we can delete the instruction.  It may have side effects.

This fixes PR28655.

llvm-svn: 276389
2016-07-22 04:54:44 +00:00
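
An illustrative sketch of the general hazard, not necessarily the exact PR28655 reproducer: even when the folded value is known, the side-effecting access must stay.

```llvm
@g = constant i32 42

define i32 @keep_the_load() {
  ; the result is known to be 42, but deleting the volatile load
  ; would drop a side effect
  %v = load volatile i32, i32* @g
  ret i32 %v
}
```
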
Simon Pilgrim 0ea8d275cc [X86][SSE] Reimplement SSE fp2si conversion intrinsics instead of using generic IR
D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR instead.

It turns out that the behaviour of these intrinsics is different enough from generic IR that this would cause problems: INF/NaN/out-of-range values are guaranteed to produce 0x80000000, which plays havoc with constant folding, which converts them to either zero or UNDEF. This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).

This patch changes both scalar and packed versions back to using x86-specific builtins.

It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.

A companion clang patch is at D22105

Differential Revision: https://reviews.llvm.org/D22106

llvm-svn: 275981
2016-07-19 15:07:43 +00:00
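
A sketch of the behaviour being preserved (values and function name are illustrative): a NaN or out-of-range input must produce the x86 "integer indefinite" value 0x80000000.

```llvm
declare i32 @llvm.x86.sse2.cvttsd2si(<2 x double>)

define i32 @cvt_nan() {
  ; hardware CVTTSD2SI returns 0x80000000 for a NaN input, which a
  ; generic fptosi-based expansion could not model
  %r = call i32 @llvm.x86.sse2.cvttsd2si(<2 x double> <double 0x7FF8000000000000, double 0.000000e+00>)
  ret i32 %r
}
```
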
Simon Pilgrim 4f1877fb57 [X86][SSE] Improve constant folding tests for CVTSD/CVTSS/CVTTSD/CVTTSS
As discussed on D22106, improve the testing of constant folding for the SSE scalar conversion intrinsics to ensure we correctly handle special/out-of-range cases.

llvm-svn: 274846
2016-07-08 13:28:34 +00:00
Davide Italiano cd7c84bd8b Revert "[SCCP] Partially propagate information when the input is not fully defined."
This reverts commit r269105 as it caused PR27712.

llvm-svn: 269252
2016-05-11 23:06:10 +00:00
Davide Italiano 7860c9bbf4 [SCCP] Partially propagate information when the input is not fully defined.
With this patch:
%r1 = lshr i64 -1, 4294967296 -> undef

Before this patch:
%r1 = lshr i64 -1, 4294967296 -> 0

llvm-svn: 269105
2016-05-10 19:49:47 +00:00
Justin Bogner b7389d6714 IR: Make ConstantDataArray::getFP actually return a ConstantDataArray
The ConstantDataArray::getFP(LLVMContext &, ArrayRef<uint16_t>)
overload has had a typo in it since it was written, where it will
create a Vector instead of an Array. This obviously doesn't work at
all, but it turns out that until r254991 there weren't actually any
callers of this overload. Fix the typo and add some test coverage.

llvm-svn: 255157
2015-12-09 21:21:07 +00:00
Vedant Kumar 3171cabb34 [test] (NFC) Simplify Transforms/ConstProp/calls.ll
Differential Revision: http://reviews.llvm.org/D12421

llvm-svn: 246312
2015-08-28 18:04:20 +00:00
Erik Schnetter 5e93e28d8b Enable constant propagation for more math functions
Constant propagation for single precision math functions (such as
tanf) is already working, but was not enabled. This patch enables
these for many single-precision functions, and adds respective test
cases.

Newly handled functions: acosf asinf atanf atan2f ceilf coshf expf
exp2f fabsf floorf fmodf logf log10f powf sinhf tanf tanhf

llvm-svn: 246194
2015-08-27 19:56:57 +00:00
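
A sketch of one of the newly handled folds, assuming the host math library is available to evaluate the call (function name is illustrative):

```llvm
declare float @tanf(float)

define float @fold_tanf() {
  ; tanf(0.0) evaluates to 0.0, so the call folds to a constant
  %r = call float @tanf(float 0.000000e+00)
  ret float %r
}
```
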
Erik Schnetter ed6eab32b3 Revert 246186; still breaks on some systems
llvm-svn: 246191
2015-08-27 19:34:14 +00:00
Erik Schnetter 05845d31c9 Enable constant propagation for more math functions
Constant propagation for single precision math functions (such as
tanf) is already working, but was not enabled. This patch enables
these for many single-precision functions, and adds respective test
cases.

Newly handled functions: acosf asinf atanf atan2f ceilf coshf expf
exp2f fabsf floorf fmodf logf log10f powf sinhf tanf tanhf

llvm-svn: 246186
2015-08-27 18:56:23 +00:00
Erik Schnetter a23672626d Revert r246158 since it breaks LLVM.Transforms/ConstProp.calls.ll
llvm-svn: 246166
2015-08-27 17:24:01 +00:00
Erik Schnetter 694bf5c9b5 Enable constant propagation for more math functions
Constant propagation for single precision math functions (such as
tanf) is already working, but was not enabled. This patch enables
these for many single-precision functions, and adds respective test
cases.

Newly handled functions: acosf asinf atanf atan2f ceilf coshf expf
exp2f fabsf floorf fmodf logf log10f powf sinhf tanf tanhf

llvm-svn: 246158
2015-08-27 16:36:37 +00:00
Matt Arsenault 95365ca482 Fix assert when inlining a constantexpr addrspacecast
The pointer size of the addrspacecasted pointer might not have matched,
so this would have hit an assert in accumulateConstantOffset.

I think this was here to allow constant folding of a load of an
addrspacecasted constant. Accumulating the offset through the
addrspacecast doesn't make much sense, so something else is necessary
to allow folding the load through this cast.

llvm-svn: 243300
2015-07-27 18:31:03 +00:00
Andrea Di Biagio 2905999c22 [ConstantFolding] Fix wrong folding of intrinsic 'convert.from.fp16'.
Function 'ConstantFoldScalarCall' (in ConstantFolding.cpp) works under the
wrong assumption that a call to 'convert.from.fp16' returns a value of
type 'float'.
However, intrinsic 'convert.from.fp16' can be overloaded; for example, we
can call 'convert.from.fp16.f64' to convert from half to double; etc.

Before this patch, the following example would have triggered an assertion
failure in opt (with -constprop):

```
define double @foo() {
entry:
  %0 = call double @llvm.convert.from.fp16.f64(i16 0)
  ret double %0
}
```

This patch fixes the problem in ConstantFolding.cpp. When folding a call to
convert.from.fp16, we perform a different kind of conversion based on the call
return type.

Added test 'Transform/ConstProp/convert-from-fp16.ll'.

Differential Revision: http://reviews.llvm.org/D9771

llvm-svn: 237377
2015-05-14 18:01:48 +00:00
Pawel Bylica c25918a043 Constfold insertelement to undef when index is out-of-bounds
Summary:
This patch adds constant folding of the insertelement instruction to an undef value when the index operand is constant and is either not less than the vector size or undef.

InstCombine does not support this case, but I'm happy to add it there also if this change is accepted.

Test Plan: Unittests and regression tests for ConstProp pass.

Reviewers: majnemer

Reviewed By: majnemer

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9287

llvm-svn: 235854
2015-04-27 09:30:49 +00:00
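
A sketch of the new fold (the vector and index values are illustrative):

```llvm
define <4 x i32> @oob_insert() {
  ; index 7 is out of range for a 4-element vector, so with this
  ; change the whole instruction folds to undef
  %r = insertelement <4 x i32> <i32 1, i32 2, i32 3, i32 4>, i32 9, i32 7
  ret <4 x i32> %r
}
```
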
David Blaikie f72d05bc7b [opaque pointer type] Add textual IR support for explicit type parameter to gep operator
Similar to gep (r230786) and load (r230794) changes.

A similar migration script can be used to update test cases; it
successfully migrated all of LLVM and Polly, but about 4 test cases
needed manual changes in Clang.

(this script will read the contents of stdin and massage it into stdout
- wrap it in the 'apply.sh' script shown in previous commits + xargs to
apply it over a large set of test cases)

import fileinput
import sys
import re

rep = re.compile(r"(getelementptr(?:\s+inbounds)?\s*\()((<\d*\s+x\s+)?([^@]*?)(|\s*addrspace\(\d+\))\s*\*(?(3)>)\s*)(?=$|%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|zeroinitializer|<|\[\[[a-zA-Z]|\{\{)", re.MULTILINE | re.DOTALL)

def conv(match):
  line = match.group(1)
  line += match.group(4)
  line += ", "
  line += match.group(2)
  return line

line = sys.stdin.read()
off = 0
for match in re.finditer(rep, line):
  sys.stdout.write(line[off:match.start()])
  sys.stdout.write(conv(match))
  off = match.end()
sys.stdout.write(line[off:])

llvm-svn: 232184
2015-03-13 18:20:45 +00:00
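
A sketch of the syntax change for constant-expression geps (the global @g is illustrative):

```llvm
@g = global [4 x i32] zeroinitializer
; previously written as:  getelementptr ([4 x i32]* @g, i64 0, i64 2)
; the pointee type is now spelled explicitly as the first operand
@p = global i32* getelementptr ([4 x i32], [4 x i32]* @g, i64 0, i64 2)
```
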
David Majnemer e2a4b856d8 ConstantFold: Fix big shift constant folding
Constant folding for shift IR instructions ignores all bits above 32 of
the second argument (the shift amount).
Because of that, some undef results are not recognized, and APInt can
raise an assertion failure if the second argument has more than 64 bits.

Patch by Paweł Bylica!

Differential Revision: http://reviews.llvm.org/D7701

llvm-svn: 232176
2015-03-13 16:39:46 +00:00
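
A sketch of the kind of case involved, using a shift amount that needs more than 64 bits to represent (type and values are illustrative):

```llvm
define i65 @shift_by_huge() {
  ; the shift amount (2^64) is at least the bit width, so the result
  ; is undefined; looking only at the low bits of the amount both
  ; misfolds this case and can trip APInt assertions
  %r = shl i65 1, 18446744073709551616
  ret i65 %r
}
```
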
David Blaikie a79ac14fa6 [opaque pointer type] Add textual IR support for explicit type parameter to load instruction
Essentially the same as the GEP change in r230786.

A similar migration script can be used to update test cases, though a few more
test case improvements/changes were required this time around: (r229269-r229278)

import fileinput
import sys
import re

pat = re.compile(r"((?:=|:|^)\s*load (?:atomic )?(?:volatile )?(.*?))(| addrspace\(\d+\) *)\*($| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$)")

for line in sys.stdin:
  sys.stdout.write(re.sub(pat, r"\1, \2\3*\4", line))

Reviewers: rafael, dexonsmith, grosser

Differential Revision: http://reviews.llvm.org/D7649

llvm-svn: 230794
2015-02-27 21:17:42 +00:00
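
The syntax change in brief (a sketch, not an actual test):

```llvm
define i32 @example(i32* %p) {
  ; previously written as:  %v = load i32* %p
  ; the loaded type is now an explicit first operand
  %v = load i32, i32* %p
  ret i32 %v
}
```
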
David Blaikie 79e6c74981 [opaque pointer type] Add textual IR support for explicit type parameter to getelementptr instruction
One of several parallel first steps to remove the target type of pointers,
replacing them with a single opaque pointer type.

This adds an explicit type parameter to the gep instruction so that when the
first parameter becomes an opaque pointer type, the type to gep through is
still available to the instructions.

* This doesn't modify gep operators, only instructions (operators will be
  handled separately)

* Textual IR changes only. Bitcode (including upgrade) and changing the
  in-memory representation will be in separate changes.

* geps of vectors are transformed as:
    getelementptr <4 x float*> %x, ...
  ->getelementptr float, <4 x float*> %x, ...
  Then, once the opaque pointer type is introduced, this will ultimately look
  like:
    getelementptr float, <4 x ptr> %x
  with the unambiguous interpretation that it is a vector of pointers to float.

* address spaces remain on the pointer, not the type:
    getelementptr float addrspace(1)* %x
  ->getelementptr float, float addrspace(1)* %x
  Then, eventually:
    getelementptr float, ptr addrspace(1) %x

Importantly, the massive amount of test case churn has been automated by
the same crappy python code. I had to manually update a few test cases that
wouldn't fit the script's model (r228970,r229196,r229197,r229198). The
python script just massages stdin and writes the result to stdout, I
then wrapped that in a shell script to handle replacing files, then
using the usual find+xargs to migrate all the files.

update.py:
import fileinput
import sys
import re

ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
normrep = re.compile(       r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")

def conv(match, line):
  if not match:
    return line
  line = match.groups()[0]
  if len(match.groups()[5]) == 0:
    line += match.groups()[2]
  line += match.groups()[3]
  line += ", "
  line += match.groups()[1]
  line += "\n"
  return line

for line in sys.stdin:
  if line.find("getelementptr ") == line.find("getelementptr inbounds"):
    if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
      line = conv(re.match(ibrep, line), line)
  elif line.find("getelementptr ") != line.find("getelementptr ("):
    line = conv(re.match(normrep, line), line)
  sys.stdout.write(line)

apply.sh:
for name in "$@"
do
  python3 `dirname "$0"`/update.py < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
  rm -f "$name.tmp"
done

The actual commands:
From llvm/src:
find test/ -name *.ll | xargs ./apply.sh
From llvm/src/tools/clang:
find test/ -name *.mm -o -name *.m -o -name *.cpp -o -name *.c | xargs -I '{}' ../../apply.sh "{}"
From llvm/src/tools/polly:
find test/ -name *.ll | xargs ./apply.sh

After that, check-all (with llvm, clang, clang-tools-extra, lld,
compiler-rt, and polly all checked out).

The extra 'rm' in the apply.sh script is due to a few files in clang's test
suite using interesting unicode stuff that my python script was throwing
exceptions on. None of those files needed to be migrated, so it seemed
sufficient to ignore those cases.

Reviewers: rafael, dexonsmith, grosser

Differential Revision: http://reviews.llvm.org/D7636

llvm-svn: 230786
2015-02-27 19:29:02 +00:00
Rafael Espindola 8c97e19124 Avoid conversion to float when creating ConstantDataArray/ConstantDataVector.
Patch by Raoux, Thomas F!

llvm-svn: 229864
2015-02-19 16:08:20 +00:00
Jiangning Liu 950844fadb Fix a bug around truncating vector in const prop.
In the constant folding stage, "TRUNC" can't handle vector data types.

llvm-svn: 216149
2014-08-21 02:12:35 +00:00
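
A sketch of the case that now folds (values are illustrative):

```llvm
define <2 x i16> @trunc_vec() {
  ; truncating a constant vector now folds, here to <i16 1, i16 2>
  %r = trunc <2 x i32> <i32 1, i32 2> to <2 x i16>
  ret <2 x i16> %r
}
```
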
Matt Arsenault caa9c71f32 Look through addrspacecast in IsConstantOffsetFromGlobal
llvm-svn: 213000
2014-07-14 22:39:26 +00:00
Chandler Carruth a0e5695ad9 Teach the constant folder to look through bitcast constant expressions
much more effectively when trying to constant fold a load of a constant.
Previously, we only handled bitcasts by trying to find a totally generic
byte representation of the constant and use that. Now, we look through
the bitcast to see what constant we might fold the load into, and then
try to form a constant expression cast of the found value that would be
equivalent to loading the value.

You might wonder why on earth this actually matters. Well, it turns out
that the Itanium ABI causes us to create a single array for a vtable
where the first elements are virtual base offsets, followed by the
virtual function pointers. Because the array is homogeneous, the element
type is consistently i8* and we inttoptr the virtual base offsets into
the initial elements.

Then constructors bitcast these pointers to i64 pointers prior to
loading them. Boom, no more constant folding of virtual base offsets.
This is the first fix to LLVM to address the *insane* performance Eric
Niebler discovered with Clang on his range comprehensions[1]. There is
more to come though, this doesn't *really* fix the problem fully.

[1]: http://ericniebler.com/2014/04/27/range-comprehensions/

llvm-svn: 208856
2014-05-15 09:56:28 +00:00
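
A simplified sketch of the vtable pattern described above, assuming 64-bit pointers and using the later explicit load syntax (names and layout are illustrative):

```llvm
; a vtable-like array: a virtual base offset stored as an inttoptr'd
; i8*, followed by function pointers
@vt = constant [2 x i8*] [i8* inttoptr (i64 16 to i8*), i8* null]

define i64 @vbase_offset() {
  ; the constant folder can now look through the bitcast and fold
  ; this load to 16
  %p = bitcast [2 x i8*]* @vt to i64*
  %v = load i64, i64* %p
  ret i64 %v
}
```
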
Matt Arsenault 7a960a8455 Teach ConstantFolding about pointer address spaces
llvm-svn: 188831
2013-08-20 21:20:04 +00:00
Daniel Dunbar 9efbedfd35 [tests] Cleanup initialization of test suffixes.
- Instead of setting the suffixes in a bunch of places, just set one master
   list in the top-level config. We now only modify the suffix list in a few
   suites that have one particular unique suffix (.ml, .mc, .yaml, .td, .py).

 - Aside from removing the need for a bunch of lit.local.cfg files, this enables
   4 tests that were inadvertently being skipped (one in
   Transforms/BranchFolding, a .s file each in DebugInfo/AArch64 and
   CodeGen/PowerPC, and one in CodeGen/SI which is now failing and has been
   XFAILED).

 - This commit also fixes a bunch of config files to use config.root instead of
   older copy-pasted code.

llvm-svn: 188513
2013-08-16 00:37:11 +00:00
Stephen Lin a76289aa1b Catch more CHECK that can be converted to CHECK-LABEL in Transforms for easier debugging. No functionality change.
This conversion was done with the following bash script:

  find test/Transforms -name "*.ll" | \
  while read NAME; do
    echo "$NAME"
    if ! grep -q "^; *RUN: *llc" $NAME; then
      TEMP=`mktemp -t temp`
      cp $NAME $TEMP
      sed -n "s/^define [^@]*@\([A-Za-z0-9_]*\)(.*$/\1/p" < $NAME | \
      while read FUNC; do
        sed -i '' "s/;\(.*\)\([A-Za-z0-9_]*\):\( *\)define\([^@]*\)@$FUNC\([( ]*\)\$/;\1\2-LABEL:\3define\4@$FUNC(/g" $TEMP
      done
      mv $TEMP $NAME
    fi
  done

llvm-svn: 186269
2013-07-14 01:50:49 +00:00
Stephen Lin c1c7a1309c Update Transforms tests to use CHECK-LABEL for easier debugging. No functionality change.
This update was done with the following bash script:

  find test/Transforms -name "*.ll" | \
  while read NAME; do
    echo "$NAME"
    if ! grep -q "^; *RUN: *llc" $NAME; then
      TEMP=`mktemp -t temp`
      cp $NAME $TEMP
      sed -n "s/^define [^@]*@\([A-Za-z0-9_]*\)(.*$/\1/p" < $NAME | \
      while read FUNC; do
        sed -i '' "s/;\(.*\)\([A-Za-z0-9_]*\):\( *\)@$FUNC\([( ]*\)\$/;\1\2-LABEL:\3@$FUNC(/g" $TEMP
      done
      mv $TEMP $NAME
    fi
  done

llvm-svn: 186268
2013-07-14 01:42:54 +00:00
Stephen Lin 6dd347b39f Add newlines at end of test files, no functionality change
llvm-svn: 186263
2013-07-13 22:00:58 +00:00
Owen Anderson bfd2ce96c7 Remove this testcase until I can figure out how to properly conditionalize it.
llvm-svn: 174591
2013-02-07 07:01:54 +00:00
Owen Anderson 589baf98b4 Another attempt at getting the XFAIL line right for this test.
llvm-svn: 174588
2013-02-07 06:26:55 +00:00
Owen Anderson 389f7dc7a2 Fix CMake detection of various cmath functions, and XFAIL the test on platforms that are known to be missing them.
llvm-svn: 174564
2013-02-07 00:54:05 +00:00
Owen Anderson d4ebfd8400 Significantly generalize our ability to constant fold floating point intrinsics, including ones on half types.
llvm-svn: 174555
2013-02-06 22:43:31 +00:00
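
A sketch of such a fold on the half type, using IEEE half hex constants (0xHC000 is -2.0; the function name is illustrative):

```llvm
declare half @llvm.fabs.f16(half)

define half @fold_half_fabs() {
  ; fabs(-2.0) folds to 2.0 (0xH4000)
  %r = call half @llvm.fabs.f16(half 0xHC000)
  ret half %r
}
```
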
Benjamin Kramer 435eba09b7 ConstantFolding: Add a missing folding that leads to a miscompile.
We use constant folding to see if an intrinsic evaluates to the same value as a
constant that we know. If we don't take the undefinedness into account we get a
value that doesn't match the actual implementation, and miscompiled code.

This was uncovered by Chandler's simplifycfg changes.

llvm-svn: 173356
2013-01-24 16:28:28 +00:00
NAKAMURA Takumi 43ab4ef9ba llvm/ConstantFolding.cpp: Make ReadDataFromGlobal() and FoldReinterpretLoadFromConstPtr() Big-endian-aware.
llvm-svn: 167595
2012-11-08 20:34:25 +00:00
Chandler Carruth ff123d5c63 Fix the remaining TCL-style quotes found in the testsuite. This is
another mechanical change accomplished through the power of terrible Perl
scripts.

I have manually switched some "s to 's to make escaping simpler.

While I started this to fix tests that aren't run in all configurations,
the massive number of tests is due to a really frustrating fragility of
our testing infrastructure: things like 'grep -v', 'not grep', and
'expected failures' can mask broken tests all too easily.

Essentially, I'm deeply disturbed that I can change the testsuite so
radically without causing any change in results for most platforms. =/

llvm-svn: 159547
2012-07-02 19:09:46 +00:00
Chandler Carruth a5a29f970e Convert all tests using TCL-style quoting to use shell-style quoting.
This was done through the aid of a terrible Perl creation. I will not
paste any of the horrors here. Suffice it to say, it required multiple
staged rounds of replacements, state carried between, and a few
nested-construct-parsing hacks that I'm not proud of. It happens, by
luck, to be able to deal with all the TCL-quoting patterns in evidence
in the LLVM test suite.

If anyone is maintaining large out-of-tree test trees, feel free to poke
me and I'll send you the steps I used to convert things, as well as
answer any painful questions etc. IRC works best for this type of thing
I find.

Once converted, switch the LLVM lit config to use ShTests the same as
Clang. In addition to being able to delete large amounts of Python code
from 'lit', this will also simplify the entire test suite and some of
lit's architecture.

Finally, the test suite runs 33% faster on Linux now. ;]
For my 16-hardware-thread (2x 4-core xeon e5520): 36s -> 24s

llvm-svn: 159525
2012-07-02 12:47:22 +00:00
Eli Bendersky 924f9a671d Replace all instances of dg.exp file with lit.local.cfg, since all tests are run with LIT now and not Dejagnu. dg.exp is no longer needed.
Patch reviewed by Daniel Dunbar. It will be followed by additional cleanup patches.

llvm-svn: 150664
2012-02-16 06:28:33 +00:00
Rafael Espindola bb893fea6b Add r149110 back with a fix for when the vector and the int have the same
width.

llvm-svn: 149151
2012-01-27 23:33:07 +00:00
Rafael Espindola a4062624d1 Revert r149110 and add a testcase that was crashing since that revision.
Unfortunately I also had to disable constant-pool-sharing.ll; the code it tests has been
updated to use the IL logic.

llvm-svn: 149148
2012-01-27 22:42:48 +00:00
Chris Lattner 111d6ee655 enhance constant folding to be able to constant fold bitcast of
ConstantVector's to integer type.

llvm-svn: 149110
2012-01-27 01:44:03 +00:00
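
A sketch of the fold, assuming a little-endian target (values are illustrative):

```llvm
define i64 @fold_vec_bitcast() {
  ; on a little-endian target this folds to 0x0000000200000001
  %r = bitcast <2 x i32> <i32 1, i32 2> to i64
  ret i64 %r
}
```
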
Chandler Carruth 6b0e34c445 Manually upgrade the test suite to specify the flag to cttz and ctlz.
I followed three heuristics for deciding whether to set 'true' or
'false':

- Everything target independent got 'true' as that is the expected
  common output of the GCC builtins.
- If the target arch only has one way of implementing this operation,
  set the flag in the way that exercises the most of codegen. For most
  architectures this is also the likely path from a GCC builtin, with
  'true' being set. It will (eventually) require lowering away that
  difference, and then lowering to the architecture's operation.
- Otherwise, set the flag differently depending on which target
  operation should be tested.

Let me know if anyone has any issue with this pattern or would like
specific tests of another form. This should allow the x86 codegen to
just iteratively improve as I teach the backend how to differentiate
between the two forms, and everything else should remain exactly the
same.

llvm-svn: 146370
2011-12-12 11:59:10 +00:00
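
The flag in question is the second operand of the intrinsics, as in this sketch:

```llvm
declare i32 @llvm.ctlz.i32(i32, i1)

define i32 @count_leading_zeros(i32 %x) {
  ; the i1 operand states whether a zero input yields an undefined
  ; result (true) or the bit width (false)
  %r = call i32 @llvm.ctlz.i32(i32 %x, i1 true)
  ret i32 %r
}
```
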
Chad Rosier 0155a63513 Add support for constant folding the pow intrinsic.
rdar://10514247

llvm-svn: 145730
2011-12-03 00:00:03 +00:00
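
A sketch of the fold (values and function name are illustrative):

```llvm
declare double @llvm.pow.f64(double, double)

define double @fold_pow() {
  ; pow(2.0, 10.0) folds to 1024.0
  %r = call double @llvm.pow.f64(double 2.000000e+00, double 1.000000e+01)
  ret double %r
}
```
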
Chad Rosier 3367123b12 Prevent library calls from being folded if -fno-builtin has been specified.
rdar://10500969

llvm-svn: 145639
2011-12-01 22:14:50 +00:00
Richard Smith 4f9a8081c3 Correctly byte-swap APInts with bit-widths greater than 64.
llvm-svn: 145111
2011-11-23 21:33:37 +00:00
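
This matters to ConstProp when folding byte-swaps of wide integers; a sketch (the i128 case is illustrative, not necessarily the original test):

```llvm
declare i128 @llvm.bswap.i128(i128)

define i128 @fold_bswap() {
  ; byte-swapping i128 1 moves the set byte to the top:
  ; the result is 1 shifted left by 120 bits
  %r = call i128 @llvm.bswap.i128(i128 1)
  ret i128 %r
}
```
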
Chris Lattner b1ed91f397 Land the long talked about "type system rewrite" patch. This
patch brings numerous advantages to LLVM.  One way to look at it
is through diffstat:
 109 files changed, 3005 insertions(+), 5906 deletions(-)

Removing almost 3K lines of code is a good thing.  Other advantages
include:

1. Value::getType() is a simple load that can be CSE'd, not a mutating
   union-find operation.
2. Types are uniqued and never move once created, defining away PATypeHolder.
3. Structs can be "named" now, and their name is part of the identity that
   uniques them.  This means that the compiler doesn't merge them structurally
   which makes the IR much less confusing.
4. Now that there is no way to get a cycle in a type graph without a named
   struct type, "upreferences" go away.
5. Type refinement is completely gone, which should make LTO much MUCH faster
   in some common cases with C++ code.
6. Types are now generally immutable, so we can use "Type *" instead of
   "const Type *" everywhere.

Downsides of this patch are that it removes some functions from the C API,
so people using those will have to upgrade to the (not yet added) new API.
"LLVM 3.0" is the right time to do this.

There are still some cleanups pending after this; this patch is large enough
as-is.

llvm-svn: 134829
2011-07-09 17:41:24 +00:00
Chris Lattner 713d52364f implement PR9315, constant folding exp2 in terms of pow (since hosts without
C99 runtimes don't have exp2).

llvm-svn: 131872
2011-05-22 22:22:35 +00:00
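
A sketch of the fold, assuming the host provides pow (names and values are illustrative):

```llvm
declare double @exp2(double)

define double @fold_exp2() {
  ; evaluated as pow(2.0, 3.0), so this folds to 8.0 even if the
  ; host C runtime has no exp2
  %r = call double @exp2(double 3.000000e+00)
  ret double %r
}
```
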
Chris Lattner 0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
Frits van Bommel 0bb2ad2cf7 Constant folding support for calls to umul.with.overflow(), basically identical to the smul.with.overflow() code.
llvm-svn: 128379
2011-03-27 14:26:13 +00:00
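
A sketch of the fold (values are illustrative):

```llvm
declare { i32, i1 } @llvm.umul.with.overflow.i32(i32, i32)

define { i32, i1 } @fold_umul_ov() {
  ; 65536 * 65536 wraps the 32-bit result to 0 and sets the overflow
  ; bit, so this folds to { i32 0, i1 true }
  %r = call { i32, i1 } @llvm.umul.with.overflow.i32(i32 65536, i32 65536)
  ret { i32, i1 } %r
}
```
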