Commit Graph

16 Commits

Author SHA1 Message Date
Peter Collingbourne 10c500ddc0 opt: Rename -default-data-layout flag to -data-layout and make it always override the layout.
There isn't much point in a flag that only works if the data layout is empty.

Differential Revision: https://reviews.llvm.org/D30014

llvm-svn: 295468
2017-02-17 17:36:52 +00:00
Matt Arsenault 95365ca482 Fix assert when inlining a constantexpr addrspacecast
The pointer sizes on the two sides of the addrspacecast might not have
matched, so this would have hit an assert in accumulateConstantOffset.

I think this was here to allow constant folding of a load of an
addrspacecasted constant. Accumulating the offset through the
addrspacecast doesn't make much sense, so something else is necessary
to allow folding the load through this cast.

llvm-svn: 243300
2015-07-27 18:31:03 +00:00
David Blaikie f72d05bc7b [opaque pointer type] Add textual IR support for explicit type parameter to gep operator
Similar to gep (r230786) and load (r230794) changes.

A similar migration script can be used to update test cases; it
successfully migrated all of LLVM and Polly, but about 4 test cases in
Clang needed manual changes.

(This script reads the contents of stdin and massages it into stdout;
wrap it in the 'apply.sh' script shown in previous commits and use xargs
to apply it over a large set of test cases.)

import sys
import re

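# Matches the pointer-typed first operand of an old-style getelementptr
# constant expression (including vector-of-pointers forms), capturing the
# pointee type so conv() can splice it in as an explicit first argument.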
rep = re.compile(r"(getelementptr(?:\s+inbounds)?\s*\()((<\d*\s+x\s+)?([^@]*?)(|\s*addrspace\(\d+\))\s*\*(?(3)>)\s*)(?=$|%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|zeroinitializer|<|\[\[[a-zA-Z]|\{\{)", re.MULTILINE | re.DOTALL)

def conv(match):
  line = match.group(1)
  line += match.group(4)
  line += ", "
  line += match.group(2)
  return line

line = sys.stdin.read()
off = 0
for match in re.finditer(rep, line):
  sys.stdout.write(line[off:match.start()])
  sys.stdout.write(conv(match))
  off = match.end()
sys.stdout.write(line[off:])
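
As a quick demonstration (this example is not part of the original commit),
reusing the rep and conv definitions above on one hypothetical line shows
the rewrite the script performs:

# Demo only: splice the explicit pointee type into a sample GEP expression.
# Old form:  getelementptr inbounds (i32* @x, i64 0)
# New form:  getelementptr inbounds (i32, i32* @x, i64 0)
print(re.sub(rep, conv, "getelementptr inbounds (i32* @x, i64 0)"))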

llvm-svn: 232184
2015-03-13 18:20:45 +00:00
David Blaikie a79ac14fa6 [opaque pointer type] Add textual IR support for explicit type parameter to load instruction
Essentially the same as the GEP change in r230786.

A similar migration script can be used to update test cases, though a few more
test case improvements/changes were required this time around (r229269-r229278):

import sys
import re

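# Matches a load (optionally atomic/volatile) and the type of its pointer
# operand; the substitution below repeats the pointee type as the load's
# new explicit first operand.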
pat = re.compile(r"((?:=|:|^)\s*load (?:atomic )?(?:volatile )?(.*?))(| addrspace\(\d+\) *)\*($| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$)")

for line in sys.stdin:
  sys.stdout.write(re.sub(pat, r"\1, \2\3*\4", line))
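
As a quick demonstration (not part of the original commit), applying pat to
one hypothetical line shows the rewrite:

# Demo only: splice the explicit result type into a sample load.
# Old form:  %v = load i32* %p
# New form:  %v = load i32, i32* %p
print(re.sub(pat, r"\1, \2\3*\4", "%v = load i32* %p"))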

Reviewers: rafael, dexonsmith, grosser

Differential Revision: http://reviews.llvm.org/D7649

llvm-svn: 230794
2015-02-27 21:17:42 +00:00
Matt Arsenault caa9c71f32 Look through addrspacecast in IsConstantOffsetFromGlobal
llvm-svn: 213000
2014-07-14 22:39:26 +00:00
Chandler Carruth a0e5695ad9 Teach the constant folder to look through bitcast constant expressions
much more effectively when trying to constant fold a load of a constant.
Previously, we only handled bitcasts by trying to find a totally generic
byte representation of the constant and using that. Now, we look through
the bitcast to see what constant we might fold the load into, and then
try to form a constant expression cast of the found value that would be
equivalent to loading the value.

You might wonder why on earth this actually matters. Well, turns out
that the Itanium ABI causes us to create a single array for a vtable
where the first elements are virtual base offsets, followed by the
virtual function pointers. Because the array is homogeneous, the element
type is consistently i8* and we inttoptr the virtual base offsets into
the initial elements.

Then constructors bitcast these pointers to i64 pointers prior to
loading them. Boom, no more constant folding of virtual base offsets.
This is the first fix to LLVM to address the *insane* performance problem Eric
Niebler discovered with Clang on his range comprehensions[1]. There is
more to come, though; this doesn't *really* fix the problem fully.

[1]: http://ericniebler.com/2014/04/27/range-comprehensions/

llvm-svn: 208856
2014-05-15 09:56:28 +00:00
Matt Arsenault 7a960a8455 Teach ConstantFolding about pointer address spaces
llvm-svn: 188831
2013-08-20 21:20:04 +00:00
Stephen Lin c1c7a1309c Update Transforms tests to use CHECK-LABEL for easier debugging. No functionality change.
This update was done with the following bash script:

  find test/Transforms -name "*.ll" | \
  while read NAME; do
    echo "$NAME"
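    # Leave tests that run llc alone; their CHECK lines match generated
    # assembly rather than IR.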
    if ! grep -q "^; *RUN: *llc" $NAME; then
      TEMP=`mktemp -t temp`
      cp $NAME $TEMP
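      # For each function defined in the test, rewrite lines of the form
      # ';<PREFIX>: @func(' to ';<PREFIX>-LABEL: @func('.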
      sed -n "s/^define [^@]*@\([A-Za-z0-9_]*\)(.*$/\1/p" < $NAME | \
      while read FUNC; do
        sed -i '' "s/;\(.*\)\([A-Za-z0-9_]*\):\( *\)@$FUNC\([( ]*\)\$/;\1\2-LABEL:\3@$FUNC(/g" $TEMP
      done
      mv $TEMP $NAME
    fi
  done

llvm-svn: 186268
2013-07-14 01:42:54 +00:00
NAKAMURA Takumi 43ab4ef9ba llvm/ConstantFolding.cpp: Make ReadDataFromGlobal() and FoldReinterpretLoadFromConstPtr() Big-endian-aware.
llvm-svn: 167595
2012-11-08 20:34:25 +00:00
Anders Carlsson d21b06a0db When loading from a constant, fold inttoptr if the integer type and the resulting pointer type both have the same size.
llvm-svn: 124987
2011-02-06 20:11:56 +00:00
Chris Lattner a69f89c17a fix PR5978 by peeling the loop so that we avoid shifting the
result int by 8 for the first byte.  While normally harmless,
if the result is smaller than a byte, this shift is invalid.
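
A sketch of the shape of the fix (illustrative Python, not the actual C++ in
FoldReinterpretLoadFromConstPtr): seed the result with the first byte and
shift only for subsequent bytes, so a result narrower than 8 bits is never
shifted by 8:

def assemble_bytes(raw_bytes):
    # Peeled first iteration: no shift, so a sub-byte-wide result is legal.
    result = raw_bytes[0]
    for b in raw_bytes[1:]:
        # The result is at least a byte wide by now; shifting by 8 is fine.
        result = (result << 8) | b
    # (Byte order here is illustrative; the real code follows the target's
    # endianness.)
    return result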

llvm-svn: 93018
2010-01-08 19:02:23 +00:00
Chris Lattner 1ddfd9f96c Fix PR5551 by not ignoring the top level constantexpr when
folding a load from a constant.

llvm-svn: 90545
2009-12-04 06:29:29 +00:00
Chris Lattner 9e2d5b3b8e fix PR5287, a serious regression from my previous patches. Thanks to
Duncan for the nice tiny testcase.

llvm-svn: 84992
2009-10-24 05:22:15 +00:00
Chris Lattner ccf1e84779 teach libanalysis to simplify vector loads with bitcast sources. This
implements something out of Target/README.txt, producing:

_foo:                                                       ## @foo
	movl	4(%esp), %eax
	movapd	LCPI1_0, %xmm0
	movapd	%xmm0, (%eax)
	ret	$4

instead of:

_foo:                                                       ## @foo
	movl	4(%esp), %eax
	movapd	_b, %xmm0
	mulpd	LCPI1_0, %xmm0
	addpd	_a, %xmm0
	movapd	%xmm0, (%eax)
	ret	$4

llvm-svn: 84942
2009-10-23 06:57:37 +00:00
Chris Lattner 59f94c01dd enhance FoldReinterpretLoadFromConstPtr to handle loads of up to 32
bytes (i256).

llvm-svn: 84941
2009-10-23 06:50:36 +00:00
Chris Lattner ed00b80bf8 teach libanalysis to fold int and fp loads from almost arbitrary
non-type-safe constant initializers.  This sort of thing happens
quite a bit for 4-byte loads out of string constants, unions, 
bitfields, and an interesting endianness check from sqlite, which
is something like this:

const int sqlite3one = 1;
# define SQLITE_BIGENDIAN    (*(char *)(&sqlite3one)==0)
# define SQLITE_LITTLEENDIAN (*(char *)(&sqlite3one)==1)
# define SQLITE_UTF16NATIVE (SQLITE_BIGENDIAN?SQLITE_UTF16BE:SQLITE_UTF16LE)

all of these macros now constant fold away.
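
One way to picture the fold (illustrative Python, not the compiler code): the
folder reads the initializer's bytes in target byte order and reinterprets the
loaded prefix, which is exactly what the macro's char load does at run time:

import struct

# Bytes of 'const int sqlite3one = 1' on a little-endian target ("<i";
# use ">i" for a big-endian target).
sqlite3one_bytes = struct.pack("<i", 1)
SQLITE_BIGENDIAN = sqlite3one_bytes[0] == 0     # folds to False here
SQLITE_LITTLEENDIAN = sqlite3one_bytes[0] == 1  # folds to True here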

This implements PR3152 and is based on a patch started by Eli, but heavily
modified and extended.

llvm-svn: 84936
2009-10-23 06:23:49 +00:00