Commit Graph

827 Commits

Duncan Sands 44b8721de8 Executive summary: getTypeSize -> getTypeStoreSize / getABITypeSize.
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment.  This gives a primitive type for
which getTypeSize differed from getABITypeSize.  For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).

This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition).  Instead there is:

(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type.  For a primitive type, this is the minimum number
of bits.  For an i36 this is 36 bits.  For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.

(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it).  For an
i36 this is 40 bits, for an x86 long double it is 80 bits.  This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes).  There doesn't seem to be anything
corresponding to this in gcc.

(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment.  For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS.  This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes).  This is
TYPE_SIZE in gcc.
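
To make the three notions concrete, here is a minimal sketch (not part of
the patch; the helper name printSizes is made up, and it assumes the
TargetData header of that era) that queries all three sizes:

#include "llvm/Target/TargetData.h"
#include <cstdio>
using namespace llvm;

// Prints the three sizes for a type.  With the values from the text above,
// an i36 prints 36 / 40 / 64 and an x86 long double prints 80 / 80 / 96
// (or 128, depending on the target's alignment rules).
static void printSizes(const TargetData &TD, const Type *Ty) {
  printf("bits needed to hold a value: %llu\n",
         (unsigned long long)TD.getTypeSizeInBits(Ty));
  printf("bits touched by a store:     %llu\n",
         (unsigned long long)TD.getTypeStoreSizeInBits(Ty));
  printf("bits of spacing in an array: %llu\n",
         (unsigned long long)TD.getABITypeSizeInBits(Ty));
}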

Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize.  This means that the size of an array
is the length times the getABITypeSize.  It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize.  Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case.  So alloca's and mallocs should use getABITypeSize.  Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.
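
As a concrete illustration (again not part of the patch; the helper name is
invented), every allocation-size computation described above reduces to a
multiplication by the ABI size:

#include "llvm/Target/TargetData.h"
#include <stdint.h>
using namespace llvm;

// Bytes needed for NumElts consecutive elements of type EltTy, whether they
// come from an array type, an alloca or a malloc.  For i36 this is
// NumElts * 8 bytes, even though a single value only stores 5 bytes.
static uint64_t allocationBytes(const TargetData &TD, const Type *EltTy,
                                uint64_t NumElts) {
  return NumElts * TD.getABITypeSize(EltTy);
}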

Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.

In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases).  I will get around to auditing these too at some point,
but I could do with some help.

Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize.  I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers.  If someone wants to pack these types more
tightly they can always use a packed struct.

llvm-svn: 43620
2007-11-01 20:53:16 +00:00
Chris Lattner 74709473ed Fix InstCombine/2007-10-31-RangeCrash.ll
llvm-svn: 43596
2007-11-01 02:18:41 +00:00
Chris Lattner 55b8302dfe simplify some code by using the new isNaN predicate
llvm-svn: 43305
2007-10-24 18:54:45 +00:00
Chris Lattner c62877e9da Implement a couple of foldings for ordered and unordered comparisons,
implementing cases related to PR1738.

llvm-svn: 43289
2007-10-24 05:38:08 +00:00
Devang Patel df49cf52e2 Try again.
Instead of loading a small global string from memory, use an
integer constant.
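
The case being targeted looks roughly like this (illustrative only; the
buffer and string names are made up):

#include <string.h>

static const char c_str[4] = "abc";

void init(char *buf) {
  // A small memcpy from a constant string global...
  memcpy(buf, c_str, 4);
  // ...no longer loads the bytes back out of the global: the four bytes of
  // "abc\0" are materialized as a single 32-bit integer constant and stored
  // directly.
}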

llvm-svn: 43148
2007-10-18 19:52:32 +00:00
Evan Cheng cdcc1d0444 Reverting r43070 for now. It's causing llc test failures.
llvm-svn: 43103
2007-10-17 23:51:13 +00:00
Devang Patel 91ff13edcc Apply "Instead of loading small c string constant, use integer constant directly" transformation while processing load instruction.
llvm-svn: 43070
2007-10-17 07:24:40 +00:00
Devang Patel 8d818f5e80 Use immediate stores.
llvm-svn: 43055
2007-10-16 23:44:18 +00:00
Devang Patel bff4aea328 Achieve same result but use fewer lines of code.
llvm-svn: 42985
2007-10-15 15:31:35 +00:00
Devang Patel 371e6ca690 Dest type is always i8 *. This allows some simplification.
Do not filter memmove.

llvm-svn: 42930
2007-10-12 20:10:21 +00:00
Chris Lattner ad618f66e6 Fix a bug in my patch last night that broke InstCombine/2007-10-12-Crash.ll
llvm-svn: 42920
2007-10-12 18:05:47 +00:00
Gabor Greif 5d8f7e0cc7 eliminate warning
llvm-svn: 42892
2007-10-12 07:44:54 +00:00
Chris Lattner d8675e4915 Fix some 80 column violations.
Fix DecomposeSimpleLinearExpr to handle simple constants better.
Don't nuke gep(bitcast(allocation)) if the bitcast(allocation) will
fold the allocation.  This fixes PR1728 and Instcombine/malloc3.ll

llvm-svn: 42891
2007-10-12 05:30:59 +00:00
Devang Patel 899cc56612 Lower memcpy if it makes sense.
llvm-svn: 42864
2007-10-11 17:21:57 +00:00
Dale Johannesen 9d559cfff5 Tone down an overzealous optimization.
llvm-svn: 42582
2007-10-03 17:45:27 +00:00
Duncan Sands d31649bc59 Improve comment.
llvm-svn: 42132
2007-09-19 10:25:38 +00:00
Duncan Sands 56df7dec2b A global variable with external weak linkage can be null, while
an alias could alias such a global variable.

llvm-svn: 42130
2007-09-19 10:10:31 +00:00
Dan Gohman 2ac2652779 Instcombine x-((x/y)*y) into a remainder operator.
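
For truncating integer division the identity is easy to verify by hand (a
sketch of the pattern, not the patch itself):

// x - (x / y) * y is exactly the part of x that the division discards,
// which is the definition of the remainder.  For example, with x = 7 and
// y = 3: 7 / 3 = 2, 2 * 3 = 6, 7 - 6 = 1 = 7 % 3.
static int remainderOf(int x, int y) {
  return x - (x / y) * y;  // instcombine now folds this pattern to x % y
}
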
llvm-svn: 42035
2007-09-17 17:31:57 +00:00
Duncan Sands 6d5da71288 Factor the trampoline transformation into a subroutine.
llvm-svn: 42021
2007-09-17 10:26:40 +00:00
Dale Johannesen 98d3a08d8f Remove the assumption that FPs are either float or
double from some of the many places in the optimizers
where it appears, and do something reasonable with x86
long double.
Make APInt::dump() public, remove newline, use it to
dump ConstantSDNode's.
Allow APFloats in FoldingSet.
Expand X86 backend handling of long doubles (conversions
to/from int, mostly).

llvm-svn: 41967
2007-09-14 22:26:36 +00:00
Chris Lattner d9111b88d1 silence a bogus gcc warning.
llvm-svn: 41949
2007-09-14 03:07:24 +00:00
Duncan Sands 9204663bcb Turn calls to trampolines into calls to the underlying
nested function.

llvm-svn: 41844
2007-09-11 14:35:41 +00:00
Chris Lattner e804567cd8 remove some dead code, this is handled by constant folding.
llvm-svn: 41819
2007-09-10 23:46:29 +00:00
Chris Lattner 85a51e0060 Don't zap back to back volatile load/stores
llvm-svn: 41759
2007-09-07 05:33:03 +00:00
Dale Johannesen bed9dc423c Next round of APFloat changes.
Use APFloat in UpgradeParser and AsmParser.
Change all references to ConstantFP to use the
APFloat interface rather than double.  Remove
the ConstantFP double interfaces.
Use APFloat functions for constant folding arithmetic
and comparisons.
(There are still way too many places APFloat is
just a wrapper around host float/double, but we're
getting there.)

llvm-svn: 41747
2007-09-06 18:13:44 +00:00
Nick Lewycky 0c5c47944a Use isTrueWhenEqual. Thanks Chris!
llvm-svn: 41741
2007-09-06 02:40:25 +00:00
Nick Lewycky b0b066eaaa When the two operands of an icmp are equal, there are five possible predicates
that would make the icmp true. Fixes PR1637.
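
Spelled out (a sketch of the idea, not the actual implementation; it assumes
the ICmpInst predicate names), the five predicates are eq, uge, ule, sge and
sle:

#include "llvm/Instructions.h"
using namespace llvm;

// An icmp whose operands are equal is true for exactly these five
// predicates; ne, ugt, ult, sgt and slt are false when the operands are
// equal.
static bool trueWhenOperandsEqual(ICmpInst::Predicate P) {
  switch (P) {
  case ICmpInst::ICMP_EQ:
  case ICmpInst::ICMP_UGE:
  case ICmpInst::ICMP_ULE:
  case ICmpInst::ICMP_SGE:
  case ICmpInst::ICMP_SLE:
    return true;
  default:
    return false;
  }
}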

llvm-svn: 41740
2007-09-06 01:10:22 +00:00
Chuck Rose III 2320323647 Forgot to obey 80 column rule. Fixing that.
llvm-svn: 41725
2007-09-05 20:36:41 +00:00
Chuck Rose III e58572233d Added default parameters to GetElementPtrInst constructor call. Visual Studio 2k5 was getting confused and was unable to compile it. Suspected compiler error.
llvm-svn: 41721
2007-09-05 16:54:38 +00:00
David Greene c656cbb8c2 Update GEP constructors to use an iterator interface to fix
GLIBCXX_DEBUG issues.

llvm-svn: 41697
2007-09-04 15:46:09 +00:00
Chris Lattner 0e258b8518 Cut off crazy computation. This helps PR1622 slightly.
llvm-svn: 41522
2007-08-28 04:23:55 +00:00
David Greene 703623d571 Update InvokeInst to work like CallInst
llvm-svn: 41506
2007-08-27 19:04:21 +00:00
Chris Lattner 99c8ee2977 Transform a load from an undef/zero global into an undef/global even if we
have complex pointer manipulation going on.  This allows us to compile
stuff like this:

__m128i foo(__m128i x) {
  static const unsigned int c_0[4] = { 0, 0, 0, 0 };
  __m128i v_Zero = _mm_loadu_si128((__m128i*)c_0);
  x = _mm_unpacklo_epi8(x, v_Zero);
  return x;
}

into:

_foo:
        xorps   %xmm1, %xmm1
        punpcklbw       %xmm1, %xmm0
        ret

llvm-svn: 41022
2007-08-11 18:48:48 +00:00
Chris Lattner a8e4b4bc7b when we see an unaligned load from an insufficiently aligned global or
alloca, increase the alignment of the load, turning it into an aligned load.

This allows us to compile:

#include <xmmintrin.h>
__m128i foo(__m128i x) {
  static const unsigned int c_0[4] = { 0, 0, 0, 0 };
  __m128i v_Zero = _mm_loadu_si128((__m128i*)c_0);
  x = _mm_unpacklo_epi8(x, v_Zero);
  return x;
}

into:

_foo:
	punpcklbw	_c_0.5944, %xmm0
	ret
	.data
	.lcomm	_c_0.5944,16,4		# c_0.5944

instead of:

_foo:
	movdqu	_c_0.5944, %xmm1
	punpcklbw	%xmm1, %xmm0
	ret
	.data
	.lcomm	_c_0.5944,16,2		# c_0.5944

llvm-svn: 40971
2007-08-09 19:05:49 +00:00
Nick Lewycky 8052019a20 It's safe to fold not of fcmp.
llvm-svn: 40870
2007-08-06 20:04:16 +00:00
Chris Lattner f0da7975ea at the end of instcombine, explicitly clear WorklistMap.
This shrinks it down to something small.  On the testcase
from PR1432, this speeds up instcombine from 0.7959s to 0.5000s
(a 59% speedup).

llvm-svn: 40840
2007-08-05 08:47:58 +00:00
Chandler Carruth 7132e00de7 This is the patch to provide clean intrinsic function overloading support in LLVM. It cleans up the intrinsic definitions and generally smooths the process for more complicated intrinsic writing. It will be used by the upcoming atomic intrinsics as well as vector and float intrinsics in the future.
This also changes the syntax for llvm.bswap, llvm.part.set, llvm.part.select, and llvm.ct* intrinsics. They are automatically upgraded by both the LLVM ASM reader and the bitcode reader. The test cases have been updated, with special tests added to ensure the automatic upgrading is supported.

llvm-svn: 40807
2007-08-04 01:51:18 +00:00
Chris Lattner dc2cf228ce Replacing a cast with another one does not reduce the number of
casts in the input.

llvm-svn: 40741
2007-08-02 17:23:38 +00:00
Chris Lattner 222b214be7 Disable an xform that causes an infinite loop. This fixes PR1594
llvm-svn: 40739
2007-08-02 16:56:32 +00:00
Chris Lattner 2740694450 wrap some long lines. Major offenders that are left include
gvn, gvnpre, dse, and predsimplify.  To see these, use:

  make check-line-length

llvm-svn: 40738
2007-08-02 16:53:43 +00:00
Chris Lattner b0418fc607 Enhance instcombine to be more aggressive about folding casts of
operations of casts.  This implements InstCombine/zext-fold.ll

llvm-svn: 40726
2007-08-02 06:11:14 +00:00
David Greene 17a5dfe6f7 New CallInst interface to address GLIBCXX_DEBUG errors caused by
indexing an empty std::vector.

Updates to all clients.

llvm-svn: 40660
2007-08-01 03:43:44 +00:00
Lauro Ramos Venancio 549e775e67 Fix a bug in GetKnownAlignment of packed structs.
llvm-svn: 40649
2007-07-31 20:13:21 +00:00
Reid Spencer dff9d69cfb Fix a typo/thinko.
llvm-svn: 40599
2007-07-30 19:53:57 +00:00
Chris Lattner 4512cd2cab completely remove a transformation that is unsafe in the face of
undefs.

llvm-svn: 40439
2007-07-23 17:10:17 +00:00
Devang Patel 5e39293e62 Apply a temporary workaround to fix the llvm miscompilation
reported in PR 1556.

llvm-svn: 40133
2007-07-21 00:34:29 +00:00
Chris Lattner d82e4a19cc this xform is already done by the constant folder.
llvm-svn: 40124
2007-07-20 22:06:41 +00:00
Dan Gohman e31a61eeca Optimize alignment of loads and stores.
llvm-svn: 40102
2007-07-20 16:34:21 +00:00
Dan Gohman 06c60b6032 Fix comments about vectors to use the current wording.
llvm-svn: 39921
2007-07-16 14:29:03 +00:00
Chris Lattner 640fd5124d Repair a regression in Transforms/InstCombine/mul.ll that Reid noticed.
llvm-svn: 39896
2007-07-16 04:15:34 +00:00