Commit Graph

231 Commits

Author SHA1 Message Date
Duncan Sands 44b8721de8 Executive summary: getTypeSize -> getTypeStoreSize / getABITypeSize.
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment.  This gives a primitive type for
which getTypeSize differed from getABITypeSize.  For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).

This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition).  Instead there is:

(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type.  For a primitive type, this is the minimum number
of bits.  For an i36 this is 36 bits.  For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.

(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it).  For an
i36 this is 40 bits, for an x86 long double it is 80 bits.  This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes).  There doesn't seem to be anything
corresponding to this in gcc.

(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment.  For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS.  This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes).  This is
TYPE_SIZE in gcc.
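
To make the rounding rule concrete, here is a small standalone sketch (not
part of this patch, and using an illustrative helper rather than the real
TargetData code) that reproduces the i36 and x86 long double numbers quoted
above:

   #include <cassert>
   #include <cstdint>

   // ABI size in bits = store size rounded up to a multiple of the ABI
   // alignment (both expressed in bits here).
   static uint64_t abiSizeInBits(uint64_t StoreBits, uint64_t AlignBits) {
     return (StoreBits + AlignBits - 1) / AlignBits * AlignBits;
   }

   int main() {
     assert(abiSizeInBits(40, 64) == 64);    // i36, assuming 64-bit alignment
     assert(abiSizeInBits(80, 32) == 96);    // long double, 32-bit alignment
     assert(abiSizeInBits(80, 128) == 128);  // long double, 128-bit alignment
     return 0;
   }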

Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize.  This means that the size of an array
is the length times getABITypeSize.  It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize.  Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case.  So allocas and mallocs should use getABITypeSize.  Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.
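
As a concrete illustration of that spacing rule (a sketch with assumed
numbers, not code from this patch), take an i36 whose ABI size works out
to 8 bytes:

   #include <cassert>
   #include <cstdint>

   int main() {
     const uint64_t ABISize = 8;   // assumed getABITypeSize(i36) in bytes
     const uint64_t NumElts = 10;

     uint64_t ArrayBytes  = NumElts * ABISize;  // size of [10 x i36]
     uint64_t GEPOffset   = 3 * ABISize;        // byte offset of element 3
     uint64_t AllocaBytes = NumElts * ABISize;  // bytes for alloca i36, i32 10

     assert(ArrayBytes == 80 && GEPOffset == 24 && AllocaBytes == 80);
     return 0;
   }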

Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.

In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/CodeGen, lib/Target (the hard
cases).  I will get around to auditing these too at some point,
but I could do with some help.

Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize.  I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers.  If someone wants to pack these types more
tightly they can always use a packed struct.

llvm-svn: 43620
2007-11-01 20:53:16 +00:00
Dale Johannesen 4d4e77af8e Rewrite sqrt and powi to use anyfloat. By popular demand.
llvm-svn: 42537
2007-10-02 17:43:59 +00:00
Dale Johannesen 25a00a63eb Add sqrt and powi intrinsics for long double.
llvm-svn: 42423
2007-09-28 01:08:20 +00:00
Dale Johannesen bed9dc423c Next round of APFloat changes.
Use APFloat in UpgradeParser and AsmParser.
Change all references to ConstantFP to use the
APFloat interface rather than double.  Remove
the ConstantFP double interfaces.
Use APFloat functions for constant folding arithmetic
and comparisons.
(There are still way too many places where APFloat is
just a wrapper around host float/double, but we're
getting there.)
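
For readers unfamiliar with the interface, here is a minimal sketch of
folding an addition through APFloat instead of a host double (written
against the current APFloat API; it is not code from this commit):

   #include "llvm/ADT/APFloat.h"
   using namespace llvm;

   // Fold 1.5 + 2.25 entirely in APFloat, then compare the result.
   bool foldsToExpected() {
     APFloat LHS(1.5), RHS(2.25);
     LHS.add(RHS, APFloat::rmNearestTiesToEven);   // LHS now holds 3.75
     return LHS.compare(APFloat(3.75)) == APFloat::cmpEqual;
   }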

llvm-svn: 41747
2007-09-06 18:13:44 +00:00
Chris Lattner 460e34afed constant fold ptrtoint(inttoptr) with target data when available. This allows
us to fold the entry block of PR1602 to false instead of:

br i1 icmp eq (i32 and (i32 ptrtoint (void (%struct.S*)* inttoptr
      (i64 1 to void (%struct.S*)*) to i32), i32 1), i32 0),
      label %cond_next, label %cond_true
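
Once target data pins down the pointer width the fold is plain integer
arithmetic; a standalone sketch of the reasoning (illustrative only, not
the ConstantFolding code):

   #include <cassert>
   #include <cstdint>

   int main() {
     // inttoptr (i64 1 to ...) followed by ptrtoint ... to i32 reduces to
     // the integer 1 truncated to 32 bits once the pointer width is known.
     uint64_t Src   = 1;
     uint32_t AsI32 = static_cast<uint32_t>(Src);
     // The branch condition above: icmp eq (and ..., i32 1), i32 0
     bool Cond = (AsI32 & 1u) == 0u;
     assert(!Cond);   // so the entry block folds to false
     return 0;
   }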

llvm-svn: 41023
2007-08-11 23:49:01 +00:00
Chris Lattner 7574ef3ac4 Handle functions with no name better.
llvm-svn: 40926
2007-08-08 16:07:23 +00:00
Chris Lattner 785f9986bd significantly speed up constant folding of calls (and thus all clients that use
ConstantFoldInstruction on calls) by avoiding Value::getName().  getName() constructs
and returns an std::string, which does heap allocation stuff.  This slightly speeds up
instcombine.
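
The cost being avoided is ordinary C++ overhead; a hedged illustration of
why returning std::string by value on every query is expensive (hypothetical
types and names, not the actual LLVM change):

   #include <string>

   struct NamedValue {
     std::string Name;
     // Returns a fresh std::string each call, which means a heap
     // allocation per query -- the pattern being avoided.
     std::string getName() const { return Name; }
     // Comparing in place constructs no temporary string.
     bool nameIs(const char *N) const { return Name == N; }
   };

   bool isSqrtCall(const NamedValue &Callee) {
     return Callee.nameIs("sqrt");   // no per-call allocation
   }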

llvm-svn: 40924
2007-08-08 06:55:43 +00:00
Chandler Carruth 7132e00de7 This is the patch to provide clean intrinsic function overloading support in LLVM. It cleans up the intrinsic definitions and generally smooths the process of writing more complicated intrinsics. It will be used by the upcoming atomic intrinsics as well as vector and float intrinsics in the future.
This also changes the syntax for llvm.bswap, llvm.part.set, llvm.part.select, and llvm.ct* intrinsics. They are automatically upgraded by both the LLVM ASM reader and the bitcode reader. The test cases have been updated, with special tests added to ensure the automatic upgrading is supported.

llvm-svn: 40807
2007-08-04 01:51:18 +00:00
Dan Gohman 72efc04d8e Use ConstantFoldFP for folding all unary floating-point operations which may
have an error, and refactor out the code for binary operators into
ConstantFoldBinaryFP and use it for all binary floating-point operations
which may have an error. These functions still rely exclusively on errno
to detect errors though.
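
A minimal sketch of the errno-based check described here (illustrative
helper, not the actual ConstantFoldFP/ConstantFoldBinaryFP signatures):

   #include <cerrno>
   #include <cmath>
   #include <optional>

   // Only fold when the host libm call does not report an error via errno.
   static std::optional<double> foldUnaryFP(double (*Fn)(double), double V) {
     errno = 0;
     double R = Fn(V);
     if (errno != 0)
       return std::nullopt;   // leave the original call unfolded
     return R;
   }

   // e.g. foldUnaryFP(std::sqrt, -1.0) sets EDOM on typical hosts, so no fold.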

llvm-svn: 39923
2007-07-16 15:26:22 +00:00
Reid Spencer add6123405 The bit counting intrinsics return i32 not the operand type. This fixes
last night's regression in SingleSource/UnitTests/2005-05-11-Popcount-ffs-fls

llvm-svn: 35556
2007-04-01 18:42:20 +00:00
Reid Spencer 6bba6c8143 For PR1297:
Support overloaded intrinsics bswap, ctpop, cttz, ctlz.

llvm-svn: 35547
2007-04-01 07:35:23 +00:00
Jeff Cohen b622c11f77 Unbreak VC++ build.
llvm-svn: 34917
2007-03-05 00:00:42 +00:00
Reid Spencer d84d35ba70 For PR1195:
Rename PackedType -> VectorType, ConstantPacked -> ConstantVector, and
PackedTyID -> VectorTyID. No functional changes.

llvm-svn: 34293
2007-02-15 02:26:10 +00:00
Chris Lattner b402e74fcd completely eliminate a temporary vector
llvm-svn: 34162
2007-02-10 20:33:15 +00:00
Chris Lattner c473d8e431 Privatize StructLayout::MemberOffsets, adding an accessor
llvm-svn: 34156
2007-02-10 19:55:17 +00:00
Reid Spencer 2341c22ec7 Changes to support making the shift instructions be true BinaryOperators.
This feature is needed in order to support shifts of more than 255 bits
on large integer types.  This changes the syntax for llvm assembly to
make shl, ashr and lshr instructions look like a binary operator:
   shl i32 %X, 1
instead of
   shl i32 %X, i8 1
Additionally, this should help a few passes perform additional optimizations.
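
To see why the old i8 operand is limiting, here is a sketch of the wide-shift
case using LLVM's APInt (an illustration of the motivation, not the parser
change itself):

   #include "llvm/ADT/APInt.h"
   using namespace llvm;

   // shl i512 1, 300 -- a shift amount that cannot be encoded in an i8.
   APInt wideShift() {
     APInt X(512, 1);
     return X.shl(300);
   }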

llvm-svn: 33776
2007-02-02 02:16:23 +00:00
Chris Lattner 5f8809078f Fix a minor bug in my patch yesterday that broke ConstProp/bswap.ll
llvm-svn: 33704
2007-01-31 18:04:55 +00:00
Chris Lattner 64a2909f08 eliminate a temporary vector
llvm-svn: 33694
2007-01-31 04:42:05 +00:00
Chris Lattner 44d68b9af7 Move some symbolic constant folding code out of instcombine into a place
it can be used by multiple clients.  This specifically allows the inliner
to constant fold symbolically.

llvm-svn: 33687
2007-01-31 00:51:48 +00:00
Chris Lattner 2ae054adb0 move a bunch of constant folding code from Transforms/Utils/Local.cpp into
lib/Analysis/ConstantFolding.cpp.

llvm-svn: 33679
2007-01-30 23:45:45 +00:00
Chris Lattner 8dbea5454d adjust to constant folding API changes.
llvm-svn: 33673
2007-01-30 23:15:43 +00:00
Chris Lattner 26933ddb10 Constant fold llvm.powi.*. This speeds up tramp3d-v4 by 9.5%
llvm-svn: 33229
2007-01-15 06:27:37 +00:00
Chris Lattner d0c91e1c2e remove llvm.isunordered
llvm-svn: 32991
2007-01-07 08:19:47 +00:00
Reid Spencer c635f47d9a For PR950:
This patch replaces signed integer types with signless ones:
1. [US]Byte -> Int8
2. [U]Short -> Int16
3. [U]Int   -> Int32
4. [U]Long  -> Int64.
5. Removal of isSigned, isUnsigned, getSignedVersion, getUnsignedVersion
   and other methods related to signedness. In a few places this warranted
   identifying the signedness information from other sources.

llvm-svn: 32785
2006-12-31 05:48:39 +00:00
Jeff Cohen cc08c83186 Unbreak VC++ build.
llvm-svn: 32113
2006-12-02 02:22:01 +00:00
Jim Laskey 61feeb90f9 Remove redundant <cmath>.
llvm-svn: 31561
2006-11-08 19:16:44 +00:00
Reid Spencer e0fc4dfc22 For PR950:
This patch implements the first increment for the Signless Types feature.
All changes pertain to removing the ConstantSInt and ConstantUInt classes
in favor of just using ConstantInt.

llvm-svn: 31063
2006-10-20 07:07:24 +00:00
Chris Lattner 7a708989df Constant fold sqrtf
llvm-svn: 28853
2006-06-17 18:17:52 +00:00
Reid Spencer b4f9a6f110 For PR411:
This patch is an incremental step towards supporting a flat symbol table.
It de-overloads the intrinsic functions by providing type-specific intrinsics
and arranging for automatic upgrading from the old overloaded name to
the new non-overloaded name. Specifically:
  llvm.isunordered -> llvm.isunordered.f32, llvm.isunordered.f64
  llvm.sqrt -> llvm.sqrt.f32, llvm.sqrt.f64
  llvm.ctpop -> llvm.ctpop.i8, llvm.ctpop.i16, llvm.ctpop.i32, llvm.ctpop.i64
  llvm.ctlz -> llvm.ctlz.i8, llvm.ctlz.i16, llvm.ctlz.i32, llvm.ctlz.i64
  llvm.cttz -> llvm.cttz.i8, llvm.cttz.i16, llvm.cttz.i32, llvm.cttz.i64
New code should not use the overloaded intrinsic names. Warnings will be
emitted if they are used.

llvm-svn: 25366
2006-01-16 21:12:35 +00:00
Nate Begeman 82049eba2c Add bswap intrinsics as documented in the Language Reference
llvm-svn: 25309
2006-01-14 01:25:24 +00:00
John Criswell 970af11af8 Move some constant folding functions into LLVMAnalysis since they are used
by Analysis and Transformation passes.

llvm-svn: 24038
2005-10-27 16:00:10 +00:00