Commit Graph

378 Commits

Author SHA1 Message Date
Sebastian Redl 31ad754c96 Instead of storing an ASTContext* in FunctionProtoTypes with computed noexcept specifiers, unique FunctionProtoTypes with a ContextualFoldingSet, as suggested by John McCall.
llvm-svn: 127568
2011-03-13 17:09:40 +00:00
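A minimal sketch of the uniquing idea behind this commit, using a plain std::map in place of LLVM's ContextualFoldingSet; the names (Context, ProtoType, TypeUniquer) are hypothetical stand-ins, not Clang's classes:

#include <map>
#include <memory>
#include <utility>
#include <vector>

// Hypothetical stand-ins for the real AST classes.
struct Context {};                                   // plays the role of ASTContext
struct ProtoType { std::vector<int> params; bool isNoexcept; };

// A "profile" captures everything that makes a node unique; equal profiles
// mean the same type, so callers get the existing object back instead of a copy.
using Profile = std::pair<std::vector<int>, bool>;

class TypeUniquer {
  std::map<Profile, std::unique_ptr<ProtoType>> pool;
public:
  // In the real thing the context participates in profiling nodes whose
  // identity depends on it (e.g. computed noexcept specifiers).
  ProtoType *get(Context &ctx, std::vector<int> params, bool isNoexcept) {
    (void)ctx;
    Profile key{params, isNoexcept};
    auto it = pool.find(key);
    if (it != pool.end())
      return it->second.get();                       // already uniqued
    auto node = std::unique_ptr<ProtoType>(new ProtoType{std::move(params), isNoexcept});
    ProtoType *raw = node.get();
    pool.emplace(std::move(key), std::move(node));
    return raw;
  }
};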
Sebastian Redl fa453cfdc3 Propagate the new exception information to FunctionProtoType.
Change the interface to expose the new information and deal with the enormous fallout.
Introduce the new ExceptionSpecificationType value EST_DynamicNone to more easily deal with empty throw specifications.
Update the tests for noexcept and fix the various bugs uncovered, such as lack of tentative parsing support.

llvm-svn: 127537
2011-03-12 11:50:43 +00:00
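For reference, the exception-specification forms the commit distinguishes, as plain C++ declarations (build with a pre-C++17 standard, since dynamic exception specifications were removed later); EST_DynamicNone is the name introduced by the commit, the other mappings are approximate:

void none(int);                                 // no exception specification
void dynamicNone(int) throw();                  // empty throw spec -> EST_DynamicNone
void dynamicList(int) throw(int, char);         // non-empty dynamic spec
void basicNoexcept(int) noexcept;               // plain noexcept
void computed(int) noexcept(sizeof(int) == 4);  // noexcept with a computed operand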
John McCall 32ea969415 Use a slightly more semantic interface for emitting call arguments.
llvm-svn: 127494
2011-03-11 20:59:21 +00:00
NAKAMURA Takumi dd63436808 lib/CodeGen/CGCall.cpp: Don't invoke Builder.CreateBitCast() multiple times in the arguments to Builder.CreateMemCpy(); otherwise we would see a side-effect incompatibility between gcc and clang.
llvm-svn: 127405
2011-03-10 14:02:21 +00:00
John McCall a738c25f5e Use the "undergoes default argument promotion" bit on parameters to
simplify the logic of initializing function parameters so that we don't need
both a variable declaration and a type in FunctionArgList.  This also means
that we need to propagate the CGFunctionInfo down in a lot of places rather
than recalculating it from the FAL.  There's more we can do to eliminate
redundancy here, and I've left FIXMEs behind to do it.

llvm-svn: 127314
2011-03-09 04:27:21 +00:00
Devang Patel 68a1525290 Encode argument numbering in debug info so that the code generator can emit them in order.
This fixes a few blocks.exp regressions.

llvm-svn: 126960
2011-03-03 20:13:15 +00:00
Tilmann Scheller 99cc30c371 Revert "Add CC_Win64ThisCall and set it in the necessary places."
This reverts commit 126863.

llvm-svn: 126886
2011-03-02 21:36:49 +00:00
Devang Patel bd6f7f9770 revert r126858.
llvm-svn: 126874
2011-03-02 20:31:22 +00:00
Tilmann Scheller 454464b491 Add CC_Win64ThisCall and set it in the necessary places.
llvm-svn: 126863
2011-03-02 19:36:23 +00:00
Devang Patel 31e5fb52d1 Encode argument numbering in debug info so that the code generator can emit them in order.
This fixes a few blocks.exp regressions.

Reapply r126795 with a fix (one character change) for gdb testsuite regressions.

llvm-svn: 126858
2011-03-02 19:11:22 +00:00
Devang Patel a54696de8a Revert r126794.
llvm-svn: 126848
2011-03-02 17:54:58 +00:00
Devang Patel 3bc2dedb40 Encode argument numbering in debug info so that the code generator can emit them in order.
This fixes a few blocks.exp regressions.

llvm-svn: 126795
2011-03-01 22:59:40 +00:00
Fariborz Jahanian cf7f66f16f objc IRGen for NeXT runtime message API.
The prototype for objc_msgSend() is technically variadic - 
`id objc_msgSend(id, SEL, ...)`. 
But all method calls should use a prototype that matches the method, 
not the prototype for objc_msgSend() itself.
// rdar://9048030

llvm-svn: 126754
2011-03-01 17:28:13 +00:00
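To make the point concrete: the documented way to call objc_msgSend directly is through a function-pointer type matching the method being invoked, not the variadic declaration. A Darwin-only sketch; the setCount: selector is hypothetical:

// Compile as C++ on Darwin and link the Objective-C runtime (-lobjc).
#include <objc/message.h>   // objc_msgSend
#include <objc/runtime.h>   // sel_registerName, id, SEL

// Hypothetical method: -(void)setCount:(int)n
using SetCountFn = void (*)(id, SEL, int);

void callSetCount(id receiver, int n) {
  SEL sel = sel_registerName("setCount:");
  // Cast the dispatch entry point to the method's real signature so the call
  // site matches the method, which is what the commit makes IRGen do.
  reinterpret_cast<SetCountFn>(objc_msgSend)(receiver, sel, n);
}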
Devang Patel 25468059e5 Simplify test to check an aggregate argument that has a non-trivial constructor or destructor.
This patch rewrites r125142.

llvm-svn: 125632
2011-02-16 01:11:51 +00:00
Daniel Dunbar cb2b3d051d Fix family-friendly-o, tsk tsk.
llvm-svn: 125293
2011-02-10 18:10:07 +00:00
Daniel Dunbar 0bb0331d95 Driver/Frontend: Wire up -mregparm=.
llvm-svn: 125201
2011-02-09 17:54:19 +00:00
Devang Patel 14524e0f24 If an aggregate argument is passed indirectly because it has a non-trivial
destructor or copy constructor, then let debug info know about it.

Radar 8945514.

llvm-svn: 125142
2011-02-09 00:37:30 +00:00
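The shape of type this commit (and the x86-64 ABI change further down, r82050) is concerned with, as a minimal sketch:

// Because the copy constructor and destructor are user-provided, this class is
// not trivially copyable, so the Itanium C++ ABI passes it indirectly: the
// caller builds a temporary and hands the callee a pointer to it.
struct Tracked {
  Tracked();
  Tracked(const Tracked &other);   // non-trivial copy constructor
  ~Tracked();                      // non-trivial destructor
  int payload;
};

int consume(Tracked t);            // 't' arrives by hidden reference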
John McCall ad7c5c1657 Reorganize CodeGen{Function,Module} to eliminate the unfortunate
Block{Function,Module} base class.  Minor other refactorings.

Fixed a few address-space bugs while I was there.

llvm-svn: 125085
2011-02-08 08:22:06 +00:00
Ken Dyck 705ba07ef0 Replace calls to getTypeSize() and getTypeAlign() with their 'InChars'
counterparts where char units are needed.

llvm-svn: 123805
2011-01-19 01:58:38 +00:00
Benjamin Kramer acc6b4e2fd Simplify mem{cpy, move, set} creation with IRBuilder.
llvm-svn: 122634
2010-12-30 00:13:21 +00:00
Michael J. Spencer f5a1fbcdf3 Fix Whitespace.
llvm-svn: 116798
2010-10-19 06:39:39 +00:00
Daniel Dunbar 7b7c2937ef IRgen/ABI: Add support for realigning structures which are passed by indirect
reference.

llvm-svn: 114114
2010-09-16 20:42:02 +00:00
Dawn Perchik 335e16bad4 Add semantic support for the Pascal calling convention via
"__attribute((pascal))" or "__pascal" (and "_pascal" under
-fborland-extensions).  Support still needs to be added to llvm.

llvm-svn: 112939
2010-09-03 01:29:35 +00:00
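A hedged usage sketch of the attribute spelling this commit adds; the feature-test guard keeps the declaration compiling on toolchains or targets without the convention:

#ifndef __has_attribute
#define __has_attribute(x) 0        // compilers without the macro: assume unsupported
#endif

#if __has_attribute(pascal)
// Request the Pascal calling convention (meaningful on 32-bit x86 targets).
void draw_box(int x, int y, int w, int h) __attribute__((pascal));
#else
void draw_box(int x, int y, int w, int h);   // default convention elsewhere
#endif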
John McCall 0d635f53a8 Re-commit r112916 with an additional fix for the self-host failures.
I've audited the remaining getFunctionInfo call sites.

llvm-svn: 112936
2010-09-03 01:26:39 +00:00
John McCall c32f94b4ce Revert r112916, it's breaking selfhost pretty badly.
llvm-svn: 112925
2010-09-03 00:40:45 +00:00
John McCall 12d3891a27 It's not safe to use the generic CXXMethodDecl overload of CGT::getFunctionInfo
to set up a destructor call, because ABIs can tweak these conventions.
Fixes rdar://problem/8386802.

llvm-svn: 112916
2010-09-03 00:01:57 +00:00
John McCall 5d865c3292 Teach IR generation to return 'this' from constructors and destructors
under the ARM ABI.

llvm-svn: 112588
2010-08-31 07:33:07 +00:00
Daniel Dunbar 2e442a00b3 IRgen: Switch more MakeAddr() users to MakeAddrLValue; this time for calls which were previously not computing the qualifier list. In most cases, I don't think it matters, but I believe this is conservatively more correct / consistent.
llvm-svn: 111717
2010-08-21 03:15:20 +00:00
Daniel Dunbar 0381634a61 IRgen: Change Emit{Load,Store}OfScalar to take a required Alignment argument and
update callers as best I can.
 - This is a work in progress, our alignment handling is very horrible / sketchy -- I am just aiming for monotonic improvement.
 - Serious review appreciated.

llvm-svn: 111707
2010-08-21 02:24:36 +00:00
Chris Lattner 8a2f3c778e fix PR5179 and correctly fix PR5831 to not miscompile.
The X86-64 ABI code didn't handle the case when a struct
would get classified and turn up as "NoClass INTEGER" for
example.  This is perfectly possible when the first slot
is all padding (e.g. due to empty base classes).  In this
situation, the first 8-byte doesn't take a register at all,
only the second 8-byte does.

This patch fixes the problem by enhancing the x86-64 ABI code to allow
and handle this case, reverts the broken fix for PR5831,
and enhances the target independent stuff to be able to 
handle an argument value in registers being accessed at an
offset from the memory value.

This is the last x86-64 calling convention related miscompile
that I'm aware of.

llvm-svn: 109848
2010-07-30 04:02:24 +00:00
Chris Lattner 2cdfda44a1 fix a builder, why didn't clang++ catch this?
llvm-svn: 109735
2010-07-29 06:44:09 +00:00
Chris Lattner fe34c1d53e Kill off the 'coerce' ABI passing form. Now 'direct' and 'extend' always
have a "coerce to" type which often matches the default lowering of Clang
type to LLVM IR type, but the coerce case can be handled by making them
not be the same.

This simplifies things and fixes issues where X86-64 abi lowering would 
return coerce after making preferred types exactly match up.  This caused
us to compile:

typedef float v4f32 __attribute__((__vector_size__(16)));
v4f32 foo(v4f32 X) {
  return X+X;
}

into this code at -O0:

define <4 x float> @foo(<4 x float> %X.coerce) nounwind {
entry:
  %retval = alloca <4 x float>, align 16          ; <<4 x float>*> [#uses=2]
  %coerce = alloca <4 x float>, align 16          ; <<4 x float>*> [#uses=2]
  %X.addr = alloca <4 x float>, align 16          ; <<4 x float>*> [#uses=3]
  store <4 x float> %X.coerce, <4 x float>* %coerce
  %X = load <4 x float>* %coerce                  ; <<4 x float>> [#uses=1]
  store <4 x float> %X, <4 x float>* %X.addr
  %tmp = load <4 x float>* %X.addr                ; <<4 x float>> [#uses=1]
  %tmp1 = load <4 x float>* %X.addr               ; <<4 x float>> [#uses=1]
  %add = fadd <4 x float> %tmp, %tmp1             ; <<4 x float>> [#uses=1]
  store <4 x float> %add, <4 x float>* %retval
  %0 = load <4 x float>* %retval                  ; <<4 x float>> [#uses=1]
  ret <4 x float> %0
}

Now we get:

define <4 x float> @foo(<4 x float> %X) nounwind {
entry:
  %X.addr = alloca <4 x float>, align 16          ; <<4 x float>*> [#uses=3]
  store <4 x float> %X, <4 x float>* %X.addr
  %tmp = load <4 x float>* %X.addr                ; <<4 x float>> [#uses=1]
  %tmp1 = load <4 x float>* %X.addr               ; <<4 x float>> [#uses=1]
  %add = fadd <4 x float> %tmp, %tmp1             ; <<4 x float>> [#uses=1]
  ret <4 x float> %add
}

This implements rdar://8248065

llvm-svn: 109733
2010-07-29 06:26:06 +00:00
Chris Lattner 22326a10a7 dissolve some more complexity: make the x86-64 abi lowering code
compute its own preferred types instead of having CGT compute
them and then pass them (circuitously) down into ABIInfo.

llvm-svn: 109726
2010-07-29 02:31:05 +00:00
Chris Lattner 458b2aaee0 now that ABIInfo depends on CGT, it has trivial access to such
things as TargetData, ASTContext, LLVMContext etc.  Stop passing
them through so many APIs.

llvm-svn: 109723
2010-07-29 02:16:43 +00:00
Chris Lattner 4b8585ef6a tidy up
llvm-svn: 109699
2010-07-28 23:46:15 +00:00
Chris Lattner ff941a666a some cleanups and get alignments correct for various coerce cases.
llvm-svn: 109607
2010-07-28 18:24:28 +00:00
Douglas Gregor 5cc2c8b9c3 Vectors are not integer types, so the type system should not classify
them as such. Type::is(Signed|Unsigned|)IntegerType() now return false
for vector types, and new functions
has(Signed|Unsigned|)IntegerRepresentation() cover integer types and
vector-of-integer types. This fixes a bunch of latent bugs.

Patch from Anton Yartsev!

llvm-svn: 109229
2010-07-23 15:58:24 +00:00
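For concreteness, the kind of type the commit is talking about: a vector of integers has an integer representation but is not itself an integer type.

// GCC/Clang vector extension: four 32-bit ints in one 128-bit value.
typedef int v4si __attribute__((__vector_size__(16)));

// Element-wise arithmetic works, but per the commit Type::isIntegerType() is
// now false for v4si while hasIntegerRepresentation() is true.
v4si add(v4si a, v4si b) { return a + b; }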
Devang Patel 65497583b5 Fix regression caused by r108911.
Do not override known debug loc with unknown debug loc.
This is tested by sections.exp in gdb testsuite.

llvm-svn: 109022
2010-07-21 18:08:50 +00:00
Dan Gohman 481e40c681 Use getDebugLoc and setDebugLoc instead of getDbgMetadata and setDbgMetadata,
avoiding MDNode overhead.

llvm-svn: 108911
2010-07-20 20:13:52 +00:00
Daniel Dunbar 6f2e839693 CodeGen/ObjC/NeXT: Fix Obj-C message send to match llvm-gcc when choosing
whether to use objc_msgSend_fpret; the choice is target dependent, not Obj-C ABI
dependent.
 - <rdar://problem/8139758> arm objc _objc_msgSend_fpret bug

llvm-svn: 108379
2010-07-14 23:39:36 +00:00
John McCall be349def4b Mark calls to 'throw()' functions as nounwind, and mark the functions nounwind
as well.

llvm-svn: 107858
2010-07-08 06:48:12 +00:00
John McCall bd30929e4d Validated by nightly-test runs on x86 and x86-64 darwin, including after
self-host.  Hopefully these results hold up on different platforms.  

I tried to keep the GNU ObjC runtime happy, but it's hard for me to test.
Reimplement how clang generates IR for exceptions.  Instead of creating new
invoke destinations which sequentially chain to the previous destination,
push a more semantic representation of *why* we need the cleanup/catch/filter
behavior, then collect that information into a single landing pad upon request.

Also reorganizes how normal cleanups (i.e. cleanups triggered by non-exceptional
control flow) are generated, since it's actually fairly closely tied in with
the former.  Remove the need to track which cleanup scope a block is associated
with.

Document a lot of previously poorly-understood (by me, at least) behavior.

The new framework implements the Horrible Hack (tm), which requires every
landing pad to have a catch-all so that inlining will work.  Clang no longer
requires the Horrible Hack just to make exceptions flow correctly within
a function, however.  The HH is an unfortunate requirement of LLVM's EH IR.

llvm-svn: 107631
2010-07-06 01:34:17 +00:00
Chris Lattner ceddafb846 Generate fewer first class aggregate values for other
coerce cases (e.g. {double,int}) which avoids fastisel
bailing out at -O0.

llvm-svn: 107628
2010-07-05 20:41:41 +00:00
Chris Lattner c401de9998 in the "coerce" case, the ABI handling code ends up making the
alloca for an argument.  Make sure the argument gets the proper
decl alignment, which may be different than the type alignment.

This fixes PR7567

llvm-svn: 107627
2010-07-05 20:21:00 +00:00
Chris Lattner 0e7929f30c fix rdar://8147692 - yet another crash due to my abi work.
llvm-svn: 107387
2010-07-01 06:20:47 +00:00
Daniel Dunbar 6696e22cc9 IRgen: Fix debug info regression in r106970; when we eliminate the return value
store make sure to move the debug metadata from the store (which is actual
'return' statement location) to the return instruction (which otherwise would
have the function end location as its debug info).
 - Tested by gdb test suite.

llvm-svn: 107322
2010-06-30 21:27:58 +00:00
Chris Lattner 5c740f1523 Reapply:
r107173, "fix PR7519: after thrashing around and remembering how all this stuff"
r107216, "fix PR7523, which was caused by the ABI code calling ConvertType instead"

This includes a fix to make ConvertTypeForMem handle the "recursive" case, and call
it as such when lowering function types which have an indirect result.

llvm-svn: 107310
2010-06-30 19:14:05 +00:00
Daniel Dunbar e422266926 Revert r107173, "fix PR7519: after thrashing around and remembering how all this stuff", it broke bootstrap.
llvm-svn: 107232
2010-06-30 00:22:35 +00:00
Daniel Dunbar 8386469d7d Revert r107216, "fix PR7523, which was caused by the ABI code calling ConvertType instead", it is part of a bootstrap-breaking sequence.
llvm-svn: 107231
2010-06-30 00:22:30 +00:00
Chris Lattner 466b1419c6 fix PR7523, which was caused by the ABI code calling ConvertType instead
of ConvertTypeRecursive when it needed to in a few cases, causing pointer
types to get resolved at the wrong time.

llvm-svn: 107216
2010-06-29 22:39:04 +00:00
Chris Lattner 34d6281ae5 relax the CGFunctionInfo::CGFunctionInfo ctor to allow any sequence
of CanQualTypes to be passed in.

llvm-svn: 107176
2010-06-29 18:13:52 +00:00
Chris Lattner ab1e65e2ea fix PR7519: after thrashing around and remembering how all this stuff
works, the fix is quite simple: just make sure to call ConvertTypeRecursive
when the function type being lowered is in the midst of ConvertType.

llvm-svn: 107173
2010-06-29 17:56:33 +00:00
Chris Lattner e70a007b36 minor cleanups.
llvm-svn: 107150
2010-06-29 16:40:28 +00:00
Chris Lattner 1d7c9f7f4b Pass the LLVM IR version of argument types down into computeInfo.
It is somewhat annoying to do this at this level, but it avoids
having ABIInfo depend on CodeGenTypes for a hint.

Nothing is using this yet, so no functionality change.

llvm-svn: 107111
2010-06-29 01:08:48 +00:00
Chris Lattner 9e748e9d6e add IR names to coerced arguments.
llvm-svn: 107105
2010-06-29 00:14:52 +00:00
Chris Lattner 15ec361bd6 make the argument passing stuff in the FCA case smarter still, by
avoiding making the FCA at all when the types exactly line up.  For
example, before we made:

%struct.DeclGroup = type { i64, i64 }

define i64 @_Z3foo9DeclGroup(i64, i64) nounwind {
entry:
  %D = alloca %struct.DeclGroup, align 8          ; <%struct.DeclGroup*> [#uses=3]
  %2 = insertvalue %struct.DeclGroup undef, i64 %0, 0 ; <%struct.DeclGroup> [#uses=1]
  %3 = insertvalue %struct.DeclGroup %2, i64 %1, 1 ; <%struct.DeclGroup> [#uses=1]
  store %struct.DeclGroup %3, %struct.DeclGroup* %D
  %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
  %tmp1 = load i64* %tmp                          ; <i64> [#uses=1]
  %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i64*> [#uses=1]
  %tmp3 = load i64* %tmp2                         ; <i64> [#uses=1]
  %add = add nsw i64 %tmp1, %tmp3                 ; <i64> [#uses=1]
  ret i64 %add
}

... which has the pointless insertvalue, which fastisel hates, now we
make:

%struct.DeclGroup = type { i64, i64 }

define i64 @_Z3foo9DeclGroup(i64, i64) nounwind {
entry:
  %D = alloca %struct.DeclGroup, align 8          ; <%struct.DeclGroup*> [#uses=4]
  %2 = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
  store i64 %0, i64* %2
  %3 = getelementptr %struct.DeclGroup* %D, i32 0, i32 1 ; <i64*> [#uses=1]
  store i64 %1, i64* %3
  %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
  %tmp1 = load i64* %tmp                          ; <i64> [#uses=1]
  %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i64*> [#uses=1]
  %tmp3 = load i64* %tmp2                         ; <i64> [#uses=1]
  %add = add nsw i64 %tmp1, %tmp3                 ; <i64> [#uses=1]
  ret i64 %add
}

This only kicks in when x86-64 abi lowering decides it likes us.

llvm-svn: 107104
2010-06-29 00:06:42 +00:00
Chris Lattner 3dd716c3c3 Change CGCall to handle the "coerce" case where the coerce-to type
is a FCA to pass each of the elements as individual scalars.  This
produces code fast isel is less likely to reject and is easier on
the optimizers.

For example, before we would compile:
struct DeclGroup { long NumDecls; char * Y; };
char * foo(DeclGroup D) {
  return D.NumDecls+D.Y;
}

to:
%struct.DeclGroup = type { i64, i64 }

define i64 @_Z3foo9DeclGroup(%struct.DeclGroup) nounwind {
entry:
  %D = alloca %struct.DeclGroup, align 8          ; <%struct.DeclGroup*> [#uses=3]
  store %struct.DeclGroup %0, %struct.DeclGroup* %D, align 1
  %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
  %tmp1 = load i64* %tmp                          ; <i64> [#uses=1]
  %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i64*> [#uses=1]
  %tmp3 = load i64* %tmp2                         ; <i64> [#uses=1]
  %add = add nsw i64 %tmp1, %tmp3                 ; <i64> [#uses=1]
  ret i64 %add
}

Now we get:

%0 = type { i64, i64 }
%struct.DeclGroup = type { i64, i8* }

define i8* @_Z3foo9DeclGroup(i64, i64) nounwind {
entry:
  %D = alloca %struct.DeclGroup, align 8          ; <%struct.DeclGroup*> [#uses=3]
  %2 = insertvalue %0 undef, i64 %0, 0            ; <%0> [#uses=1]
  %3 = insertvalue %0 %2, i64 %1, 1               ; <%0> [#uses=1]
  %4 = bitcast %struct.DeclGroup* %D to %0*       ; <%0*> [#uses=1]
  store %0 %3, %0* %4, align 1
  %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
  %tmp1 = load i64* %tmp                          ; <i64> [#uses=1]
  %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
  %tmp3 = load i8** %tmp2                         ; <i8*> [#uses=1]
  %add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1 ; <i8*> [#uses=1]
  ret i8* %add.ptr
}

Elimination of the FCA inside the function is still-to-come.

llvm-svn: 107099
2010-06-28 23:44:11 +00:00
Chris Lattner d200eda487 make the trivial forms of CreateCoerced{Load|Store} trivial.
llvm-svn: 107091
2010-06-28 22:51:39 +00:00
Chris Lattner 5e016ae983 finally get around to doing a significant cleanup to irgen:
have CGF create and make accessible standard int32,int64 and 
intptr types.  This fixes a ton of 80 column violations 
introduced by LLVMContextification and cleans up stuff a lot.

llvm-svn: 106977
2010-06-27 07:15:29 +00:00
Chris Lattner 055097f024 If coercing something from int or pointer type to int or pointer type
(potentially after unwrapping it from a struct) do it without going through
memory.  We now compile:

struct DeclGroup {
  unsigned NumDecls;
};

int foo(DeclGroup D) {
  return D.NumDecls;
}

into:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %coerce.val.ii = trunc i64 %0 to i32            ; <i32> [#uses=1]
  store i32 %coerce.val.ii, i32* %coerce.dive
  %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp1 = load i32* %tmp                          ; <i32> [#uses=1]
  ret i32 %tmp1
}

instead of:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %tmp = alloca i64                               ; <i64*> [#uses=2]
  %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  store i64 %0, i64* %tmp
  %1 = bitcast i64* %tmp to i32*                  ; <i32*> [#uses=1]
  %2 = load i32* %1, align 1                      ; <i32> [#uses=1]
  store i32 %2, i32* %coerce.dive
  %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp2 = load i32* %tmp1                         ; <i32> [#uses=1]
  ret i32 %tmp2
}

... which is quite a bit less terrifying.

llvm-svn: 106975
2010-06-27 06:26:04 +00:00
Chris Lattner 895c52ba8b Same patch as the previous on the store side. Before we compiled this:
struct DeclGroup {
  unsigned NumDecls;
};

int foo(DeclGroup D) {
  return D.NumDecls;
}

to:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %tmp = alloca i64                               ; <i64*> [#uses=2]
  store i64 %0, i64* %tmp
  %1 = bitcast i64* %tmp to %struct.DeclGroup*    ; <%struct.DeclGroup*> [#uses=1]
  %2 = load %struct.DeclGroup* %1, align 1        ; <%struct.DeclGroup> [#uses=1]
  store %struct.DeclGroup %2, %struct.DeclGroup* %D
  %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp2 = load i32* %tmp1                         ; <i32> [#uses=1]
  ret i32 %tmp2
}

which caused fast isel bailouts due to the FCA load/store of %2.  Now
we generate this just blissful code:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %tmp = alloca i64                               ; <i64*> [#uses=2]
  %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  store i64 %0, i64* %tmp
  %1 = bitcast i64* %tmp to i32*                  ; <i32*> [#uses=1]
  %2 = load i32* %1, align 1                      ; <i32> [#uses=1]
  store i32 %2, i32* %coerce.dive
  %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp2 = load i32* %tmp1                         ; <i32> [#uses=1]
  ret i32 %tmp2
}

This avoids fastisel bailing out and is groundwork for future patch.
This reduces bailouts on CGStmt.ll to 911 from 935.

llvm-svn: 106974
2010-06-27 06:04:18 +00:00
Chris Lattner 1cd6698a7c improve CreateCoercedLoad a bit to generate slightly less awful
IR when handling X86-64 by-value struct stuff.  For example, we
used to compile this:

struct DeclGroup {
  unsigned NumDecls;
};

int foo(DeclGroup D);
void bar(DeclGroup *D) {
  foo(*D);
}

into:

define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) ssp nounwind {
entry:
  %D.addr = alloca %struct.DeclGroup*, align 8    ; <%struct.DeclGroup**> [#uses=2]
  %agg.tmp = alloca %struct.DeclGroup, align 4    ; <%struct.DeclGroup*> [#uses=2]
  %tmp3 = alloca i64                              ; <i64*> [#uses=2]
  store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
  %tmp = load %struct.DeclGroup** %D.addr         ; <%struct.DeclGroup*> [#uses=1]
  %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
  %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*  ; <i8*> [#uses=1]
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
  %0 = bitcast i64* %tmp3 to %struct.DeclGroup*   ; <%struct.DeclGroup*> [#uses=1]
  %1 = load %struct.DeclGroup* %agg.tmp           ; <%struct.DeclGroup> [#uses=1]
  store %struct.DeclGroup %1, %struct.DeclGroup* %0, align 1
  %2 = load i64* %tmp3                            ; <i64> [#uses=1]
  call void @_Z3foo9DeclGroup(i64 %2)
  ret void
}

which would cause fastisel to bail out due to the first class aggregate load %1.  With
this patch we now compile it into the (still awful):

define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) nounwind ssp noredzone {
entry:
  %D.addr = alloca %struct.DeclGroup*, align 8    ; <%struct.DeclGroup**> [#uses=2]
  %agg.tmp = alloca %struct.DeclGroup, align 4    ; <%struct.DeclGroup*> [#uses=2]
  %tmp3 = alloca i64                              ; <i64*> [#uses=2]
  store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
  %tmp = load %struct.DeclGroup** %D.addr         ; <%struct.DeclGroup*> [#uses=1]
  %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
  %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*  ; <i8*> [#uses=1]
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
  %coerce.dive = getelementptr %struct.DeclGroup* %agg.tmp, i32 0, i32 0 ; <i32*> [#uses=1]
  %0 = bitcast i64* %tmp3 to i32*                 ; <i32*> [#uses=1]
  %1 = load i32* %coerce.dive                     ; <i32> [#uses=1]
  store i32 %1, i32* %0, align 1
  %2 = load i64* %tmp3                            ; <i64> [#uses=1]
  %call = call i32 @_Z3foo9DeclGroup(i64 %2) noredzone ; <i32> [#uses=0]
  ret void
}

which doesn't bail out.  On CGStmt.ll, this reduces fastisel bail outs from 958 to 935,
and is the precursor of better things to come.

llvm-svn: 106973
2010-06-27 05:56:15 +00:00
Chris Lattner 3fcc790cd8 Change IR generation for return (in the simple case) to avoid doing silly
load/store nonsense in the epilog.  For example, for:

int foo(int X) {
  int A[100];
  return A[X];
}

we used to generate:

  %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
  %tmp1 = load i32* %arrayidx                     ; <i32> [#uses=1]
  store i32 %tmp1, i32* %retval
  %0 = load i32* %retval                          ; <i32> [#uses=1]
  ret i32 %0
}

which codegen'd to this code:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	subq	$408, %rsp              ## imm = 0x198
	movl	%edi, 400(%rsp)
	movl	400(%rsp), %edi
	movslq	%edi, %rax
	movl	(%rsp,%rax,4), %edi
	movl	%edi, 404(%rsp)
	movl	404(%rsp), %eax
	addq	$408, %rsp              ## imm = 0x198
	ret

Now we generate:

  %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
  %tmp1 = load i32* %arrayidx                     ; <i32> [#uses=1]
  ret i32 %tmp1
}

and:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	subq	$408, %rsp              ## imm = 0x198
	movl	%edi, 404(%rsp)
	movl	404(%rsp), %edi
	movslq	%edi, %rax
	movl	(%rsp,%rax,4), %eax
	addq	$408, %rsp              ## imm = 0x198
	ret

This actually does matter, cutting out 2000 lines of IR from CGStmt.ll 
for example.

Another interesting effect is that altivec.h functions which are dead
now get dce'd by the inliner.  Hence all the changes to 
builtins-ppc-altivec.c to ensure the calls aren't dead.

llvm-svn: 106970
2010-06-27 01:06:27 +00:00
Chris Lattner 726b3d09cd reduce indentation
llvm-svn: 106967
2010-06-26 23:13:19 +00:00
Anders Carlsson 04775f8413 Change EmitReferenceBindingToExpr to take a decl instead of a boolean.
llvm-svn: 106949
2010-06-26 16:35:32 +00:00
Chandler Carruth 8509824cdb Move CodeGenOptions.h *back* into Frontend. This should have been done when the
dependency edge was reversed such that CodeGen depends on Frontend.

llvm-svn: 106065
2010-06-15 23:19:56 +00:00
Eli Friedman c8731be34d Fix for PR7040: Don't try to compute the LLVM type for a function where it
isn't possible to compute.

This patch is mostly refactoring; the key change is the addition of the code
starting with the comment, "Check whether the function has a computable LLVM
signature."  The solution here is essentially the same as the way the
vtable code handles such functions.

llvm-svn: 105151
2010-05-30 06:03:20 +00:00
John McCall 23f6626262 Correctly pass aggregates by reference when emitting thunks.
llvm-svn: 104778
2010-05-26 22:34:26 +00:00
Douglas Gregor a941dcae16 Add support for Microsoft's __thiscall, from Steven Watanabe!
llvm-svn: 104026
2010-05-18 16:57:00 +00:00
David Chisnall ff5f88c38e As per Chris' request, return the Instruction from EmitCall and add the metadata in the caller.
llvm-svn: 102862
2010-05-02 13:41:58 +00:00
David Chisnall 9eecafa480 Tweaked EmitCall() to permit the caller to provide some metadata to attach to the call site.
Used this in CGObjCGNU to attach metadata about message sends to permit speculative inlining.

llvm-svn: 102833
2010-05-01 11:15:56 +00:00
Chris Lattner 9cffdf1331 don't slap noalias attribute on stret result arguments.
This mirrors Dan's patch for llvm-gcc in r97989, and
fixes the miscompilation in PR6525.  There is some contention
over whether this is the right thing to do, but it is the
conservative answer and demonstrably fixes a miscompilation.

llvm-svn: 101877
2010-04-20 05:44:43 +00:00
Anders Carlsson 11e5140db9 Vtable -> VTable renames across the board.
llvm-svn: 101666
2010-04-17 20:15:18 +00:00
Rafael Espindola 49b85ab6e6 Remember the regparm attribute in FunctionType::ExtInfo.
Fixes PR3782.

llvm-svn: 99940
2010-03-30 22:15:11 +00:00
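A small usage sketch for the attribute this commit records in FunctionType::ExtInfo; the guard is there because regparm only applies to 32-bit x86:

#ifndef __has_attribute
#define __has_attribute(x) 0
#endif

#if defined(__i386__) && __has_attribute(regparm)
// Ask for the first two integer arguments to be passed in registers.
int sum2(int a, int b) __attribute__((regparm(2)));
#else
int sum2(int a, int b);              // other targets: default argument passing
#endif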
Rafael Espindola c50c27cca8 the big refactoring bits of PR3782.
This introduces FunctionType::ExtInfo to hold the calling convention and the
noreturn attribute. The next patch will extend it to include the regparm
attribute and fix the bug.

llvm-svn: 99920
2010-03-30 20:24:48 +00:00
John McCall 39ec71f2e9 When mapping restrict to noalias, look for 'restrict' on the parameter variable
instead of the canonical parameter type (which has correctly dropped all such
direct qualifiers).  Fixes PR6695.

llvm-svn: 99688
2010-03-27 00:47:27 +00:00
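The IR-level effect both restrict-related commits here describe: a restrict-qualified pointer parameter becomes a noalias argument. A minimal sketch using the C++ spelling of the qualifier:

// dst and src are declared non-aliasing, so the generated IR marks both
// parameters 'noalias', letting the optimizer assume the stores through dst
// never clobber the loads through src.
void scale(float *__restrict__ dst, const float *__restrict__ src, int n) {
  for (int i = 0; i < n; ++i)
    dst[i] = 2.0f * src[i];
}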
John McCall 2da83a3a38 Use the power of types to track down another canonicalization bug in
the ABI-computation interface.  Fixes <rdar://problem/7691046>.

llvm-svn: 97197
2010-02-26 00:48:12 +00:00
John McCall 8ee376f08a Canonicalize parameter and return types before computing ABI info. Eliminates
a common source of oddities and, in theory, removes some redundant ABI
computations.  Also fixes a miscompile I introduced yesterday by refactoring
some code and causing a slightly different code path to be taken that
didn't perform *parameter* type canonicalization, just normal type
canonicalization;  this in turn caused a bit of ABI code to misfire because
it was looking for 'double' or 'float' but received 'const float'.

llvm-svn: 97030
2010-02-24 07:14:12 +00:00
John McCall f8ff7b9fd1 Perform two more constructor/destructor code-size optimizations:
1) emit base destructors as aliases to their unique base class destructors
under some careful conditions.  This is enabled for the same targets that can
support complete-to-base aliases, i.e. not darwin.

2) Emit non-variadic complete constructors for classes with no virtual bases
as calls to the base constructor.  This is enabled on all targets and in
theory can trigger in situations that the alias optimization can't (mostly
involving virtual bases, mostly not yet supported).

These are bundled together because I didn't think it worthwhile to split them,
not because they really need to be.

llvm-svn: 96842
2010-02-23 00:48:20 +00:00
Daniel Dunbar a7566f163a IRgen: Add CreateMemTemp, for creating a temporary memory object for a particular type, and flood fill. - CreateMemTemp sets the alignment on the alloca correctly, which fixes a great many places in IRgen where we were doing the wrong thing.
- This fixes many many more places than the test case, but my feeling is we need to audit alignment systematically so I'm not inclined to try hard to test the individual fixes in this patch. If this bothers you, patches welcome!

PR6240.

llvm-svn: 95648
2010-02-09 02:48:28 +00:00
Anders Carlsson 6710c5351e Use the correct function info for constructors when applying function attributes. Fixes PR6245.
llvm-svn: 95474
2010-02-06 02:44:09 +00:00
John McCall ab26cfa58d Standardize the parsing of function type attributes in a way that
follows (as conservatively as possible) gcc's current behavior:  attributes
written on return types that don't apply there are applied to the function
instead, etc.  Only parse CC attributes as type attributes, not as decl attributes;
don't accept noreturn as a decl attribute on ValueDecls, either (it still
needs to apply to other decls, like blocks).  Consistently consume CC/noreturn
information throughout codegen;  enforce this by removing their default values
in CodeGenTypes::getFunctionInfo().

llvm-svn: 95436
2010-02-05 21:31:56 +00:00
Anders Carlsson 3b227bd629 Revert the new reference binding code; I came up with a way simpler solution for the reference binding bug that is preventing self-hosting.
llvm-svn: 95223
2010-02-03 16:38:03 +00:00
Anders Carlsson ab0ddb57b1 Start creating CXXBindReferenceExpr nodes when binding complex types to references.
llvm-svn: 94964
2010-01-31 18:34:51 +00:00
Anders Carlsson 5d8645b150 Simplify EmitLValueForField - we can get whether the field is part of a union or not from the FieldDecl (through its DeclContext).
llvm-svn: 94798
2010-01-29 05:05:36 +00:00
Anders Carlsson 1749083e2e Fill in the return value slot in CGExprAgg::VisitCallExpr. This takes us halfway towards fixing PR5824.
llvm-svn: 92142
2009-12-24 20:40:36 +00:00
Anders Carlsson 61a401caec Pass ReturnValueSlot to EmitCall. No functionality change yet.
llvm-svn: 92138
2009-12-24 19:25:24 +00:00
Nuno Lopes 7251327d75 implement PR5274: mark 'restrict' parameters as noalias
llvm-svn: 90778
2009-12-07 18:30:06 +00:00
Eli Friedman 4b1942cb8b Make functions returning a struct indirectly evaluate the returned struct
directly into the sret pointer. This is an optimization in C, but is required
for correctness in C++ for classes with a non-trivial copy constructor.

llvm-svn: 90526
2009-12-04 02:43:40 +00:00
Anders Carlsson 82ba57c8f0 Add VTT parameter to base ctors/dtors with virtual bases. (They aren't used yet).
llvm-svn: 89835
2009-11-25 03:15:49 +00:00
Anders Carlsson 6445773279 It is common for vtables to contain pointers to functions that have either incomplete return types or incomplete argument types.
Handle this by returning the llvm::OpaqueType for those cases, which CodeGenModule::GetOrCreateLLVMFunction knows about, and treats as being an "incomplete function".

llvm-svn: 89736
2009-11-24 05:08:52 +00:00
Anders Carlsson 0d82fa66a5 The ssp and sspreq function attributes should only be applied to function definitions, not declarations or calls.
llvm-svn: 88915
2009-11-16 16:56:03 +00:00
Chandler Carruth bc55fe26c6 Move CompileOptions -> CodeGenOptions, and sink it into the CodeGen library.
This resolves the layering violation where CodeGen depended on Frontend.

llvm-svn: 86998
2009-11-12 17:24:48 +00:00
Daniel Dunbar c369d73405 Set OptimizeForSize LLVM function attribute with -Os.
llvm-svn: 85278
2009-10-27 19:48:08 +00:00
Daniel Dunbar b5aacc282c Twinify CodeGenFunction::CreateTempAlloca
llvm-svn: 84456
2009-10-19 01:21:05 +00:00
Benjamin Kramer dde0fee82e Use new predicates for some type equality tests.
llvm-svn: 83303
2009-10-05 13:47:21 +00:00
Anders Carlsson 2ee3c011d9 Implement code generation of member function pointer calls. Fixes PR5121.
llvm-svn: 83271
2009-10-03 19:43:08 +00:00
John McCall 8ccfcb51ee Refactor the representation of qualifiers to bring ExtQualType out of the
Type hierarchy.  Demote 'volatile' to extended-qualifier status.  Audit our
use of qualifiers and fix a few places that weren't dealing with qualifiers
quite right;  many more remain.

llvm-svn: 82705
2009-09-24 19:53:00 +00:00
John McCall 9dd450bb78 Change all the Type::getAsFoo() methods to specializations of Type::getAs().
Several of the existing methods were identical to their respective
specializations, and so have been removed entirely.  Several more 'leaf'
optimizations were introduced.

The getAsFoo() methods which imposed extra conditions, like
getAsObjCInterfacePointerType(), have been left in place.

llvm-svn: 82501
2009-09-21 23:43:11 +00:00
Anders Carlsson 20759ad54c x86-64 ABI: If a type is a C++ record with either a non-trivial destructor or a non-trivial copy constructor, it should be passed in a pointer. Daniel, plz review.
llvm-svn: 82050
2009-09-16 15:53:40 +00:00
Daniel Dunbar 0ef3479cb7 Change CodeGenModule::ConstructTypeAttributes to return the calling convention
to use, and allow the ABI implementation to override the calling convention.

llvm-svn: 81593
2009-09-12 00:59:20 +00:00
Daniel Dunbar bbaeca4fef Set the calling convention based on the CGFunctionInfo.
llvm-svn: 81582
2009-09-11 22:25:00 +00:00
Daniel Dunbar 7feafc70d9 Add CallingConvention argument to CGFunctionInfo.
- Currently unused.

llvm-svn: 81581
2009-09-11 22:24:53 +00:00
Mike Stump 11289f4280 Remove tabs, and whitespace cleanups.
llvm-svn: 81346
2009-09-09 15:08:12 +00:00
Owen Anderson 41a750271b Update for LLVM API change.
llvm-svn: 78946
2009-08-13 21:57:51 +00:00
Ryan Flynn 1f1fdc070e map the previously ignored __attribute((malloc)) to the noalias attribute on the llvm function's return
llvm-svn: 78541
2009-08-09 20:07:29 +00:00
Anders Carlsson b8be93fc92 Add support for global initializers.
llvm-svn: 78515
2009-08-08 23:24:23 +00:00
Daniel Dunbar 4074b93184 Use Twine instead of utostr
llvm-svn: 77848
2009-08-02 01:43:57 +00:00
Owen Anderson 0b75f23b94 Update for LLVM API change.
llvm-svn: 77722
2009-07-31 20:28:54 +00:00
John McCall caa1945306 Allow functions to be marked "implicit return zero" and so mark main().
Codegen by initializing the return value with its LLVM type's null value.

llvm-svn: 77288
2009-07-28 01:00:58 +00:00
Owen Anderson 170229f68d Update for LLVM API change, and contextify a bunch of related stuff.
llvm-svn: 75705
2009-07-14 23:10:40 +00:00
Argyrios Kyrtzidis cfbfe78e9e De-ASTContext-ify DeclContext.
Remove ASTContext parameter from DeclContext's methods. This change cascaded down to other Decl's methods and changes to call sites started "escalating".
Timings using pre-tokenized "cocoa.h" showed only a ~1% increase in run time between before and after this commit.

llvm-svn: 74506
2009-06-30 02:36:12 +00:00
Argyrios Kyrtzidis b4b64ca752 Remove the ASTContext parameter from the attribute-related methods of Decl.
The implementations of these methods can use Decl::getASTContext() to get the ASTContext.

This commit touches a lot of files since call sites for these methods are everywhere.
I used pre-tokenized "carbon.h" and "cocoa.h" headers to do some timings, and there was no real time difference between before the commit and after it.

llvm-svn: 74501
2009-06-30 02:34:44 +00:00
Bill Wendling 1835107ed0 Make the StackProtector bitfield use enums instead of obscure numbers.
llvm-svn: 74414
2009-06-28 23:01:01 +00:00
Bill Wendling d63bbadbef Add stack protector support to clang. This generates the 'ssp' and 'sspreq'
function attributes. There are predefined macros that are defined when stack
protectors are used: __SSP__=1 with -fstack-protector and __SSP_ALL__=2 with
-fstack-protector-all.

llvm-svn: 74405
2009-06-28 07:36:13 +00:00
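The macros this commit predefines can be used as ordinary feature checks; a sketch grounded in the values quoted in the message:

// __SSP__ is defined to 1 under -fstack-protector and __SSP_ALL__ to 2 under
// -fstack-protector-all, per the commit.
#if defined(__SSP_ALL__)
constexpr int kStackProtectorLevel = 2;
#elif defined(__SSP__)
constexpr int kStackProtectorLevel = 1;
#else
constexpr int kStackProtectorLevel = 0;
#endif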
Chris Lattner 4c8da96ea9 fix PR4423.
llvm-svn: 73938
2009-06-23 01:38:41 +00:00
Douglas Gregor 78bd61f661 Move the static DeclAttrs map into ASTContext. Fixes <rdar://problem/6983177>.
llvm-svn: 73702
2009-06-18 16:11:24 +00:00
Chris Lattner 4ca97c3b9e Fix PR4372, another case where non-prototyped functions can prevent
always_inline from working.

llvm-svn: 73273
2009-06-13 00:26:38 +00:00
Anton Korobeynikov 18adbf5f07 Add new ABIArgInfo kind: Extend. This allows a target to implement its own argument
zero/sign-extension logic (consider, e.g., a target that has only 64-bit registers and thus
i32's should be extended as well).

llvm-svn: 72998
2009-06-06 09:36:29 +00:00
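What the Extend kind looks like from the source level, as a sketch: on a target whose ABI widens sub-register integers, a narrow parameter and return value pick up signext (or zeroext for unsigned types) in the IR, which is what ABIArgInfo::Extend expresses.

short halve(short x) { return static_cast<short>(x / 2); }  // 'signext' on x and on the return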
Anton Korobeynikov 244360d62b Factor out TargetABIInfo stuff into separate file. No functionality change.
llvm-svn: 72962
2009-06-05 22:08:42 +00:00
Devang Patel 9e24386c65 Set function Attribute::NoImplicitFloat appropriately.
llvm-svn: 72961
2009-06-05 22:05:48 +00:00
Daniel Dunbar 4be99ff767 ABI handling: Fix nasty thinko where IRgen could generate an out-of-bounds read
when generating a coercion for ABI handling purposes.
 - This may only manifest itself when building at -O0, but the practical effect
   is that other arguments may get clobbered.

 - <rdar://problem/6930451> [irgen] ABI coercion clobbers other arguments

llvm-svn: 72932
2009-06-05 07:58:54 +00:00
Devang Patel 6e467b1a46 Set function attribute llvm::Attribute::NoRedZone appropriately.
llvm-svn: 72902
2009-06-04 23:32:02 +00:00
Daniel Dunbar 1518b64ddc When trying to pass an argument on the stack, assume LLVM will do the right
thing for non-aggregate types.
 - Otherwise we unnecessarily pin values to the stack and currently end up
   triggering a backend bug in one case.

 - This loose cooperation with LLVM to implement the ABI is pretty ugly.

 - <rdar://problem/6918722> [irgen] clang miscompile of many pointer varargs on
   x86-64

llvm-svn: 72419
2009-05-26 16:37:37 +00:00
Daniel Dunbar 499f3f9c5d x86_64 ABI: Account for sret parameters consuming an integer register.
- PR4242.

llvm-svn: 72268
2009-05-22 17:33:44 +00:00
Torok Edwin 5b34933b90 Set correct calling convention even if there is a bitcast in the way.
This attempts to fix PR4239.

llvm-svn: 72251
2009-05-22 07:25:06 +00:00
Jay Foad 7d0479f2c2 Use v.data() instead of &v[0] when SmallVector v might be empty.
llvm-svn: 72210
2009-05-21 09:52:38 +00:00
Anders Carlsson 6f5a015bd9 Add EmitReferenceBindingToExpr. Have EmitCallArg use it for now. Doesn't support anything but at least we don't crash ;)
llvm-svn: 72147
2009-05-20 00:24:07 +00:00
Anders Carlsson 8370964257 Pass the destination QualType to EmitStoreOfScalar. No functionality change.
llvm-svn: 72118
2009-05-19 18:50:41 +00:00
Eli Friedman cec35d7e6a Clean up some unnecessary includes.
llvm-svn: 72101
2009-05-19 04:30:57 +00:00
Mike Stump 18bb9284ff Reflow some comments.
llvm-svn: 71937
2009-05-16 07:57:57 +00:00
Daniel Dunbar ffdb8439d7 ABI handling: Fix invalid assertion, it is possible for a valid
coercion to be specified which truncates padding bits. It would be
nice to still have the assert, but we don't have any API call for the
unpadded size of a type yet.

llvm-svn: 71695
2009-05-13 18:54:26 +00:00
Chris Lattner bea5b622be static methods don't get this pointers.
llvm-svn: 71586
2009-05-12 20:27:19 +00:00
Daniel Dunbar bbfd054746 Darwin x86-32 ABI: Now that structure passing is farther along, we
don't need special treatment for unions.

llvm-svn: 71559
2009-05-12 17:00:20 +00:00
Daniel Dunbar 203e2e8dd8 x86-64 ABI: clang incorrectly passes union { long double, float } in
a register.
 - Merge algorithm was returning MEMORY as it should.

llvm-svn: 71556
2009-05-12 15:22:40 +00:00
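The case from this commit as compilable code; on x86-64 the x87 class contributed by the long double member forces the whole union into the MEMORY class, so it is passed on the stack rather than in a register:

union LongDoubleOrFloat {
  long double ld;   // classifies as X87/X87UP
  float f;          // would be SSE on its own
};

// Passing u by value: after the fix it is classified MEMORY and goes on the
// stack, matching the merge algorithm's answer described in the commit.
long double pick(union LongDoubleOrFloat u) { return u.ld; }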
Daniel Dunbar 097353cbb5 Darwin x86-32: Multi-dimensional arrays were not handled correctly,
spotted by Eli!

llvm-svn: 71490
2009-05-11 23:01:34 +00:00
Daniel Dunbar 2ce6b3f91c Darwin x86_32: Treat records with unnamed bit-fields as "empty".
llvm-svn: 71461
2009-05-11 18:58:49 +00:00
Duncan Sands c76fe8b611 Correct for renaming PaddedSize -> AllocSize in
LLVM.

llvm-svn: 71350
2009-05-09 07:08:47 +00:00
Daniel Dunbar b997f3bcc3 x86_64 ABI: Ignore padding bit-fields during classification.
- {return-types,single-args}-{32,64} pass the first 1k ABI tests with
   bit-fields enabled.

llvm-svn: 71272
2009-05-08 22:26:44 +00:00
Daniel Dunbar 4752783057 Darwin x86_32: When coercing a "single element" structure, make sure
to use a wide enough type. This might be wider than the "single
element"'s type in the presence of padding bit-fields.
 - Darwin x86_32 now passes the first 1k ABI tests with bit-field
   generation enabled.

llvm-svn: 71270
2009-05-08 21:30:11 +00:00
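A shape that exercises both of the Darwin x86-32 fixes in this run of commits (field names are hypothetical): the only named element is a float, but a padding bit-field makes the record 8 bytes wide, so the coerced type has to be wide enough to carry all of it.

struct PaddedSingle {
  float value;   // the "single element"
  int : 32;      // unnamed padding bit-field, ignored while hunting for it
};

struct PaddedSingle make(float f) {
  struct PaddedSingle p = {f};
  return p;      // coerced through an 8-byte slot, not a 4-byte one
}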
Daniel Dunbar fdda3501a0 Darwin x86_32: Ignore padding bit-fields when looking for "single
element" structures.

llvm-svn: 71266
2009-05-08 21:04:47 +00:00
Daniel Dunbar 4861346c44 Darwin x86_32: Improve bit-field handling for returning records.
- This turns out to be a no-op now that most of the handling for
   everything else is in place.

llvm-svn: 71261
2009-05-08 20:55:49 +00:00
Daniel Dunbar 85f4028f2e Darwin x86_32: Ignore arrays of empty structures inside records.
- This eliminates 5/1000 failures on return-types-32, on the current
   ABITest config.

llvm-svn: 71250
2009-05-08 20:21:04 +00:00
Chris Lattner b822fad20f fix i128 to return in 2 64-bit registers (rax/rdx on x86-64)
llvm-svn: 70481
2009-04-30 06:22:07 +00:00
Chris Lattner f122cef4df initial support for __[u]int128_t, which should be basically
compatible with VC++ and GCC.  The codegen/mangling angle hasn't
been fully ironed out yet.  Note that we accept int128_t even in
32-bit mode, unlike gcc.

llvm-svn: 70464
2009-04-30 02:43:43 +00:00
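A minimal use of the new built-in type; on 64-bit x86 the 128-bit result comes back in the rax/rdx pair, per the i128 commit just above.

#ifdef __SIZEOF_INT128__   // only on targets that provide the 128-bit type
// 64-by-64 multiply with a full 128-bit product, using the extension.
unsigned __int128 mul_wide(unsigned long long a, unsigned long long b) {
  return static_cast<unsigned __int128>(a) * b;
}
#endif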
Daniel Dunbar 01910a50c4 x86-32 ABI: Fix crash on return of structure with flexible array
member.

Also, spell bitfield more consistently as bit-field.

llvm-svn: 70220
2009-04-27 18:31:32 +00:00
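The construct that was crashing IRgen, sketched: a flexible array member is a C99 feature that clang and gcc also accept in C++ as an extension, and returning such a struct by value copies only the fixed-size part.

struct Packet {
  int length;
  char payload[];   // flexible array member: contributes no storage of its own
};

// Returning by value used to crash the x86-32 ABI code; only 'length' is copied.
struct Packet header_of(const struct Packet *p) { return *p; }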
Eli Friedman 1c4a175aef Remove getIntegerConstantExprValue in favor of using EvaluateAsInt.
llvm-svn: 70145
2009-04-26 19:19:15 +00:00
Sanjiv Gupta 4c5dfd3c45 Pass and return aggregate types directly to function calls.
llvm-svn: 69668
2009-04-21 06:01:16 +00:00
Anders Carlsson 603d6aff8b Make CodeGenFunction::EmitCallArgs a template function that takes a generic "Type Info" parameter. The type info parameter knows how to iterate over its arguments.
llvm-svn: 69469
2009-04-18 20:20:22 +00:00
Daniel Dunbar 4184ac847f Update to use hasAttr() instead of getAttr().
- No functionality change.

llvm-svn: 68987
2009-04-13 21:08:27 +00:00