Commit Graph

14335 Commits

Author SHA1 Message Date
Argyrios Kyrtzidis b5288de67c Refactor PCH reading/writing of template arguments passed to expressions.
llvm-svn: 106997
2010-06-28 09:31:48 +00:00
Argyrios Kyrtzidis ddf5f211d0 Fix PCH emitting/reading for template arguments that contain expressions.
llvm-svn: 106996
2010-06-28 09:31:42 +00:00
Argyrios Kyrtzidis 0b0369a6b3 Fix various bugs in recent commits for C++ PCH.
llvm-svn: 106995
2010-06-28 09:31:34 +00:00
Chandler Carruth 2d69ec7a72 Partial fix for PR7267 based on comments by John McCall on an earlier patch.
This is more targeted, as it simply provides toggle actions for the parser to
turn access checking on and off. We then use these to suppress access checking
only while we parse the template-id (including the scope specifier) of an explicit
instantiation and explicit specialization of a class template. The
specialization behavior is an extension, as it seems likely a defect that the
standard did not exempt them as it does explicit instantiations.

This allows the very common practice of specializing trait classes to work for
private, internal types. This doesn't address instantiating or specializing
function templates, although those apparently already partially work.

The naming and style for the Action layer isn't my favorite; comments and
suggestions would be appreciated there.

llvm-svn: 106993
2010-06-28 08:39:25 +00:00
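Illustration of the pattern this unblocks (example reconstructed, not from the patch): a trait class is explicitly specialized for a private nested type, which previously tripped access checking while the template-id was parsed.

class Widget {
  class Impl;            // private, internal type
};

template <typename T>
struct IsSmall { static const bool value = false; };

// Explicit specialization naming the private type; access checking is
// now suspended while parsing 'IsSmall<Widget::Impl>'.
template <>
struct IsSmall<Widget::Impl> { static const bool value = true; };
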
Jordy Rose 61176897ba Pointer comparisons (and pointer-pointer subtraction). Basically filling in SimpleSValuator::EvalBinOpLL().
llvm-svn: 106992
2010-06-28 08:26:15 +00:00
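For example, code like the following (illustrative only) now gets meaningful symbolic values instead of an unknown result:

#include <cstddef>

void f(int *a, int *b) {
  bool same = (a == b);        // pointer comparison
  std::ptrdiff_t d = a - b;    // pointer-pointer subtraction
  (void)same; (void)d;
}
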
Chandler Carruth b6f991787b Suppress diagnosing access violations while looking up deallocation functions
much as we already do for allocation function lookup. Explicitly check access
for the function we actually select in one case that was previously missing
but was being caught behind the blanket diagnostics for all overload candidates.
This fixes PR7436.

llvm-svn: 106986
2010-06-28 00:30:51 +00:00
Rafael Espindola b1ef8ffb15 Use softfp for linux gnueabi, keep the warning for everything else.
llvm-svn: 106984
2010-06-27 18:29:21 +00:00
Anders Carlsson 3f48c603fb Correctly destroy reference temporaries with global storage. Remove ErrorUnsupported call when binding a global reference to a non-lvalue. Fixes PR7326.
llvm-svn: 106983
2010-06-27 17:52:15 +00:00
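The kind of code affected looks like this (minimal sketch, not from the commit): the temporary bound to a global reference must live for the program's duration and be destroyed at exit.

struct A { A() {} ~A() {} };
const A &global = A();   // temporary now registered for destruction at exit
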
Anders Carlsson 18c205ecdf Add a CreateReferenceTemporary that will do the right thing for variables with global storage.
llvm-svn: 106982
2010-06-27 17:23:46 +00:00
Anders Carlsson 2969c8c69d Simplify CodeGenFunction::EmitReferenceBindingToExpr as a first step towards fixing PR7326.
llvm-svn: 106981
2010-06-27 16:56:04 +00:00
Anders Carlsson ca68d357d4 Reduce indentation.
llvm-svn: 106980
2010-06-27 15:24:55 +00:00
Chris Lattner 818efb64a3 misc tidying
llvm-svn: 106978
2010-06-27 07:40:06 +00:00
Chris Lattner 5e016ae983 finally get around to doing a significant cleanup to irgen:
have CGF create and make accessible standard int32, int64 and 
intptr types.  This fixes a ton of 80-column violations 
introduced by LLVMContextification and cleans up stuff a lot.

llvm-svn: 106977
2010-06-27 07:15:29 +00:00
Chris Lattner e000907e13 tidy up OrderGlobalInits
llvm-svn: 106976
2010-06-27 06:32:58 +00:00
Chris Lattner 055097f024 If coercing something from int or pointer type to int or pointer type
(potentially after unwrapping it from a struct), do it without going through
memory.  We now compile:

struct DeclGroup {
  unsigned NumDecls;
};

int foo(DeclGroup D) {
  return D.NumDecls;
}

into:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %coerce.val.ii = trunc i64 %0 to i32            ; <i32> [#uses=1]
  store i32 %coerce.val.ii, i32* %coerce.dive
  %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp1 = load i32* %tmp                          ; <i32> [#uses=1]
  ret i32 %tmp1
}

instead of:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %tmp = alloca i64                               ; <i64*> [#uses=2]
  %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  store i64 %0, i64* %tmp
  %1 = bitcast i64* %tmp to i32*                  ; <i32*> [#uses=1]
  %2 = load i32* %1, align 1                      ; <i32> [#uses=1]
  store i32 %2, i32* %coerce.dive
  %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp2 = load i32* %tmp1                         ; <i32> [#uses=1]
  ret i32 %tmp2
}

... which is quite a bit less terrifying.

llvm-svn: 106975
2010-06-27 06:26:04 +00:00
Chris Lattner 895c52ba8b Same patch as the previous one, on the store side. Before we compiled this:
struct DeclGroup {
  unsigned NumDecls;
};

int foo(DeclGroup D) {
  return D.NumDecls;
}

to:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %tmp = alloca i64                               ; <i64*> [#uses=2]
  store i64 %0, i64* %tmp
  %1 = bitcast i64* %tmp to %struct.DeclGroup*    ; <%struct.DeclGroup*> [#uses=1]
  %2 = load %struct.DeclGroup* %1, align 1        ; <%struct.DeclGroup> [#uses=1]
  store %struct.DeclGroup %2, %struct.DeclGroup* %D
  %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp2 = load i32* %tmp1                         ; <i32> [#uses=1]
  ret i32 %tmp2
}

which caused fast isel bailouts due to the FCA load/store of %2.  Now
we generate just this blissful code:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %tmp = alloca i64                               ; <i64*> [#uses=2]
  %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  store i64 %0, i64* %tmp
  %1 = bitcast i64* %tmp to i32*                  ; <i32*> [#uses=1]
  %2 = load i32* %1, align 1                      ; <i32> [#uses=1]
  store i32 %2, i32* %coerce.dive
  %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp2 = load i32* %tmp1                         ; <i32> [#uses=1]
  ret i32 %tmp2
}

This avoids fastisel bailing out and is groundwork for a future patch.
This reduces bailouts on CGStmt.ll to 911 from 935.

llvm-svn: 106974
2010-06-27 06:04:18 +00:00
Chris Lattner 1cd6698a7c improve CreateCoercedLoad a bit to generate slightly less awful
IR when handling X86-64 by-value struct stuff.  For example, we
used to compile this:

struct DeclGroup {
  unsigned NumDecls;
};

int foo(DeclGroup D);
void bar(DeclGroup *D) {
  foo(*D);
}

into:

define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) ssp nounwind {
entry:
  %D.addr = alloca %struct.DeclGroup*, align 8    ; <%struct.DeclGroup**> [#uses=2]
  %agg.tmp = alloca %struct.DeclGroup, align 4    ; <%struct.DeclGroup*> [#uses=2]
  %tmp3 = alloca i64                              ; <i64*> [#uses=2]
  store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
  %tmp = load %struct.DeclGroup** %D.addr         ; <%struct.DeclGroup*> [#uses=1]
  %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
  %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*  ; <i8*> [#uses=1]
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
  %0 = bitcast i64* %tmp3 to %struct.DeclGroup*   ; <%struct.DeclGroup*> [#uses=1]
  %1 = load %struct.DeclGroup* %agg.tmp           ; <%struct.DeclGroup> [#uses=1]
  store %struct.DeclGroup %1, %struct.DeclGroup* %0, align 1
  %2 = load i64* %tmp3                            ; <i64> [#uses=1]
  call void @_Z3foo9DeclGroup(i64 %2)
  ret void
}

which would cause fastisel to bail out due to the first-class aggregate load %1.  With
this patch we now compile it into the (still awful):

define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) nounwind ssp noredzone {
entry:
  %D.addr = alloca %struct.DeclGroup*, align 8    ; <%struct.DeclGroup**> [#uses=2]
  %agg.tmp = alloca %struct.DeclGroup, align 4    ; <%struct.DeclGroup*> [#uses=2]
  %tmp3 = alloca i64                              ; <i64*> [#uses=2]
  store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
  %tmp = load %struct.DeclGroup** %D.addr         ; <%struct.DeclGroup*> [#uses=1]
  %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
  %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*  ; <i8*> [#uses=1]
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
  %coerce.dive = getelementptr %struct.DeclGroup* %agg.tmp, i32 0, i32 0 ; <i32*> [#uses=1]
  %0 = bitcast i64* %tmp3 to i32*                 ; <i32*> [#uses=1]
  %1 = load i32* %coerce.dive                     ; <i32> [#uses=1]
  store i32 %1, i32* %0, align 1
  %2 = load i64* %tmp3                            ; <i64> [#uses=1]
  %call = call i32 @_Z3foo9DeclGroup(i64 %2) noredzone ; <i32> [#uses=0]
  ret void
}

which doesn't bail out.  On CGStmt.ll, this reduces fastisel bailouts from 958 to 935,
and is the precursor of better things to come.

llvm-svn: 106973
2010-06-27 05:56:15 +00:00
Jordy Rose 7f8ea4d677 Implicitly compare symbolic expressions to zero when they're being used as constraints. Part of PR7491.
llvm-svn: 106972
2010-06-27 01:20:56 +00:00
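For instance (illustrative snippet), a bare symbolic expression used as a branch condition is now treated as an implicit comparison against zero:

void f(int x, int y) {
  if (x + y)        // constrained as (x + y) != 0 on the true branch
    ;               // ... and (x + y) == 0 on the false branch
}
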
Chris Lattner 3fcc790cd8 Change IR generation for return (in the simple case) to avoid doing silly
load/store nonsense in the epilog.  For example, for:

int foo(int X) {
  int A[100];
  return A[X];
}

we used to generate:

  %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
  %tmp1 = load i32* %arrayidx                     ; <i32> [#uses=1]
  store i32 %tmp1, i32* %retval
  %0 = load i32* %retval                          ; <i32> [#uses=1]
  ret i32 %0
}

which codegen'd to this code:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	subq	$408, %rsp              ## imm = 0x198
	movl	%edi, 400(%rsp)
	movl	400(%rsp), %edi
	movslq	%edi, %rax
	movl	(%rsp,%rax,4), %edi
	movl	%edi, 404(%rsp)
	movl	404(%rsp), %eax
	addq	$408, %rsp              ## imm = 0x198
	ret

Now we generate:

  %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
  %tmp1 = load i32* %arrayidx                     ; <i32> [#uses=1]
  ret i32 %tmp1
}

and:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	subq	$408, %rsp              ## imm = 0x198
	movl	%edi, 404(%rsp)
	movl	404(%rsp), %edi
	movslq	%edi, %rax
	movl	(%rsp,%rax,4), %eax
	addq	$408, %rsp              ## imm = 0x198
	ret

This actually does matter, cutting out 2000 lines of IR from CGStmt.ll 
for example.

Another interesting effect is that altivec.h functions which are dead
now get dce'd by the inliner.  Hence all the changes to 
builtins-ppc-altivec.c to ensure the calls aren't dead.

llvm-svn: 106970
2010-06-27 01:06:27 +00:00
Chris Lattner 726b3d09cd reduce indentation
llvm-svn: 106967
2010-06-26 23:13:19 +00:00
Chris Lattner 6c5abe88bf Implement rdar://7530813 - collapse multiple GEP instructions in IRgen
This avoids generating two gep's for common array operations.  Before
we would generate something like:

  %tmp = load i32* %X.addr                        ; <i32> [#uses=1]
  %arraydecay = getelementptr inbounds [100 x i32]* %A, i32 0, i32 0 ; <i32*> [#uses=1]
  %arrayidx = getelementptr inbounds i32* %arraydecay, i32 %tmp ; <i32*> [#uses=1]
  %tmp1 = load i32* %arrayidx                     ; <i32> [#uses=1]

Now we generate:

  %tmp = load i32* %X.addr                        ; <i32> [#uses=1]
  %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i32 %tmp ; <i32*> [#uses=1]
  %tmp1 = load i32* %arrayidx                     ; <i32> [#uses=1]

Less IR is better at -O0.

llvm-svn: 106966
2010-06-26 23:03:20 +00:00
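The IR above corresponds to a subscript-through-decayed-array pattern along these lines (source reconstructed for illustration):

int foo(int X) {
  int A[100];
  return A[X];    // one 'getelementptr inbounds' instead of two
}
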
Ted Kremenek f00eac5cff Allow '__extension__' to be analyzed in an lvalue context.
llvm-svn: 106964
2010-06-26 22:40:52 +00:00
Chris Lattner 57ce97151f minor cleanup: don't emit the base of an array subscript until after
we're done diddling around with the index stuff.  Use a cheaper type
comparison.

llvm-svn: 106963
2010-06-26 22:40:46 +00:00
Chris Lattner 431bef4409 fix inc/dec to honor -fwrapv and -ftrapv, implementing PR7426.
llvm-svn: 106962
2010-06-26 22:18:28 +00:00
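Concretely (illustrative example; flag behavior as described in the message):

int bump(int x) { return x++; }
// -fwrapv: the increment is emitted as a plain, wrapping add.
// -ftrapv: the increment goes through the overflow-checked add path.
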
Chris Lattner 05dc78c096 move scalar inc/dec codegen into ScalarExprEmitter instead
of being in CGF.  No functionality change.

llvm-svn: 106961
2010-06-26 22:09:34 +00:00
Chris Lattner fa20e95043 use more efficient type comparison predicates.
llvm-svn: 106958
2010-06-26 21:52:32 +00:00
Chris Lattner 0bf27620f0 Fix unary minus to trap on overflow with -ftrapv, refactoring binop
code so we can use it from VisitUnaryMinus.

llvm-svn: 106957
2010-06-26 21:48:21 +00:00
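A minimal example of the case this covers (illustrative):

int negate(int x) { return -x; }  // with -ftrapv, negating INT_MIN now
                                  // traps instead of silently wrapping
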
Chris Lattner 51924e517b Implement support for -fwrapv, rdar://7221421
As part of this, pull together trapv handling into the same enum.

This also adds support for NSW multiplies.

This also makes PCH disagreement on overflow behavior silent, since it
really doesn't matter except for warnings and codegen (no macros get 
defined etc).

llvm-svn: 106956
2010-06-26 21:25:03 +00:00
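A sketch of the -fwrapv semantics (example code illustrative):

int square(int x) { return x * x; }
// default: emitted as 'mul nsw' (signed overflow is undefined)
// -fwrapv: emitted as a plain 'mul' that wraps modulo 2^32
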
Chris Lattner 217e056e40 implement rdar://7432000 - signed negate should codegen as NSW.
While I'm in there, adjust pointer to member adjustments as well.

llvm-svn: 106955
2010-06-26 20:27:24 +00:00
Chris Lattner 30c924b3e8 Implement support for #pragma message, patch by Michael Spencer!
llvm-svn: 106950
2010-06-26 17:11:39 +00:00
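Usage looks like this (illustrative):

#pragma message("TODO: drop this workaround once the new parser lands")
// the text is emitted as a diagnostic at compile time
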
Anders Carlsson 04775f8413 Change EmitReferenceBindingToExpr to take a decl instead of a boolean.
llvm-svn: 106949
2010-06-26 16:35:32 +00:00
Anders Carlsson 709ef8e46c Add function for mangling reference temporaries.
llvm-svn: 106948
2010-06-26 16:09:40 +00:00
Charles Davis f4db33cbdf Mangle pointer and (lvalue) reference types in the Microsoft C++ Mangler.
Also, fix mangling of throw specs. Turns out MSVC totally ignores throw
specs when mangling names.

llvm-svn: 106937
2010-06-26 03:50:05 +00:00
Bob Wilson f11a38dcce Add a missing dependency to try to fix a buildbot failure.
It complained with:

llvm[5]: Building Clang arm_neon.h.inc with tblgen
cp: cannot create regular file `/build/buildbot-llvm/clang-x86_64-linux-selfhost-rel/llvm.obj.2/Release/lib/clang/2.0/include/arm_neon.h': No such file or directory

llvm-svn: 106922
2010-06-26 00:03:23 +00:00
Ted Kremenek 58f61ec1de Relax assertion since non-POD C++ classes are not aggregates, but still can appear in this context.
llvm-svn: 106919
2010-06-25 23:51:38 +00:00
Jordy Rose c3bcc36a0b When a constant-size array is cast to another type, its length should be scaled as well.
llvm-svn: 106911
2010-06-25 23:23:04 +00:00
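For example (illustrative), reinterpreting a 4-element int array as char should yield an extent of 16 elements, not 4:

void f() {
  int buf[4];
  char *p = (char *)buf;   // region extent: 16 'char' elements (assuming
  (void)p;                 // 4-byte int), so p[15] is in bounds, p[16] is not
}
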
Ted Kremenek abb1f91325 Use TypeSourceInfo to help determine the SourceRange of a CXXNewExpr. This fixes several
cases where we generated an invalid SourceRange for this expression.  Thanks to John McCall
for helping me figure this out.

llvm-svn: 106903
2010-06-25 22:48:49 +00:00
Ted Kremenek fe97a1ac65 Add "checker caching" to GRExprEngine::CheckerVisit to progressively build
a winnowed list of checkers that actually do something for a given StmtClass.
As the number of checkers grows, this may potentially significantly reduce
the number of checkers called at any one time.  My own measurements show that
for the ~20 registered Checker objects, only ~5 of them respond at any one time
to a given statement.  While this isn't a net performance win right now (there
is a minor slowdown on sqlite.3) this improvement does greatly improve debugging
when stepping through the checkers used to evaluate a given statement.

llvm-svn: 106884
2010-06-25 20:59:31 +00:00
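A rough sketch of the caching idea (all names and types invented for illustration; the real code lives in GRExprEngine::CheckerVisit):

#include <map>
#include <vector>

struct Checker {
  // Returns true if this checker did something for the statement kind.
  virtual bool visit(unsigned StmtClass) = 0;
  virtual ~Checker() {}
};

class CheckerManager {
  std::vector<Checker *> All;
  std::map<unsigned, std::vector<Checker *> > Cache;
public:
  void checkerVisit(unsigned StmtClass) {
    std::map<unsigned, std::vector<Checker *> >::iterator I =
        Cache.find(StmtClass);
    if (I == Cache.end()) {
      // First visit for this statement kind: probe every checker and
      // remember the ones that respond.
      std::vector<Checker *> Responders;
      for (unsigned i = 0, e = All.size(); i != e; ++i)
        if (All[i]->visit(StmtClass))
          Responders.push_back(All[i]);
      Cache[StmtClass] = Responders;
      return;
    }
    // Later visits: run only the winnowed list (~5 of ~20 in practice).
    for (unsigned i = 0, e = I->second.size(); i != e; ++i)
      I->second[i]->visit(StmtClass);
  }
};
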
Ted Kremenek 76abf19ea6 Fix -analyze-display-progress (once again), this time with an additional regression test.
llvm-svn: 106883
2010-06-25 20:59:24 +00:00
Fariborz Jahanian b66b08ef01 Minor change to my last patch to fix PR7490.
llvm-svn: 106875
2010-06-25 20:01:13 +00:00
Eric Christopher 17c7b89054 Translate numbers properly.
llvm-svn: 106873
2010-06-25 19:04:52 +00:00
Fariborz Jahanian d5202e0926 IRGen for trivial initialization of dynamically allocated
array of other done C++ objects. Fixes PR7490.

llvm-svn: 106869
2010-06-25 18:26:07 +00:00
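The shape of code this likely enables (illustrative sketch, assuming PR7490 concerns value-initialized array new of trivial types):

struct Point { int x, y; };        // trivially-initializable
Point *ps = new Point[8]();        // elements zero-initialized by IRGen
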
Tom Care 375387d1f8 Change RegionStoreManager::Retrieve to infer the type of a symbolic region from the context when it is not already available.
llvm-svn: 106868
2010-06-25 18:22:31 +00:00
Daniel Dunbar 283fe3d07a build: Get CLANG_VERSION from Version.inc instead of depending on VER file directly.
llvm-svn: 106864
2010-06-25 17:33:49 +00:00
Argyrios Kyrtzidis b1d38e3f4a Support NonTypeTemplateParmDecl for PCH.
llvm-svn: 106860
2010-06-25 16:25:09 +00:00
Argyrios Kyrtzidis 03e5e0467c Make PCHWriter::FlushStmts() robust. If we added null Stmts, reading them back got messed up.
llvm-svn: 106859
2010-06-25 16:25:02 +00:00
Argyrios Kyrtzidis f0f7a792d7 Support DependentTemplateSpecializationType and ElaboratedType for PCH.
llvm-svn: 106858
2010-06-25 16:24:58 +00:00
Argyrios Kyrtzidis dc9ca0afa8 Add forgotten breaks in case statements.
llvm-svn: 106857
2010-06-25 16:24:51 +00:00
Argyrios Kyrtzidis 58e01ad26f Support UnresolvedLookupExpr for PCH.
llvm-svn: 106832
2010-06-25 09:03:34 +00:00
Argyrios Kyrtzidis b8d3c63820 Support UnresolvedMemberExpr for PCH.
llvm-svn: 106831
2010-06-25 09:03:26 +00:00