Commit Graph

85925 Commits

Author SHA1 Message Date
Jim Grosbach 7ea5fc0794 minor housekeeping cleanup: 80-column, trailing whitespace, spelling, etc. No functional change.
llvm-svn: 106988
2010-06-28 04:27:01 +00:00
Chandler Carruth b6f991787b Suppress diagnosing access violations while looking up deallocation functions,
much as we already do for allocation function lookup. Explicitly check access
for the function we actually select in one case where that check was previously
missing but the violation was caught behind the blanket diagnostics for all
overload candidates. This fixes PR7436.

llvm-svn: 106986
2010-06-28 00:30:51 +00:00
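
A minimal sketch of the kind of code this affects; the class and function names here are hypothetical, not taken from the patch:

class HasPrivateDelete {
  void operator delete(void *);   // inaccessible deallocation function
public:
  HasPrivateDelete();
};

void destroy(HasPrivateDelete *p) {
  // Candidates found during deallocation-function lookup are no longer
  // access-checked wholesale; only the operator delete actually selected
  // is checked, so the diagnostic points at the chosen function.
  delete p;
}
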
Devang Patel 81170d23de Do not forget the last element, the function itself, when creating a subprogram definition MDNode from a subprogram declaration MDNode.
llvm-svn: 106985
2010-06-27 21:04:31 +00:00
Rafael Espindola b1ef8ffb15 Use softfp for linux gnueabi; keep the warning for everything else.
llvm-svn: 106984
2010-06-27 18:29:21 +00:00
Anders Carlsson 3f48c603fb Correctly destroy reference temporaries with global storage. Remove ErrorUnsupported call when binding a global reference to a non-lvalue. Fixes PR7326.
llvm-svn: 106983
2010-06-27 17:52:15 +00:00
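
A minimal example of the case being fixed, assuming ordinary C++ lifetime-extension semantics (type name hypothetical):

struct S { S(); ~S(); };

// A reference temporary with global storage: the temporary's lifetime is
// extended to that of the reference, and its destructor must still run at
// program exit rather than being dropped.
const S &global_ref = S();
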
Anders Carlsson 18c205ecdf Add a CreateReferenceTemporary that will do the right thing for variables with global storage.
llvm-svn: 106982
2010-06-27 17:23:46 +00:00
Anders Carlsson 2969c8c69d Simplify CodeGenFunction::EmitReferenceBindingToExpr as a first step towards fixing PR7326.
llvm-svn: 106981
2010-06-27 16:56:04 +00:00
Anders Carlsson ca68d357d4 Reduce indentation.
llvm-svn: 106980
2010-06-27 15:24:55 +00:00
Chris Lattner 25a843fcd2 minor cleanup to SROA: when lowering type-unsafe accesses to
large integers, the first inserted value would always create
an 'or X, 0'.  Even though this is trivially zapped by
instcombine, don't bother creating this pointless instruction.

llvm-svn: 106979
2010-06-27 07:58:26 +00:00
Chris Lattner 818efb64a3 misc tidying
llvm-svn: 106978
2010-06-27 07:40:06 +00:00
Chris Lattner 5e016ae983 finally get around to doing a significant cleanup to irgen:
have CGF create and make accessible standard int32, int64, and
intptr types.  This fixes a ton of 80-column violations
introduced by LLVMContextification and cleans up stuff a lot.

llvm-svn: 106977
2010-06-27 07:15:29 +00:00
Chris Lattner e000907e13 tidy up OrderGlobalInits
llvm-svn: 106976
2010-06-27 06:32:58 +00:00
Chris Lattner 055097f024 If coercing something from int or pointer type to int or pointer type
(potentially after unwrapping it from a struct), do it without going through
memory.  We now compile:

struct DeclGroup {
  unsigned NumDecls;
};

int foo(DeclGroup D) {
  return D.NumDecls;
}

into:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %coerce.val.ii = trunc i64 %0 to i32            ; <i32> [#uses=1]
  store i32 %coerce.val.ii, i32* %coerce.dive
  %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp1 = load i32* %tmp                          ; <i32> [#uses=1]
  ret i32 %tmp1
}

instead of:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %tmp = alloca i64                               ; <i64*> [#uses=2]
  %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  store i64 %0, i64* %tmp
  %1 = bitcast i64* %tmp to i32*                  ; <i32*> [#uses=1]
  %2 = load i32* %1, align 1                      ; <i32> [#uses=1]
  store i32 %2, i32* %coerce.dive
  %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp2 = load i32* %tmp1                         ; <i32> [#uses=1]
  ret i32 %tmp2
}

... which is quite a bit less terrifying.

llvm-svn: 106975
2010-06-27 06:26:04 +00:00
Chris Lattner 895c52ba8b Same patch as the previous one, but on the store side. Before we compiled this:
struct DeclGroup {
  unsigned NumDecls;
};

int foo(DeclGroup D) {
  return D.NumDecls;
}

to:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %tmp = alloca i64                               ; <i64*> [#uses=2]
  store i64 %0, i64* %tmp
  %1 = bitcast i64* %tmp to %struct.DeclGroup*    ; <%struct.DeclGroup*> [#uses=1]
  %2 = load %struct.DeclGroup* %1, align 1        ; <%struct.DeclGroup> [#uses=1]
  store %struct.DeclGroup %2, %struct.DeclGroup* %D
  %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp2 = load i32* %tmp1                         ; <i32> [#uses=1]
  ret i32 %tmp2
}

which caused fast isel bailouts due to the first-class aggregate (FCA) load/store of %2.  Now
we generate this blissfully simple code:

%struct.DeclGroup = type { i32 }

define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
entry:
  %D = alloca %struct.DeclGroup, align 4          ; <%struct.DeclGroup*> [#uses=2]
  %tmp = alloca i64                               ; <i64*> [#uses=2]
  %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  store i64 %0, i64* %tmp
  %1 = bitcast i64* %tmp to i32*                  ; <i32*> [#uses=1]
  %2 = load i32* %1, align 1                      ; <i32> [#uses=1]
  store i32 %2, i32* %coerce.dive
  %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
  %tmp2 = load i32* %tmp1                         ; <i32> [#uses=1]
  ret i32 %tmp2
}

This avoids fast isel bailing out and is groundwork for a future patch.
This reduces bailouts on CGStmt.ll from 935 to 911.

llvm-svn: 106974
2010-06-27 06:04:18 +00:00
Chris Lattner 1cd6698a7c improve CreateCoercedLoad a bit to generate slightly less awful
IR when handling X86-64 by-value struct stuff.  For example, we
used to compile this:

struct DeclGroup {
  unsigned NumDecls;
};

int foo(DeclGroup D);
void bar(DeclGroup *D) {
  foo(*D);
}

into:

define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) ssp nounwind {
entry:
  %D.addr = alloca %struct.DeclGroup*, align 8    ; <%struct.DeclGroup**> [#uses=2]
  %agg.tmp = alloca %struct.DeclGroup, align 4    ; <%struct.DeclGroup*> [#uses=2]
  %tmp3 = alloca i64                              ; <i64*> [#uses=2]
  store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
  %tmp = load %struct.DeclGroup** %D.addr         ; <%struct.DeclGroup*> [#uses=1]
  %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
  %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*  ; <i8*> [#uses=1]
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
  %0 = bitcast i64* %tmp3 to %struct.DeclGroup*   ; <%struct.DeclGroup*> [#uses=1]
  %1 = load %struct.DeclGroup* %agg.tmp           ; <%struct.DeclGroup> [#uses=1]
  store %struct.DeclGroup %1, %struct.DeclGroup* %0, align 1
  %2 = load i64* %tmp3                            ; <i64> [#uses=1]
  call void @_Z3foo9DeclGroup(i64 %2)
  ret void
}

which would cause fast isel to bail out due to the first-class aggregate load %1.  With
this patch we now compile it into the (still awful):

define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) nounwind ssp noredzone {
entry:
  %D.addr = alloca %struct.DeclGroup*, align 8    ; <%struct.DeclGroup**> [#uses=2]
  %agg.tmp = alloca %struct.DeclGroup, align 4    ; <%struct.DeclGroup*> [#uses=2]
  %tmp3 = alloca i64                              ; <i64*> [#uses=2]
  store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
  %tmp = load %struct.DeclGroup** %D.addr         ; <%struct.DeclGroup*> [#uses=1]
  %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
  %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*  ; <i8*> [#uses=1]
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
  %coerce.dive = getelementptr %struct.DeclGroup* %agg.tmp, i32 0, i32 0 ; <i32*> [#uses=1]
  %0 = bitcast i64* %tmp3 to i32*                 ; <i32*> [#uses=1]
  %1 = load i32* %coerce.dive                     ; <i32> [#uses=1]
  store i32 %1, i32* %0, align 1
  %2 = load i64* %tmp3                            ; <i64> [#uses=1]
  %call = call i32 @_Z3foo9DeclGroup(i64 %2) noredzone ; <i32> [#uses=0]
  ret void
}

which doesn't bail out.  On CGStmt.ll, this reduces fast isel bail-outs from 958 to 935,
and is the precursor of better things to come.

llvm-svn: 106973
2010-06-27 05:56:15 +00:00
Jordy Rose 7f8ea4d677 Implicitly compare symbolic expressions to zero when they're being used as constraints. Part of PR7491.
llvm-svn: 106972
2010-06-27 01:20:56 +00:00
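
A sketch of the pattern this covers (variable names hypothetical): a symbolic expression used directly as a branch condition now constrains the analyzer's state as an implicit comparison with zero.

void branch(int a, int b) {
  if (a + b) {
    // true branch: (a + b) != 0 is assumed here
  } else {
    // false branch: (a + b) == 0 is assumed here
  }
}
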
Chris Lattner e01d966ce2 merge two tests.
llvm-svn: 106971
2010-06-27 01:08:03 +00:00
Chris Lattner 3fcc790cd8 Change IR generation for return (in the simple case) to avoid doing silly
load/store nonsense in the epilog.  For example, for:

int foo(int X) {
  int A[100];
  return A[X];
}

we used to generate:

  %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
  %tmp1 = load i32* %arrayidx                     ; <i32> [#uses=1]
  store i32 %tmp1, i32* %retval
  %0 = load i32* %retval                          ; <i32> [#uses=1]
  ret i32 %0
}

which codegen'd to this code:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	subq	$408, %rsp              ## imm = 0x198
	movl	%edi, 400(%rsp)
	movl	400(%rsp), %edi
	movslq	%edi, %rax
	movl	(%rsp,%rax,4), %edi
	movl	%edi, 404(%rsp)
	movl	404(%rsp), %eax
	addq	$408, %rsp              ## imm = 0x198
	ret

Now we generate:

  %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
  %tmp1 = load i32* %arrayidx                     ; <i32> [#uses=1]
  ret i32 %tmp1
}

and:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	subq	$408, %rsp              ## imm = 0x198
	movl	%edi, 404(%rsp)
	movl	404(%rsp), %edi
	movslq	%edi, %rax
	movl	(%rsp,%rax,4), %eax
	addq	$408, %rsp              ## imm = 0x198
	ret

This actually does matter, cutting out 2000 lines of IR from CGStmt.ll 
for example.

Another interesting effect is that altivec.h functions that are now dead
get DCE'd by the inliner.  Hence all the changes to
builtins-ppc-altivec.c to ensure the calls aren't dead.

llvm-svn: 106970
2010-06-27 01:06:27 +00:00
Chris Lattner 0875802d0f add some named accessors for StoreInst
llvm-svn: 106969
2010-06-26 23:26:37 +00:00
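
Assuming these are the now-standard spellings, the named accessors read as below (header path is the one of this era; later it moved to llvm/IR/Instructions.h):

#include "llvm/Instructions.h"
using namespace llvm;

void inspect(StoreInst *SI) {
  Value *Stored = SI->getValueOperand();    // operand 0: the value stored
  Value *Addr   = SI->getPointerOperand();  // operand 1: the target address
  (void)Stored; (void)Addr;                 // clearer than getOperand(0)/(1)
}
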
Chris Lattner 8e17fa098e fit in 80 cols
llvm-svn: 106968
2010-06-26 23:26:22 +00:00
Chris Lattner 726b3d09cd reduce indentation
llvm-svn: 106967
2010-06-26 23:13:19 +00:00
Chris Lattner 6c5abe88bf Implement rdar://7530813 - collapse multiple GEP instructions in IRgen
This avoids generating two GEPs for common array operations.  Before
we would generate something like:

  %tmp = load i32* %X.addr                        ; <i32> [#uses=1]
  %arraydecay = getelementptr inbounds [100 x i32]* %A, i32 0, i32 0 ; <i32*> [#uses=1]
  %arrayidx = getelementptr inbounds i32* %arraydecay, i32 %tmp ; <i32*> [#uses=1]
  %tmp1 = load i32* %arrayidx                     ; <i32> [#uses=1]

Now we generate:

  %tmp = load i32* %X.addr                        ; <i32> [#uses=1]
  %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i32 %tmp ; <i32*> [#uses=1]
  %tmp1 = load i32* %arrayidx                     ; <i32> [#uses=1]

Less IR is better at -O0.

llvm-svn: 106966
2010-06-26 23:03:20 +00:00
Ted Kremenek f00eac5cff Allow '__extension__' to be analyzed in an lvalue context.
llvm-svn: 106964
2010-06-26 22:40:52 +00:00
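
A small GNU C example of the construct in an lvalue position (names hypothetical):

void touch(void) {
  int x = 0;
  (__extension__ x) = 1;   // '__extension__ expr' used as an lvalue
}
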
Chris Lattner 57ce97151f minor cleanup: don't emit the base of an array subscript until after
we're done diddling around with the index stuff.  Use a cheaper type
comparison.

llvm-svn: 106963
2010-06-26 22:40:46 +00:00
Chris Lattner 431bef4409 fix inc/dec to honor -fwrapv and -ftrapv, implementing PR7426.
llvm-svn: 106962
2010-06-26 22:18:28 +00:00
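
The gist, as a sketch: '++' and '--' on signed integers must follow the same overflow rules as ordinary addition. Expected behavior is summarized in comments; the exact IR is not reproduced from the patch.

int bump(int x) {
  return ++x;
  // default:  signed overflow is undefined, so 'add nsw' is fine
  // -fwrapv:  wrapping is defined, so a plain 'add' must be emitted
  // -ftrapv:  overflow must trap, via an overflow-checked add
}
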
Chris Lattner 05dc78c096 move scalar inc/dec codegen into ScalarExprEmitter instead
of being in CGF.  No functionality change.

llvm-svn: 106961
2010-06-26 22:09:34 +00:00
Chris Lattner 93e63a0218 this test is failing nondeterministically and blaming me; just disable
it for now.

llvm-svn: 106960
2010-06-26 22:08:30 +00:00
Benjamin Kramer c1ecfd86a3 Fix test weirdness.
llvm-svn: 106959
2010-06-26 22:06:50 +00:00
Chris Lattner fa20e95043 use more efficient type comparison predicates.
llvm-svn: 106958
2010-06-26 21:52:32 +00:00
Chris Lattner 0bf27620f0 Fix unary minus to trap on overflow with -ftrapv, refactoring binop
code so we can use it from VisitUnaryMinus.

llvm-svn: 106957
2010-06-26 21:48:21 +00:00
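
The overflowing case, sketched: negating the most negative int is not representable, so under -ftrapv it must trap. Per the message above, unary minus reuses the binop path, i.e. it is handled like 0 - x.

#include <limits.h>

int neg(int x) {
  return -x;   // under -ftrapv, x == INT_MIN must trap instead of wrapping,
               // using the same overflow-checked code as binary subtraction
}
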
Chris Lattner 51924e517b Implement support for -fwrapv, rdar://7221421
As part of this, pull together trapv handling into the same enum.

This also adds support for NSW multiplies.

This also makes PCH disagreement on overflow behavior silent, since it
really doesn't matter except for warnings and codegen (no macros get
defined, etc.).

llvm-svn: 106956
2010-06-26 21:25:03 +00:00
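
A sketch of what -fwrapv means for multiplication (expected IR noted in comments, not taken verbatim from the patch):

int square(int x) {
  return x * x;
  // default: 'mul nsw' -- signed overflow is undefined behavior
  // -fwrapv: plain 'mul' -- signed overflow wraps, so no nsw flag
}
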
Chris Lattner 217e056e40 implement rdar://7432000 - signed negate should codegen as NSW.
While I'm in there, adjust pointer to member adjustments as well.

llvm-svn: 106955
2010-06-26 20:27:24 +00:00
Benjamin Kramer 3bbc52ce3e Fix some tests that didn't test anything.
llvm-svn: 106954
2010-06-26 20:05:06 +00:00
Kenneth Uildriks 7228d98b85 Partial specialization test should not depend on the order of specialization operations or the names assigned to the specialized functions
llvm-svn: 106953
2010-06-26 18:47:40 +00:00
Rafael Espindola 2041abd958 When splitting a VAARG, remember its alignment.
This produces terrible but correct code.

llvm-svn: 106952
2010-06-26 18:22:20 +00:00
Bob Wilson 418e64a385 Revert my if-conversion cleanup since it caused a bunch of nightly test
regressions.

--- Reverse-merging r106939 into '.':
U    test/CodeGen/Thumb2/thumb2-ifcvt3.ll
U    lib/CodeGen/IfConversion.cpp

llvm-svn: 106951
2010-06-26 17:47:06 +00:00
Chris Lattner 30c924b3e8 Implement support for #pragma message, patch by Michael Spencer!
llvm-svn: 106950
2010-06-26 17:11:39 +00:00
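
Usage follows the familiar MSVC-style form; the string is emitted as a diagnostic during compilation (message text here is made up):

#pragma message("TODO: remove this fallback once the new path is proven")
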
Anders Carlsson 04775f8413 Change EmitReferenceBindingToExpr to take a decl instead of a boolean.
llvm-svn: 106949
2010-06-26 16:35:32 +00:00
Anders Carlsson 709ef8e46c Add function for mangling reference temporaries.
llvm-svn: 106948
2010-06-26 16:09:40 +00:00
Duncan Sands 3a5cb69cb8 Fix PR7328: when turning a tail recursion into a loop, we need to preserve
the returned value after the tail call if it differs from other return
values.  The optimal thing to do would be to introduce a phi node for
the return value, but for the moment just fix the miscompile.

llvm-svn: 106947
2010-06-26 12:53:31 +00:00
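
A hedged sketch of the shape of code involved (function is hypothetical): the value returned after the recursive tail call must not be merged with the other, different return values when the recursion becomes a loop.

int countdown(int n) {
  if (n == 0)
    return 7;                 // one return value...
  return countdown(n - 1);    // ...and a tail call whose result is returned;
                              // after conversion to a loop, this path must
                              // still produce the callee's value, not 7
}
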
Gabor Greif 7d4038dd88 use ArgOperand API
llvm-svn: 106946
2010-06-26 12:17:21 +00:00
Gabor Greif c2ac8c4261 use ArgOperand API
llvm-svn: 106945
2010-06-26 12:09:10 +00:00
Gabor Greif 83205af3fa use ArgOperand API
llvm-svn: 106944
2010-06-26 11:51:52 +00:00
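
Context for these ArgOperand commits: a call's operand list also contains the callee, so argument access should go through the dedicated accessors rather than raw operand indexing. A sketch (header path is the one of this era):

#include "llvm/Instructions.h"
using namespace llvm;

void visitArgs(CallInst *CI) {
  for (unsigned i = 0, e = CI->getNumArgOperands(); i != e; ++i) {
    Value *Arg = CI->getArgOperand(i);  // argument i, regardless of where
    (void)Arg;                          // the callee sits among the operands
  }
}
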
Benjamin Kramer a000002428 VNInfos don't need to be destructed anymore.
llvm-svn: 106943
2010-06-26 11:30:59 +00:00
Gabor Greif e9afee2910 resort to ArgOperand API
llvm-svn: 106942
2010-06-26 09:35:09 +00:00
Eli Friedman b9bdc5a52d Remove bogus test.
llvm-svn: 106941
2010-06-26 04:59:56 +00:00
Eli Friedman 8cfa7713e9 Followup to r106770: actually generate SXTB and SXTH for sign-extensions.
llvm-svn: 106940
2010-06-26 04:36:50 +00:00
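
The two patterns, sketched in C; the expectation (not verified output) is a single sign-extension instruction on ARM instead of a shift pair:

int ext8(int x)  { return (signed char)x; }  // should become: sxtb
int ext16(int x) { return (short)x; }        // should become: sxth
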
Bob Wilson c72da6bb56 Clean up some problems with extra CFG edges being introduced during
if-conversion.  The RemoveExtraEdges function doesn't work for blocks that
end with unanalyzable branches, so in those cases, the "extra" edges must
be explicitly removed.  The CopyAndPredicateBlock and MergeBlocks methods
can also avoid copying successor edges due to branches that have already
been removed.  The latter case is especially helpful when MergeBlocks is
called for handling "diamond" if-conversions, where otherwise you can end
up with some weird intermediate states in the CFG.  Unfortunately I've
been unable to find cases where this cleanup actually makes a significant
difference in the code.  There is one test where we manage to remove an
empty block at the end of a function.  Radar 6911268.

llvm-svn: 106939
2010-06-26 04:27:33 +00:00
Bob Wilson 0248da9db4 Add support for encoding NEON VMOV (from scalar to core register) instructions.
llvm-svn: 106938
2010-06-26 04:07:15 +00:00
Charles Davis f4db33cbdf Mangle pointer and (lvalue) reference types in the Microsoft C++ Mangler.
Also, fix mangling of throw specs. Turns out MSVC totally ignores throw
specs when mangling names.

llvm-svn: 106937
2010-06-26 03:50:05 +00:00
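
Illustrating both points with hypothetical declarations: pointer and lvalue reference parameter types like those of f are now mangled, and the throw spec on g contributes nothing to its mangled name.

void f(int *p, int &r);   // pointer and lvalue reference types now mangled
void g() throw(int);      // throw spec ignored: mangles like plain 'void g()'
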