Commit Graph

360 Commits

Author SHA1 Message Date
Eli Friedman c9a551ebed LangRef and basic memory-representation/reading/writing for 'cmpxchg' and
'atomicrmw' instructions, which allow representing all the current atomic
rmw intrinsics.

The allowed operands for these instructions are heavily restricted at the
moment; we can probably loosen it a bit, but supporting general
first-class types (where it makes sense) might get a bit complicated,
given how SelectionDAG works.

As an initial cut, these operations do not support specifying an alignment,
but it would be possible to add if we think it's useful. Specifying an
alignment lower than the natural alignment would be essentially
impossible to support on anything other than x86, but specifying a greater
alignment would be possible.  I can't think of any useful optimizations which
would use that information, but maybe someone else has ideas.

Optimizer/codegen support coming soon.

llvm-svn: 136404
2011-07-28 21:48:00 +00:00
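For illustration, a hedged IR sketch of the two instructions the entry above introduces, in the typed-pointer syntax of that era (function and value names are invented; later releases extended 'cmpxchg' with a separate failure ordering and a {value, success} result, so the current LangRef form differs):

```llvm
define i32 @atomic_demo(i32* %p, i32 %cmp, i32 %new) {
entry:
  ; atomically add 1 to *%p and yield the value that was stored there before
  %old = atomicrmw add i32* %p, i32 1 seq_cst
  ; store %new to *%p only if it currently holds %cmp; at the time of this
  ; commit the result was simply the prior contents of *%p
  %prev = cmpxchg i32* %p, i32 %cmp, i32 %new seq_cst
  ret i32 %prev
}
```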
Bill Wendling 4f027233d2 The personality function should be a Function* and not just a Value*.
llvm-svn: 136392
2011-07-28 21:14:13 +00:00
Bill Wendling d8c1c1f23c Don't add in the asked-for size, so that we don't copy too much from the old to the new vectors.
llvm-svn: 136338
2011-07-28 07:26:41 +00:00
Bill Wendling 4c93488999 Make sure that the landingpad instruction takes a Constant* as the clause's value.
llvm-svn: 136326
2011-07-28 02:27:12 +00:00
Bill Wendling a8f04e3034 Add a couple of convenience functions:
* InvokeInst: Get the landingpad instruction associated with this invoke.
* LandingPadInst: A method to reserve extra space for clauses.

llvm-svn: 136325
2011-07-28 02:15:52 +00:00
Bill Wendling 6c923bb8d9 Merge the contents from exception-handling-rewrite to the mainline.
This adds the new instructions 'landingpad' and 'resume'.

llvm-svn: 136253
2011-07-27 20:18:04 +00:00
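A minimal sketch of how the two new instructions from the merge above fit together with 'invoke', using the IR conventions of the time (the personality was still written on the landingpad itself; the function names are invented):

```llvm
declare void @may_throw()
declare i32 @__gxx_personality_v0(...)

define void @eh_demo() {
entry:
  ; exceptional control flow from an invoke lands on a 'landingpad' block
  invoke void @may_throw()
          to label %cont unwind label %lpad
cont:
  ret void
lpad:
  %lp = landingpad { i8*, i32 }
          personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
          cleanup
  ; 'resume' re-raises the in-flight exception once cleanup code has run
  resume { i8*, i32 } %lp
}
```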
Eli Friedman fee02c6c13 Initial implementation of 'fence' instruction, the new C++0x-style replacement for llvm.memory.barrier.
This is just a LangRef entry and reading/writing/memory representation; optimizer+codegen support coming soon.

llvm-svn: 136009
2011-07-25 23:16:38 +00:00
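A small sketch of the new instruction from the entry above, standing in for what used to be an llvm.memory.barrier call (names and the release-style pairing are illustrative):

```llvm
define void @publish(i32* %data, i32* %flag) {
entry:
  store i32 42, i32* %data
  ; order the data store before the flag store (release semantics)
  fence release
  store i32 1, i32* %flag
  ret void
}
```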
Jay Foad d1b7849d49 Convert GetElementPtrInst to use ArrayRef.
llvm-svn: 135904
2011-07-25 09:48:08 +00:00
Chris Lattner 229907cd11 land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Jay Foad 5bd375a6cc Convert CallInst and InvokeInst APIs to use ArrayRef.
llvm-svn: 135265
2011-07-15 08:37:34 +00:00
Jay Foad 57aa636794 Convert InsertValueInst and ExtractValueInst APIs to use ArrayRef.
llvm-svn: 135040
2011-07-13 10:26:04 +00:00
Chris Lattner b1ed91f397 Land the long talked about "type system rewrite" patch. This
patch brings numerous advantages to LLVM.  One way to look at it
is through diffstat:
 109 files changed, 3005 insertions(+), 5906 deletions(-)

Removing almost 3K lines of code is a good thing.  Other advantages
include:

1. Value::getType() is a simple load that can be CSE'd, not a mutating
   union-find operation.
2. Types are uniqued and never move once created, defining away PATypeHolder.
3. Structs can be "named" now, and their name is part of the identity that
   uniques them.  This means that the compiler doesn't merge them structurally,
   which makes the IR much less confusing.
4. Now that there is no way to get a cycle in a type graph without a named
   struct type, "upreferences" go away.
5. Type refinement is completely gone, which should make LTO much MUCH faster
   in some common cases with C++ code.
6. Types are now generally immutable, so we can use "Type *" instead of
   "const Type *" everywhere.

Downsides of this patch are that it removes some functions from the C API,
so people using those will have to upgrade to the (not yet added) new API.
"LLVM 3.0" is the right time to do this.

There are still some cleanups pending after this; this patch is large enough
as-is.

llvm-svn: 134829
2011-07-09 17:41:24 +00:00
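A brief sketch of what points 3 and 4 above look like in the IR: a named struct type whose recursion goes through its name, so no up-references are needed (type and function names invented; typed-pointer/old-style GEP syntax of the time):

```llvm
%struct.Node = type { i32, %struct.Node* }   ; self-reference via the name

define i32 @head_value(%struct.Node* %n) {
entry:
  %p = getelementptr %struct.Node* %n, i32 0, i32 0
  %v = load i32* %p
  ret i32 %v
}
```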
Jay Foad 61ea0e4692 Reinstate r133513 (reverted in r133700) with an additional fix for a
-Wshorten-64-to-32 warning in Instructions.h.

llvm-svn: 133708
2011-06-23 09:09:15 +00:00
Eric Christopher 96513120b7 Revert r133513:
"Reinstate r133435 and r133449 (reverted in r133499) now that the clang
self-hosted build failure has been fixed (r133512)."

Due to some additional warnings.

llvm-svn: 133700
2011-06-23 06:24:52 +00:00
Jay Foad a97a2c998e Reinstate r133435 and r133449 (reverted in r133499) now that the clang
self-hosted build failure has been fixed (r133512).

llvm-svn: 133513
2011-06-21 10:33:19 +00:00
Chad Rosier 184f3b37e2 Revert r133435 and r133449 to appease buildbots.
llvm-svn: 133499
2011-06-21 02:09:03 +00:00
Jay Foad 61dbf7730a Fix a check for PHINodes with two incoming values.
llvm-svn: 133449
2011-06-20 17:46:19 +00:00
Jay Foad e03c05c35a Change how PHINodes store their operands.
Change PHINodes to store simple pointers to their incoming basic blocks,
instead of full-blown Uses.

Note that this loses an optimization in SplitCriticalEdge(), because we
can no longer walk the use list of a BasicBlock to find phi nodes. See
the comment I removed starting "However, the foreach loop is slow for
blocks with lots of predecessors".

Extend replaceAllUsesWith() on a BasicBlock to also update any phi
nodes in the block's successors. This mimics what would have happened
when PHINodes were proper Users of their incoming blocks. (Note that
this only works if OldBB->replaceAllUsesWith(NewBB) is called when
OldBB still has a terminator instruction, so it still has some
successors.)

llvm-svn: 133435
2011-06-20 14:38:01 +00:00
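For context on the change above: a phi node in IR pairs each incoming value with its incoming basic block, and after this commit those block references are plain pointers rather than Uses, which is why BasicBlock::replaceAllUsesWith now has to update successor phis explicitly (example names invented):

```llvm
define i32 @select_val(i1 %c, i32 %a, i32 %b) {
entry:
  br i1 %c, label %then, label %else
then:
  br label %merge
else:
  br label %merge
merge:
  ; each incoming value is paired with the block it arrives from
  %r = phi i32 [ %a, %then ], [ %b, %else ]
  ret i32 %r
}
```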
Duncan Sands 27bd0df352 Now that SrcBits and DestBits always represent the primitive size, rather
than either the primitive size or the element primitive size (in the case
of vectors), simplify the vector logic.  No functionality change.  There
is some distracting churn in the patch because I lined up comments better
while there - sorry about that.

llvm-svn: 131533
2011-05-18 10:59:25 +00:00
Duncan Sands 7f64656d21 Tighten up checking of the validity of casts. (1) The IR parser would
happily accept things like "sext <2 x i32> to <999 x i64>".  It would
also accept "sext <2 x i32> to i64", though the verifier would catch
that later.  Fixed by having castIsValid check that vector lengths match
except when doing a bitcast.  (2) When creating a cast instruction, check
that the cast is valid (this was already done when creating constexpr
casts).  While there, replace getScalarSizeInBits (used to allow more
vector casts) with getPrimitiveSizeInBits in getCastOpcode and isCastable
since vector to vector casts are now handled explicitly by passing the
element types; i.e. this bit should result in no functional change.

llvm-svn: 131532
2011-05-18 09:21:57 +00:00
Duncan Sands a8514535a4 Teach getCastOpcode about element-by-element vector casts. For example, "trunc"
can be used to turn a <4 x i64> into a <4 x i32> but getCastOpcode would assert
if you passed these types to it.  Note that this strictly extends the previous
functionality: if getCastOpcode previously accepted two vector types (i.e. didn't
assert) then it still will and returns the same opcode (BitCast).  That's because
before it would only accept vectors with the same bitwidth, and the new code only
touches vectors with the same length.  However if two vectors have both the same
bitwidth and the same length then their element types have the same bitwidth, so
the new logic will return BitCast as before.

llvm-svn: 131530
2011-05-18 07:13:41 +00:00
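A short sketch covering the two cast entries above: element-by-element vector casts keep the element count and change only the element width, and length mismatches such as "sext <2 x i32> to <999 x i64>" are now rejected when the cast is built (function name invented):

```llvm
define <4 x i32> @shrink(<4 x i64> %v) {
entry:
  ; same number of elements, narrower element type
  %t = trunc <4 x i64> %v to <4 x i32>
  ret <4 x i32> %t
}
```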
Jay Foad 29426e87c1 Phi nodes always use an even number of operands, so don't ever allocate
an odd number.

llvm-svn: 129270
2011-04-11 09:25:51 +00:00
Jay Foad e98f29df8b Various Instructions' resizeOperands() methods are only used to grow the
list of operands. Simplify and rename them accordingly.

llvm-svn: 128708
2011-04-01 08:00:58 +00:00
Duncan Sands 2d3cdd686b While testing dragonegg I noticed that isCastable and getCastOpcode
had gotten out of sync: isCastable didn't think it was possible to
cast the x86_mmx type to anything, while it did think it possible
to cast an i64 to x86_mmx.

llvm-svn: 128705
2011-04-01 03:34:54 +00:00
Chris Lattner 35315d065b enhance vmcore to know that udivs can be exact, and add a trivial
instcombine xform to exercise this.

Nothing forms exact udivs yet though.  This is progress on PR8862.

llvm-svn: 124992
2011-02-06 21:44:57 +00:00
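As a sketch of the new flag from the entry above (names invented):

```llvm
define i32 @div_by_4(i32 %a) {
entry:
  ; 'exact' asserts the division has no remainder; the result is undefined
  ; if %a is not a multiple of 4
  %q = udiv exact i32 %a, 4
  ret i32 %q
}
```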
Jay Foad 142777224c Make SwitchInst::removeCase() more efficient.
llvm-svn: 124659
2011-02-01 09:22:34 +00:00
Jay Foad bbb91f2b22 Simplify the construction and destruction of Uses. Simplify
User::dropHungOffUses().

llvm-svn: 123580
2011-01-16 15:30:52 +00:00
Jay Foad 1d4a8fe156 Remove casts between Value** and Constant**, which won't work if a
static_cast from Constant* to Value* has to adjust the "this" pointer.
This is groundwork for PR889.

llvm-svn: 123435
2011-01-14 08:07:43 +00:00
Jay Foad d81f3c9659 Simplify the allocation and freeing of Users' operand lists, now that
every BranchInst has a fixed number of operands.

llvm-svn: 123027
2011-01-07 20:29:02 +00:00
Duncan Sands 95c4eccbe9 These methods should be "const"; make them so.
llvm-svn: 122809
2011-01-04 12:52:29 +00:00
Jeffrey Yasskin 9b43f33620 Change all self assignments X=X to (void)X, so that we can turn on a
new gcc warning that complains on self-assignments and
self-initializations.

llvm-svn: 122458
2010-12-23 00:58:24 +00:00
Frits van Bommel 16ebe77be0 Fix PR4170 by having ExtractValueInst::getIndexedType() reject out-of-bounds indexing.
Also add asserts that the indices are valid in InsertValueInst::init(). ExtractValueInst already asserts when constructed with invalid indices.

llvm-svn: 120956
2010-12-05 20:50:26 +00:00
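A hedged sketch of the index checking described above (aggregate layout and names invented):

```llvm
define i8 @get_elem({ i32, [4 x i8] } %agg) {
entry:
  ; indices 1, 2 select the third byte of the embedded array; out-of-bounds
  ; index lists are now rejected when the instruction is constructed rather
  ; than slipping through getIndexedType()
  %c = extractvalue { i32, [4 x i8] } %agg, 1, 2
  ret i8 %c
}
```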
Chris Lattner baf00151ba fix PR8613 - Copy constructor of SwitchInst does not call SwitchInst::init
llvm-svn: 119463
2010-11-17 05:41:46 +00:00
Duncan Sands 7412f6e53d Fix a layering violation: hasConstantValue, which is part of the PHINode
class, uses DominatorTree which is an analysis.  This change moves all of
the tricky hasConstantValue logic to SimplifyInstruction, and replaces it
with a very simple literal implementation.  I already taught users of
hasConstantValue that need tricky stuff to use SimplifyInstruction instead.
I didn't update InlineFunction because the IR looks like it might be in a
funky state at the point it calls hasConstantValue, which makes calling
SimplifyInstruction dangerous since it can in theory do a lot of tricky
reasoning.  This may be a pessimization, for example in the case where
all phi node operands are either undef or a fixed constant.

llvm-svn: 119459
2010-11-17 04:30:22 +00:00
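A small sketch of the pessimized case mentioned above, where every incoming value is either undef or the same constant (names invented); the simple in-class check no longer folds this, and callers that want it are expected to go through SimplifyInstruction:

```llvm
define i32 @maybe_constant(i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  br label %merge
b:
  br label %merge
merge:
  ; incoming values are the constant 7 and undef, so the phi could be
  ; replaced by 7
  %x = phi i32 [ 7, %a ], [ undef, %b ]
  ret i32 %x
}
```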
Duncan Sands b99f39b9f6 If dom tree information is available, make it possible to pass
it to get better phi node simplification.

llvm-svn: 119055
2010-11-14 18:36:10 +00:00
Bill Wendling 3c70417883 Cleanup. Get rid of extraneous variable.
llvm-svn: 115453
2010-10-03 00:46:06 +00:00
Dale Johannesen dec83c3fd9 Attempt to outwit overly smart compiler.
llvm-svn: 115251
2010-10-01 00:21:24 +00:00
Dale Johannesen dd224d2333 Massive rewrite of MMX:
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.

Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics. 

MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces.  Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into a
pre-existing x86_mmx operation.

The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.

llvm-svn: 115243
2010-09-30 23:57:10 +00:00
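For illustration of the rewrite above, the only IR-level ways to touch the x86_mmx type were load, store, bitcast and the MMX intrinsics (function name invented; much later releases removed the x86_mmx type from the IR entirely):

```llvm
define x86_mmx @to_mmx(<2 x i32> %v) {
entry:
  ; generic vector optimizations run on <2 x i32>; the bitcast hands the
  ; value to MMX-only operations without letting the optimizer invent new
  ; MMX instructions on its own
  %m = bitcast <2 x i32> %v to x86_mmx
  ret x86_mmx %m
}
```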
Dan Gohman 9a1b8598ef Make this code 65-bit clean.
llvm-svn: 114828
2010-09-27 15:15:44 +00:00
Nate Begeman 60a31c30c3 Move some code from Verifier into SVI::isValidOperands. This allows us to catch bad shufflevector operations when they are created, rather than waiting for someone to notice later on.
llvm-svn: 110986
2010-08-13 00:16:46 +00:00
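A brief sketch of a well-formed shufflevector of the kind isValidOperands() now checks at construction time (names and mask invented):

```llvm
define <4 x i32> @interleave_lo(<4 x i32> %a, <4 x i32> %b) {
entry:
  ; the mask is a constant vector of i32 indices into the concatenation of
  ; the two input vectors
  %s = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 0, i32 4, i32 1, i32 5>
  ret <4 x i32> %s
}
```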
Gabor Greif 638c823211 remove the private hack from CallInst; it was not supposed to hit the branch anyway.
As a positive consequence, the CallSite::getCallee() methods can now be rewritten to be
a bit more efficient.

llvm-svn: 110380
2010-08-05 21:25:49 +00:00
Dan Gohman a7e5a24093 Define a maximum supported alignment value for load, store, and
alloca instructions (constrained by their internal encoding),
and add error checking for it. Fix an instcombine bug which
generated huge alignment values (null is infinitely aligned).
This fixes undefined behavior noticed by John Regehr.

llvm-svn: 109643
2010-07-28 20:12:04 +00:00
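A hedged sketch of the alignment operand the entry above bounds; the value is kept log2-encoded in a small internal bitfield, which is what limits the maximum (names invented, typed-pointer syntax of the era):

```llvm
define i32 @aligned_load(i32* %p) {
entry:
  ; the 'align' attribute is stored log2-encoded inside the instruction
  %v = load i32* %p, align 8
  ret i32 %v
}
```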
Gabor Greif 6d673953e3 eliminate CallInst::ArgOffset
llvm-svn: 108522
2010-07-16 09:38:02 +00:00
Duncan Sands 41b4a6b36a Convert some tab stops into spaces.
llvm-svn: 108130
2010-07-12 08:16:59 +00:00
Chris Lattner 25eea4db66 fix PR7311 by avoiding breaking casts when a bitcast from scalar->vector
is involved.

llvm-svn: 108117
2010-07-12 01:19:22 +00:00
Chris Lattner 601e390a3b make the prototypes for CreateMalloc and CreateFree more consistent. Patch
by Hans Vandierendonck from PR7605

llvm-svn: 108116
2010-07-12 00:57:28 +00:00
Gabor Greif 9dc154bcb4 reformulate CallSite::getCallee to adapt to CallInst::ArgOffset, and make it work even if CallInst::op_* are private
llvm-svn: 107390
2010-07-01 10:41:37 +00:00
Gabor Greif eec74583ca encode operand initializations (at fixed index)
in terms of Op<> and ArgOffset. This works for
values of {0, 1} for ArgOffset.
Please note that ArgOffset will become 0 soon and
will go away eventually.

llvm-svn: 107129
2010-06-29 11:41:38 +00:00
Dan Gohman dd41bba517 Use A.append(...) instead of A.insert(A.end(), ...) when A is a
SmallVector, and other SmallVector simplifications.

llvm-svn: 106452
2010-06-21 19:47:52 +00:00
Dan Gohman 0d7f3b8195 Split the logic behind CastInst::isNoopCast into a separate static function,
as is done with most other cast opcode predicates.

llvm-svn: 105008
2010-05-28 21:41:37 +00:00