Commit Graph

129 Commits

Nick Lewycky 2eb3ade089 Fix unused value warning for value used only in assert.
llvm-svn: 146440
2011-12-12 22:59:34 +00:00
Chandler Carruth d4a02403b3 Don't rely on there being one argument before we've actually identified
a function to upgrade. Also, simplify the code a bit at the expense of
one line.

llvm-svn: 146368
2011-12-12 10:57:20 +00:00
Chandler Carruth 58a71ed339 Switch llvm.cttz and llvm.ctlz to accept a second i1 parameter which
indicates whether the intrinsic has a defined result for a first
argument equal to zero. This will eventually allow these intrinsics to
accurately model the semantics of GCC's __builtin_ctz and __builtin_clz
and the X86 instructions (prior to AVX) which implement them.

This patch merely sets the stage by extending the signature of these
intrinsics and establishing auto-upgrade logic so that the old spelling
still works both in IR and in bitcode. The upgrade logic preserves the
existing (inefficient) semantics. This patch should not change any
behavior. CodeGen isn't updated because it can use the existing
semantics regardless of the flag's value.

Note that this will be followed by API updates to Clang and DragonEgg.

Reviewed by Nick Lewycky!

llvm-svn: 146357
2011-12-12 04:26:04 +00:00
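
A minimal IR sketch of the new spelling described above, using i32 and a hypothetical wrapper function for illustration; the auto-upgrader rewrites old one-argument calls into this form with the flag set to false, which keeps the old defined-at-zero semantics.

```llvm
; New two-argument form: the trailing i1 says whether a zero input
; produces an undefined result.
declare i32 @llvm.cttz.i32(i32, i1)
declare i32 @llvm.ctlz.i32(i32, i1)

define i32 @count_trailing_zeros(i32 %x) {
  ; Old calls of the shape "call i32 @llvm.cttz.i32(i32 %x)" are upgraded to
  ; this, with i1 false preserving the existing (inefficient) semantics.
  %r = call i32 @llvm.cttz.i32(i32 %x, i1 false)
  ret i32 %r
}
```
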
Chris Lattner 0bcbde46e2 Eli managed to kill off llvm.membarrier in LLVM 3.0 as well, which means
that mainline needs no autoupgrade logic for intrinsics yet, woohoo!

llvm-svn: 145178
2011-11-27 08:42:07 +00:00
Chris Lattner 410f3d7f5d The llvm.atomic intrinsics *were* removed in LLVM 3.0 (in r141333); remove the
autoupgrade logic for 2.9 and before.

llvm-svn: 145176
2011-11-27 08:18:55 +00:00
Chris Lattner ee471c484a remove autoupgrade support for old forms of llvm.prefetch and the old
trampoline forms.  Both of these were correct in LLVM 3.0, and we don't
need to support LLVM 2.9 and earlier in mainline.

llvm-svn: 145174
2011-11-27 07:42:04 +00:00
Chris Lattner 90ef78c07f remove autoupgrade support for really old-style debug info intrinsics.
I think this is the last of autoupgrade that can be removed in 3.1.
Can the atomic upgrade stuff also go?

llvm-svn: 145169
2011-11-27 06:18:33 +00:00
Chris Lattner 6aa6c0c3b7 remove some old autoupgrade logic
llvm-svn: 145167
2011-11-27 06:10:54 +00:00
Chris Lattner db89153969 remove autoupgrade support for LLVM 2.9 exception stuff. Mainline supports
LLVM 3.0 and later.

llvm-svn: 145165
2011-11-27 05:56:16 +00:00
Eli Friedman 1456cd20b4 Remove the old atomic intrinsics. Autoupgrade functionality is included with this patch.
llvm-svn: 141333
2011-10-06 23:20:49 +00:00
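
For context, a rough before/after sketch of what dropping the old atomic intrinsics looks like at the IR level, in 3.0-era syntax; the old intrinsic name and the chosen ordering are from memory, so treat them as assumptions rather than the upgrader's exact output.

```llvm
define i32 @fetch_add(i32* %p) {
  ; Before (2.9-era bitcode), roughly:
  ;   %old = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %p, i32 1)
  ; After the upgrade, a first-class instruction:
  %old = atomicrmw add i32* %p, i32 1 seq_cst
  ; llvm.memory.barrier calls similarly became fence instructions.
  fence seq_cst
  ret i32 %old
}
```
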
Duncan Sands a098436b32 Split the init.trampoline intrinsic, which currently combines GCC's
init.trampoline and adjust.trampoline intrinsics, into two intrinsics
like in GCC.  While having one combined intrinsic is tempting, it is
not natural because typically the trampoline initialization needs to
be done in one function, and the result of adjust trampoline is needed
in a different (nested) function.  To get around this llvm-gcc hacks the
nested function lowering code to insert an additional parent variable
holding the adjust.trampoline result that can be accessed from the child
function.  Dragonegg doesn't have the luxury of tweaking GCC code, so it
stored the result of adjust.trampoline in the memory GCC set aside for
the trampoline itself (this is always available in the child function),
and set up some new memory (using an alloca) to hold the trampoline.
Unfortunately this breaks Go which allocates trampoline memory on the
heap and wants to use it even after the parent has exited (!).  Rather
than doing even more hacks to get Go working, it seemed best to just use
two intrinsics like in GCC.  Patch mostly by Sanjoy Das.

llvm-svn: 139140
2011-09-06 13:37:06 +00:00
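
An IR-level sketch of the split, with 3.0-era i8* pointers and a hypothetical caller: initialization and address adjustment are now two separate intrinsics, so the adjust step can be emitted in the nested function.

```llvm
declare void @llvm.init.trampoline(i8*, i8*, i8*)
declare i8* @llvm.adjust.trampoline(i8*)

define i8* @make_trampoline(i8* %tramp_mem, i8* %nested_fn, i8* %static_chain) {
  ; Previously a single llvm.init.trampoline call both filled the trampoline
  ; memory and returned the callable address.
  call void @llvm.init.trampoline(i8* %tramp_mem, i8* %nested_fn, i8* %static_chain)
  %callable = call i8* @llvm.adjust.trampoline(i8* %tramp_mem)
  ret i8* %callable
}
```
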
Bill Wendling acaad83cd0 The insertion point for the loads is right before the llvm.eh.exception
call. The call may be in the same BB as the landingpad instruction. If that's
the case, then inserting the loads after the landingpad inst, but before the
extractvalues, causes undefined behavior.

llvm-svn: 139088
2011-09-04 09:02:18 +00:00
Bill Wendling 7c1d6358a2 Don't reload the values that are already there. The llvm.eh.resume uses the same
values that the resume instruction uses.
PR10850

llvm-svn: 139076
2011-09-03 01:38:17 +00:00
Bill Wendling 73e6333ce1 No need to get fancy inserting a PHI node when the values are stored in stack
slots. This fixes a bug where the number of nodes coming into the PHI node may
not equal the number of predecessors. E.g., two or more landingpad instructions
may require a PHI before reaching the eh.exception and eh.selector instructions.

llvm-svn: 139035
2011-09-02 21:17:08 +00:00
Bill Wendling 5b49bb6bf5 Perform the upgrading of the old EH to the new EH in a more sane manner.
Perform the upgrading in steps.

* First, create a map of the invokes to the EH intrinsics.

* Next, take that mapping and determine if the invoke's unwind destination has a
  single predecessor. If not, then create a new empty block to hold the new
  landingpad instruction.

* Create a landingpad instruction in the unwind destination. Fill it with the
  values from the old selector. Map the old intrinsic calls to the new
  landingpad values (there may be multiple landingpad instructions per intrinsic
  call pair).

* Go through the old intrinsic calls, create a PHI node when necessary, and then
  replace their values with the new values from the landingpad instructions.

* Delete all dead instructions.

* ???

* Profit!

llvm-svn: 138990
2011-09-02 01:30:08 +00:00
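
A compressed before/after sketch of what the steps above produce, in 3.0-era syntax with a placeholder personality and clause; the old-scheme calls shown in the comment are approximate.

```llvm
declare i32 @__gxx_personality_v0(...)
declare void @may_throw()

define void @f() {
entry:
  invoke void @may_throw() to label %cont unwind label %lpad
cont:
  ret void
lpad:
  ; Old scheme (roughly):
  ;   %exn = call i8* @llvm.eh.exception()
  ;   %sel = call i32 @llvm.eh.selector(i8* %exn, i8* <personality>, i8* null)
  ; New scheme after the upgrade:
  %lp = landingpad { i8*, i32 }
          personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
          catch i8* null
  %exn = extractvalue { i8*, i32 } %lp, 0
  %sel = extractvalue { i8*, i32 } %lp, 1
  resume { i8*, i32 } %lp
}
```
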
Bill Wendling 032c60c1a0 Only delete instructions once.
llvm-svn: 138700
2011-08-27 06:10:02 +00:00
Bill Wendling 45449b1cba Initial check in that will auto-upgrade the old EH scheme to the new EH scheme.
This upgrade suffers from the problems of the old EH scheme - i.e., that the
calls to llvm.eh.exception() and llvm.eh.selector() can wander off and get
lost. It makes a valiant effort to reclaim these little lost lambs.

This is a first draft, so it hasn't yet been hooked up to the parser.

llvm-svn: 138602
2011-08-25 23:22:40 +00:00
Chris Lattner 229907cd11 land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Jay Foad 5bd375a6cc Convert CallInst and InvokeInst APIs to use ArrayRef.
llvm-svn: 135265
2011-07-15 08:37:34 +00:00
Chris Lattner b372f66b62 rework the remaining autoupgrade logic to use a StringRef instead of creating a
temporary std::string for every function being checked.

llvm-svn: 133355
2011-06-18 18:56:39 +00:00
Chris Lattner 80ed9dc9e5 rip out a ton of intrinsic modernization logic from AutoUpgrade.cpp, which is
for pre-2.9 bitcode files.  We keep x86 unaligned loads, movnt, crc32, and the
target indep prefetch change.

As usual, updating the testsuite is a PITA.

llvm-svn: 133337
2011-06-18 06:05:24 +00:00
Bruno Cardoso Lopes dc9ff3a4b1 Add one more argument to the prefetch intrinsic to indicate whether it's a data
or instruction cache access. Update the targets to match it and also teach
autoupgrade.

llvm-svn: 132976
2011-06-14 04:58:37 +00:00
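
A sketch of the extended signature, with a hypothetical caller; the operand meanings below are as documented for llvm.prefetch (address, read/write, locality, cache type), but double-check the exact values before relying on them.

```llvm
declare void @llvm.prefetch(i8*, i32, i32, i32)

define void @warm_cache(i8* %p) {
  ; Operands: address, rw (0 = read, 1 = write), locality (0-3), and the new
  ; cache-type operand (1 = data, 0 = instruction).  The auto-upgrader appends
  ; the data-cache value to old three-operand calls.
  call void @llvm.prefetch(i8* %p, i32 0, i32 3, i32 1)
  ret void
}
```
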
Chad Rosier 3252177f16 CRC32 intrinsics were renamed at revision 132163. This submission
fixes aliasing issues with the old and new names and adds test
cases for the auto-upgrader.
Fixes rdar 9472944.

llvm-svn: 132207
2011-05-27 19:38:10 +00:00
Chad Rosier b362884ca9 Renamed llvm.x86.sse42.crc32 intrinsics; crc64 doesn't exist.
crc32.[8|16|32] have been renamed to .crc32.32.[8|16|32] and
crc64.[8|16|32] have been renamed to .crc32.64.[8|64].

llvm-svn: 132163
2011-05-26 23:13:19 +00:00
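
For reference, the renamed declarations as I recall them; the operand types here are an assumption, so verify against the x86 intrinsic tables before relying on them.

```llvm
; Old names: @llvm.x86.sse42.crc32.[8|16|32] and @llvm.x86.sse42.crc64.*
; New names encode the accumulator width first, then the data width:
declare i32 @llvm.x86.sse42.crc32.32.8(i32, i8)
declare i32 @llvm.x86.sse42.crc32.32.16(i32, i16)
declare i32 @llvm.x86.sse42.crc32.32.32(i32, i32)
declare i64 @llvm.x86.sse42.crc32.64.64(i64, i64)
```
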
Bill Wendling db0996c822 Replace the "movnt" intrinsics with a native store + nontemporal metadata bit.
<rdar://problem/8460511>

llvm-svn: 130791
2011-05-03 21:11:17 +00:00
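
A minimal sketch of the replacement form, in 3.0-era syntax with an illustrative vector type and alignment: the nontemporal hint becomes metadata on an ordinary store.

```llvm
define void @stream_store(<4 x float>* %p, <4 x float> %v) {
  ; Replaces calls to the old movnt-style store intrinsics.
  store <4 x float> %v, <4 x float>* %p, align 16, !nontemporal !0
  ret void
}
!0 = metadata !{i32 1}
```
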
Bill Wendling b902f1dd88 Reapply r129401 with patch for clang.
llvm-svn: 129419
2011-04-13 00:36:11 +00:00
Bill Wendling dbfde42468 Revert r129401 for now. Clang is using the old way of doing things.
llvm-svn: 129403
2011-04-12 22:59:27 +00:00
Bill Wendling 47c24875a1 Remove the unaligned load intrinsics in favor of using native unaligned loads.
Now that we have a first-class way to represent unaligned loads, the unaligned
load intrinsics are superfluous.

First part of <rdar://problem/8460511>.

llvm-svn: 129401
2011-04-12 22:46:31 +00:00
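
A sketch of the native form that replaces the unaligned-load intrinsics, in 3.0-era load syntax; the old intrinsic named in the comment is recalled from memory.

```llvm
define <4 x float> @load_unaligned(<4 x float>* %p) {
  ; Replaces calls such as  call <4 x float> @llvm.x86.sse.loadu.ps(i8* ...)
  %v = load <4 x float>* %p, align 1
  ret <4 x float> %v
}
```
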
Bill Wendling c73eda1e89 Remove dead code.
llvm-svn: 128519
2011-03-30 01:03:48 +00:00
Evan Cheng 18381b4257 Add intrinsics @llvm.arm.neon.vmulls and @llvm.arm.neon.vmullu.* back. Frontends
were lowering them to sext / uxt + mul instructions. Unfortunately the
optimization passes may hoist the extensions out of the loop and separate them.
When that happens, the long multiplication instructions can be broken into
several scalar instructions, causing a significant performance issue.

Note the vmla and vmls intrinsics are not added back. The frontend will codegen them
as vmull* intrinsics + add / sub. Also note the isel optimizations for catching
mul + sext / zext are not changed either.

First part of rdar://8832507, rdar://9203134

llvm-svn: 128502
2011-03-29 23:06:19 +00:00
Chris Lattner 69229316aa convert ConstantVector::get to use ArrayRef.
llvm-svn: 125537
2011-02-15 00:14:00 +00:00
Chris Lattner 34442e6ebf revert my ConstantVector patch; it seems to have made the llvm-gcc
builders unhappy.

llvm-svn: 125504
2011-02-14 18:15:46 +00:00
Chris Lattner d9f5b88548 Switch ConstantVector::get to use ArrayRef instead of a pointer+size
idiom.  Change various clients to simplify their code.

llvm-svn: 125487
2011-02-14 07:55:32 +00:00
Bill Wendling 402e54822b The pshufw instruction came about in MMX2 when SSE was introduced. Don't place
it in with the SSSE3 instructions.

Steward! Could you place this chair by the aft sun deck? I'm trying to get away
from the Astors. They are such boors!

llvm-svn: 115552
2010-10-04 20:24:01 +00:00
Dale Johannesen dd224d2333 Massive rewrite of MMX:
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.

Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics. 

MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces.  Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into a
pre-existing x86_mmx operation.

The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.

llvm-svn: 115243
2010-09-30 23:57:10 +00:00
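
A small sketch of the new type in use, limited to the operations the message above says are supported (load, store, bitcast); the function name is hypothetical.

```llvm
define void @to_mmx(<2 x i32> %v, x86_mmx* %p) {
  ; Generic <2 x i32> values only become x86_mmx through an explicit bitcast,
  ; so optimizers never introduce MMX operations on their own.
  %m = bitcast <2 x i32> %v to x86_mmx
  store x86_mmx %m, x86_mmx* %p
  ret void
}
```
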
Bill Wendling 55165fed5d Use StringRef which performs the "early exit" when compared against a constant
string.

llvm-svn: 113615
2010-09-10 20:42:26 +00:00
Bill Wendling 6a57e249df Early exit with simple checks.
llvm-svn: 113603
2010-09-10 19:06:58 +00:00
Bill Wendling e26fffc597 Auto-upgrade the magic ".llvm.eh.catch.all.value" global to
"llvm.eh.catch.all.value". Only the name needs to be changed.

llvm-svn: 113600
2010-09-10 18:51:56 +00:00
Bob Wilson f65c9ef720 Replace NEON vabdl, vaba, and vabal intrinsics with combinations of the
vabd intrinsic and add and/or zext operations.  In the case of vaba, this
also avoids the need for a DAG combine pattern to combine vabd with add.
Update tests.  Auto-upgrade the old intrinsics.

llvm-svn: 112941
2010-09-03 01:35:08 +00:00
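
A sketch of the accumulating case (vabal) for unsigned 8-bit elements; the vabd intrinsic suffix is recalled from memory and the wrapper name is hypothetical.

```llvm
declare <8 x i8> @llvm.arm.neon.vabdu.v8i8(<8 x i8>, <8 x i8>)

define <8 x i16> @vabal_u8(<8 x i16> %acc, <8 x i8> %a, <8 x i8> %b) {
  ; vabal = absolute difference, widened, then accumulated.
  %abd = call <8 x i8> @llvm.arm.neon.vabdu.v8i8(<8 x i8> %a, <8 x i8> %b)
  %ext = zext <8 x i8> %abd to <8 x i16>
  %res = add <8 x i16> %acc, %ext
  ret <8 x i16> %res
}
```
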
Bob Wilson 38ab35a911 Remove NEON vmull, vmlal, and vmlsl intrinsics, replacing them with multiply,
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests.  Add auto-upgrade support for the old intrinsics.

llvm-svn: 112773
2010-09-01 23:50:19 +00:00
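
A sketch of the vmull replacement for signed 16-bit elements (hypothetical wrapper name); vmlal and vmlsl add an accumulate add or sub on top of the same pattern.

```llvm
define <4 x i32> @vmull_s16(<4 x i16> %a, <4 x i16> %b) {
  %a.wide = sext <4 x i16> %a to <4 x i32>
  %b.wide = sext <4 x i16> %b to <4 x i32>
  %prod   = mul <4 x i32> %a.wide, %b.wide
  ret <4 x i32> %prod
}
```
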
Bob Wilson 4cd8a126c3 Remove NEON vmovn intrinsic, replacing it with vector truncate operations.
Auto-upgrade the old intrinsic and update tests.

llvm-svn: 112507
2010-08-30 20:02:30 +00:00
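
The vmovn replacement is simply a vector truncate; a short sketch with illustrative types:

```llvm
define <4 x i16> @vmovn_i32(<4 x i32> %a) {
  %narrow = trunc <4 x i32> %a to <4 x i16>
  ret <4 x i16> %narrow
}
```
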
Bob Wilson d0c054886c Remove NEON vaddl, vaddw, vsubl, and vsubw intrinsics. Instead, use llvm
IR add/sub operations with one or both operands sign- or zero-extended.
Auto-upgrade the old intrinsics.

llvm-svn: 112416
2010-08-29 05:57:34 +00:00
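
A sketch of the unsigned 8-bit cases (hypothetical wrapper names): vaddl extends both operands before the add, vaddw extends only the narrow one; the vsub forms are the same with sub.

```llvm
define <8 x i16> @vaddl_u8(<8 x i8> %a, <8 x i8> %b) {
  %a.wide = zext <8 x i8> %a to <8 x i16>
  %b.wide = zext <8 x i8> %b to <8 x i16>
  %sum = add <8 x i16> %a.wide, %b.wide
  ret <8 x i16> %sum
}

define <8 x i16> @vaddw_u8(<8 x i16> %a, <8 x i8> %b) {
  %b.wide = zext <8 x i8> %b to <8 x i16>
  %sum = add <8 x i16> %a, %b.wide
  ret <8 x i16> %sum
}
```
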
Bob Wilson edf722add3 Add alignment arguments to all the NEON load/store intrinsics.
Update all the tests using those intrinsics and add support for
auto-upgrading bitcode files with the old versions of the intrinsics.

llvm-svn: 112271
2010-08-27 17:13:24 +00:00
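
As I recall, the alignment was appended as a trailing i32 operand; treat the exact position and the upgrade default below as assumptions.

```llvm
; Old: declare <8 x i8> @llvm.arm.neon.vld1.v8i8(i8*)
; New, with the alignment in bytes as the last operand:
declare <8 x i8> @llvm.arm.neon.vld1.v8i8(i8*, i32)

define <8 x i8> @load_v8i8(i8* %p) {
  ; Upgraded calls get a conservative alignment appended (1 here for illustration).
  %v = call <8 x i8> @llvm.arm.neon.vld1.v8i8(i8* %p, i32 1)
  ret <8 x i8> %v
}
```
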
Bob Wilson 9a511c07e4 Replace the arm.neon.vmovls and vmovlu intrinsics with vector sign-extend and
zero-extend operations.

llvm-svn: 111614
2010-08-20 04:54:02 +00:00
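
These map one-for-one onto IR extension instructions; a short sketch with illustrative types:

```llvm
define <4 x i32> @vmovl_s16(<4 x i16> %a) {
  %wide = sext <4 x i16> %a to <4 x i32>   ; the unsigned vmovlu form uses zext
  ret <4 x i32> %wide
}
```
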
Gabor Greif 3e44ea1917 undo 80 column trespassing I caused
llvm-svn: 109092
2010-07-22 10:37:47 +00:00
Gabor Greif eab748d409 use ArgOperand API
llvm-svn: 107145
2010-06-29 16:17:26 +00:00
Gabor Greif e54065394e use helper to neatly access arguments
llvm-svn: 106622
2010-06-23 08:45:32 +00:00
Gabor Greif c89d2aad4c use high-level accessors
llvm-svn: 106573
2010-06-22 20:40:38 +00:00
Eric Christopher 64831c6a4c Remove the palignr intrinsics now that we lower them to vector shuffles,
shifts and null vectors. Autoupgrade these to what we'd lower them to.

Add a testcase to exercise this.

llvm-svn: 101851
2010-04-20 00:59:54 +00:00
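
A hedged sketch of the shuffle form palignr lowers to; the operand order and the fixed offset of 4 bytes are illustrative, not lifted from the upgrader, so treat them as assumptions.

```llvm
define <16 x i8> @palignr_by_4(<16 x i8> %a, <16 x i8> %b) {
  ; Selects a contiguous 16-byte window (bytes 4..19) from the concatenation of
  ; the two inputs; an offset of 0 degenerates to one input unchanged, and
  ; offsets of 16 or more need a null vector operand instead, which is where the
  ; "shifts and null vectors" mentioned above come in.
  %r = shufflevector <16 x i8> %b, <16 x i8> %a,
         <16 x i32> <i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11,
                     i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19>
  ret <16 x i8> %r
}
```
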
Eric Christopher 7258dcd77f Revert 101465; it broke internal OpenGL testing.
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating it.

llvm-svn: 101579
2010-04-16 23:37:20 +00:00