Commit Graph

269 Commits

Author SHA1 Message Date
Richard Smith 7b553f1b19 Rename Expr::Evaluate to Expr::EvaluateAsRValue to make it clear that it will
implicitly perform an lvalue-to-rvalue conversion if used on an lvalue
expression. Also improve the documentation of Expr::Evaluate* to indicate which
of them will accept expressions with side-effects.

llvm-svn: 143263
2011-10-29 00:50:52 +00:00
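
A minimal sketch of how the renamed entry point might be used inside Clang; Expr::EvaluateAsRValue and Expr::EvalResult are real API, the surrounding helper is hypothetical:

    #include "clang/AST/ASTContext.h"
    #include "clang/AST/Expr.h"

    // Hypothetical helper: try to fold E to a constant.  If E is an lvalue
    // expression, EvaluateAsRValue implicitly performs the lvalue-to-rvalue
    // conversion before folding.
    static bool tryFoldToRValue(const clang::Expr *E,
                                const clang::ASTContext &Ctx,
                                clang::Expr::EvalResult &Result) {
      return E->EvaluateAsRValue(Result, Ctx);
    }
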
Eli Friedman df14b3a837 Initial implementation of __atomic_* (everything except __atomic_is_lock_free).
llvm-svn: 141632
2011-10-11 02:20:01 +00:00
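
As a usage sketch (not from the commit itself), the GCC-compatible __atomic_* builtins being implemented here look like this in source code; the memory orders are the standard __ATOMIC_* macros:

    #include <cstdio>

    int main() {
      int counter = 0;

      // Atomically add 1 and return the previous value.
      int old = __atomic_fetch_add(&counter, 1, __ATOMIC_ACQ_REL);

      // Atomic load and store with explicit memory orders.
      int seen = __atomic_load_n(&counter, __ATOMIC_ACQUIRE);
      __atomic_store_n(&counter, 42, __ATOMIC_RELEASE);

      std::printf("old=%d seen=%d\n", old, seen);
      return 0;
    }
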
Richard Smith caf3390d44 Constant expression evaluation refactoring:
 - Remodel Expr::EvaluateAsInt to behave like the other EvaluateAs* functions,
   and add Expr::EvaluateKnownConstInt to capture the current fold-or-assert
   behaviour.
 - Factor out evaluation of bitfield bit widths.
 - Fix a few places which would evaluate an expression twice: once to determine
   whether it is a constant expression, then again to get the value.

llvm-svn: 141561
2011-10-10 18:28:20 +00:00
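
A short sketch of the distinction this refactoring introduces (E and Ctx are assumed to be a clang::Expr* and the clang::ASTContext): EvaluateAsInt reports whether folding succeeded, while EvaluateKnownConstInt captures the old fold-or-assert behaviour for callers that already know E is constant.

    // The caller has already checked that E is a constant expression, so a
    // failure to fold here is a bug; EvaluateKnownConstInt asserts on failure
    // and returns the folded value directly.
    llvm::APSInt Value = E->EvaluateKnownConstInt(Ctx);
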
Eli Friedman c8b57f6683 llvm.memory.barrier is going away; remove the wrapper intrinsic __builtin_llvm_memory_barrier.
__atomic_thread_fence will be landing soon as a replacement, wrapping around the new fence instruction.

llvm-svn: 141332
2011-10-06 23:12:03 +00:00
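
A sketch of the replacement this commit anticipates: the GCC-compatible fence builtin in user code, which lowers to the new LLVM fence instruction (the publish example itself is illustrative):

    int data;
    int flag;

    void publish(int value) {
      data = value;
      // Release fence: prior writes become visible before flag is set.
      __atomic_thread_fence(__ATOMIC_RELEASE);
      __atomic_store_n(&flag, 1, __ATOMIC_RELAXED);
    }
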
Benjamin Kramer 76399eb2ad de-tmpify clang.
llvm-svn: 140637
2011-09-27 21:06:10 +00:00
David Blaikie 83d382b1ca Switch assert(0/false) to llvm_unreachable.
llvm-svn: 140367
2011-09-23 05:06:16 +00:00
Eli Friedman 2dadd3ebee Fix comment.
llvm-svn: 139678
2011-09-14 00:52:45 +00:00
John McCall 30e4efd458 Correctly generate IR for casted "builtin" functions, where
the builtin is really just a predefined declaration.  These are
totally valid to cast.

llvm-svn: 139657
2011-09-13 23:05:03 +00:00
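
A hedged illustration of the pattern this commit fixes codegen for: calling a predefined "builtin" library function such as printf through a cast, which is perfectly valid (the example itself is not from the commit):

    #include <cstdio>

    // Casting a builtin-backed library function to a compatible function
    // pointer type and calling through it is legal; CodeGen must emit a
    // normal call rather than mishandling it as an intrinsic-style builtin.
    typedef int (*PrintFn)(const char *, ...);

    int main() {
      PrintFn p = reinterpret_cast<PrintFn>(std::printf);
      return p("hello, %d\n", 42) > 0 ? 0 : 1;
    }
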
Eli Friedman 84d2812111 Re-commit r139643.
Make clang use Acquire loads and Release stores where necessary.

llvm-svn: 139650
2011-09-13 22:21:56 +00:00
Eli Friedman acca089617 Revert r139643 while I look into it; it's breaking selfhost.
llvm-svn: 139648
2011-09-13 22:08:16 +00:00
Eli Friedman f92b2e0714 Make clang use Acquire loads and Release stores where necessary.
llvm-svn: 139643
2011-09-13 21:31:32 +00:00
Julien Lerouge e0d5fad37b Remove trailing } in comment.
llvm-svn: 139424
2011-09-09 22:46:39 +00:00
Julien Lerouge 5a6b6987dc Bring llvm.annotation* intrinsics support back to where it was in llvm-gcc:
global and local variables, struct fields, and arbitrary statements can be
annotated (the latter using __builtin_annotation). rdar://8037476

llvm-svn: 139423
2011-09-09 22:41:49 +00:00
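
A hedged sketch of the facilities this commit restores; the annotate attribute and __builtin_annotation are real Clang features, the annotation strings are just examples:

    // Global and local variables and struct fields can carry annotate
    // attributes, which survive into the generated IR.
    __attribute__((annotate("config"))) int GlobalKnob = 0;

    struct Packet {
      __attribute__((annotate("checked"))) int Length;
    };

    int Scale(int X) {
      __attribute__((annotate("hot_local"))) int Tmp = X * 2;
      // Arbitrary integer expressions can be annotated with
      // __builtin_annotation, which returns its first argument unchanged.
      return __builtin_annotation(Tmp, "traced");
    }
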
Eli Friedman e9f8113ec4 Switch clang over to using fence/atomicrmw/cmpxchg instead of the intrinsics (which will go away). LLVM CodeGen does almost exactly the same thing with these and the old intrinsics, so I'm reasonably confident this will not break anything.
There are still a few issues which need to be resolved with code generation for atomic load and store, so I'm not converting the places which need those for now.

I'm not entirely sure what to do about __builtin_llvm_memory_barrier: the fence instruction doesn't expose all the possibilities which can be expressed by __builtin_llvm_memory_barrier.  I would appreciate hearing from anyone who is using this intrinsic.

llvm-svn: 139216
2011-09-07 01:41:24 +00:00
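
For context (not from the commit), the kind of source-level builtins affected; after this change they lower to first-class atomicrmw, cmpxchg and fence instructions rather than the old llvm.atomic.* intrinsics:

    int sync_examples(volatile int *p, int oldv, int newv) {
      // Lowers to an atomicrmw add instruction.
      int before = __sync_fetch_and_add(p, 1);
      // Lowers to a cmpxchg instruction.
      int prev = __sync_val_compare_and_swap(p, oldv, newv);
      // Lowers to a fence instruction.
      __sync_synchronize();
      return before + prev;
    }
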
Ted Kremenek c14efa7122 Fix a handful of dead stores found by Clang's static analyzer. There's a bunch of others I haven't touched.
llvm-svn: 137867
2011-08-17 21:04:19 +00:00
Bob Wilson 445c24f8f0 Move handling of vget_lane/vset_lane before the code that checks the type.
Unlike most of the other Neon intrinsics, these are not overloaded and do not
have the extra argument that specifies the vector type.  This has not been
fatal because the lane number operand is supposed to be an ICE and so that
value has harmlessly been used as the type identifier.  Radar 9901281.

llvm-svn: 137550
2011-08-13 05:03:46 +00:00
Jay Foad 5709f7c5f7 Remove some unnecessary single element array temporaries.
llvm-svn: 136461
2011-07-29 13:56:53 +00:00
Frits van Bommel ede0dc6dda Shorten some expressions by using ArrayRef::slice().
llvm-svn: 135910
2011-07-25 15:13:01 +00:00
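
A small sketch of the ArrayRef::slice() idiom being adopted (llvm::ArrayRef is real; the data and helper are illustrative):

    #include "llvm/ADT/ArrayRef.h"

    static int sumTail(llvm::ArrayRef<int> Values) {
      // slice(1) drops the first element; slice(N, M) takes M elements
      // starting at index N.
      int Sum = 0;
      for (int V : Values.slice(1))
        Sum += V;
      return Sum;
    }
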
Chris Lattner 0e62c1cc0b remove unneeded llvm:: namespace qualifiers on some core types now that LLVM.h imports
them into the clang namespace.

llvm-svn: 135852
2011-07-23 10:55:15 +00:00
Frits van Bommel 717d7edd3e Migrate LLVM and Clang to use the new makeArrayRef(...) functions where previously explicit non-default constructors were used.
Mostly mechanical with some manual reformatting.

llvm-svn: 135390
2011-07-18 12:00:32 +00:00
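
A sketch of the style being migrated to (the helpers are illustrative; note that much newer LLVM deprecates makeArrayRef in favour of deduced ArrayRef construction):

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/ADT/SmallVector.h"

    static int takeArgs(llvm::ArrayRef<int> Args) {
      return static_cast<int>(Args.size());
    }

    static int caller() {
      llvm::SmallVector<int, 4> Ops;
      Ops.push_back(1);
      Ops.push_back(2);
      // Before: an explicit ArrayRef<int>(Ops.data(), Ops.size()) constructor;
      // after: let makeArrayRef deduce the element type from the container.
      return takeArgs(llvm::makeArrayRef(Ops));
    }
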
Chris Lattner 2192fe50da de-constify llvm::Type, patch by David Blaikie!
llvm-svn: 135370
2011-07-18 04:24:23 +00:00
Jay Foad 5bd375a6cc Convert CallInst and InvokeInst APIs to use ArrayRef.
llvm-svn: 135265
2011-07-15 08:37:34 +00:00
Benjamin Kramer 8d375cef55 Change intrinsic getter to take an ArrayRef, now that the underlying function in LLVM does.
llvm-svn: 135155
2011-07-14 17:45:50 +00:00
Chris Lattner a5f58b05e8 clang side to match the LLVM IR type system rewrite patch.
llvm-svn: 134831
2011-07-09 17:41:47 +00:00
Jakub Staszak d2cf2cbae9 Introduce __builtin_expect() intrinsic support.
llvm-svn: 134761
2011-07-08 22:45:14 +00:00
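
A usage sketch of the builtin gaining intrinsic support here (the error-path example is illustrative):

    #include <cstdio>
    #include <cstdlib>

    void process(int err) {
      // Hint that the error path is unlikely; codegen can now emit llvm.expect.
      if (__builtin_expect(err != 0, 0)) {
        std::fprintf(stderr, "unexpected error %d\n", err);
        std::abort();
      }
    }
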
Cameron Zwarich ae7bc98710 Add codegen support for the fma/fmal/fmaf builtins.
llvm-svn: 134743
2011-07-08 21:39:34 +00:00
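
A short sketch of the builtins gaining codegen support; they compute a fused multiply-add (a*b + c with a single rounding) for double, long double and float respectively:

    double fused(double a, double b, double c) {
      return __builtin_fma(a, b, c);
    }

    float fusedf(float a, float b, float c) {
      return __builtin_fmaf(a, b, c);
    }
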
Bob Wilson 6bc2164d2a Revert "Shorten some ARM builtin names by removing unnecessary "neon" prefix."
Sorry, this was a bad idea.  Within clang these builtins are in a separate
"ARM" namespace, but the actual builtin names should clearly distinguish tha
they are target specific.

llvm-svn: 133833
2011-06-24 22:13:26 +00:00
Bob Wilson 932e5b5d52 Shorten some ARM builtin names by removing unnecessary "neon" prefix.
llvm-svn: 133826
2011-06-24 21:32:46 +00:00
Chris Lattner 845511fe1c update for api change.
llvm-svn: 133365
2011-06-18 22:49:11 +00:00
Bruno Cardoso Lopes 3b0297a98c Update the prefetch intrinsic usage. Now the last argument tells codegen
whether it's a data or instruction cache access.

llvm-svn: 132977
2011-06-14 05:00:30 +00:00
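
For context (not from the commit), the source-level builtin involved: __builtin_prefetch issues a data prefetch, and the extra llvm.prefetch operand now records the data-versus-instruction distinction:

    void touch_later(const char *p) {
      // Arguments: address, rw (0 = read, 1 = write), locality (0..3).
      __builtin_prefetch(p, /*rw=*/0, /*locality=*/3);
    }
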
Benjamin Kramer df1fb13a5c Eliminate temporary argument vectors.
llvm-svn: 132260
2011-05-28 14:26:31 +00:00
Bruno Cardoso Lopes fe73374d7a Add support for ARM ldrexd/strexd builtins
llvm-svn: 132249
2011-05-28 04:11:33 +00:00
Bill Wendling bb455154a1 Remove the 'unaligned load' builtins now that they're no longer used in the *mmintrin.h files.
llvm-svn: 131300
2011-05-13 18:52:28 +00:00
Bill Wendling e106c34817 LLVM doesn't always optimize away the four loads from this:
(__m128){ p[0], p[1], p[2], p[3] }

which produces really bad code. This could be done in instcombine, but it's
probably better to do it in the front-end instead.
<rdar://problem/9424836>

llvm-svn: 131237
2011-05-12 19:02:15 +00:00
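
A hedged sketch of the two forms being contrasted: an element-wise construction that emits four scalar loads, versus _mm_loadu_ps, which now lowers to a single unaligned vector load:

    #include <xmmintrin.h>

    __m128 load_elementwise(const float *p) {
      // Equivalent to the (__m128){ p[0], p[1], p[2], p[3] } form from the
      // commit message: four scalar loads that are not reliably combined.
      return _mm_set_ps(p[3], p[2], p[1], p[0]);
    }

    __m128 load_unaligned(const float *p) {
      // A single unaligned 128-bit load.
      return _mm_loadu_ps(p);
    }
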
Bill Wendling 6869b6abf8 Simplification noticed by Chris.
llvm-svn: 130864
2011-05-04 20:28:12 +00:00
Bill Wendling 5f9150b5b1 Convert the non-temporal store builtins to LLVM-native IR.
llvm-svn: 130830
2011-05-04 02:40:38 +00:00
Fariborz Jahanian 24ac1599fc Generalize the case of built-in expressions with
side effects so that their IR is generated, not just for
__builtin_expect. // rdar://9330105

llvm-svn: 130172
2011-04-25 23:10:07 +00:00
Fariborz Jahanian 5a866c0bf2 IR-gen the side effect(s) when __builtin_expect is
constant-folded. // rdar://9330105

llvm-svn: 130163
2011-04-25 22:30:02 +00:00
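
A sketch of the situation these two __builtin_expect commits address: even when the builtin's argument folds to a constant, any side effects inside it must still be emitted (the helper is illustrative):

    extern int counter;

    int bump() { return ++counter; }

    int use() {
      // The comma expression folds to the constant 0, but the call to bump()
      // has a side effect that must still appear in the generated IR.
      if (__builtin_expect((bump(), 0), 0))
        return 1;
      return 0;
    }
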
Chris Lattner 54fd1a1ad3 fix a crash on code that uses the result value of __builtin___memcpy_chk.
llvm-svn: 129892
2011-04-20 23:14:50 +00:00
Chris Lattner 30107ed600 fold memcpy/set/move_chk to llvm.memcpy/set/move when the sizes
are trivial.  This exposes opportunities earlier, and allows fastisel
to do good things with these at -O0.

This addresses rdar://9289468 - clang doesn't fold memset_chk at -O0

llvm-svn: 129651
2011-04-17 00:40:24 +00:00
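
A hedged sketch of the _chk builtins involved; when the object-size argument is a known constant that covers the copy, the checked call can be folded straight to llvm.memcpy even at -O0:

    void copy_fixed(char (&dst)[16], const char *src) {
      // __builtin_object_size(dst, 0) folds to 16 here, so the checked copy
      // needs no runtime check and lowers directly to llvm.memcpy.
      __builtin___memcpy_chk(dst, src, 8, __builtin_object_size(dst, 0));
    }
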
Michael J. Spencer 6826eb816a Add 3DNow! Intrinsics.
llvm-svn: 129570
2011-04-15 15:07:13 +00:00
Bill Wendling a865185ad6 Removing the unaligned load tests from builtins-x86.c since they're generated by a regular 'load' now.
llvm-svn: 129464
2011-04-13 20:17:22 +00:00
Bill Wendling 88ae43772a It looks like the FreeBSD buildbot needs this for the builtins-x86.c test.
llvm-svn: 129433
2011-04-13 10:02:54 +00:00
Bill Wendling b9c9e34cb3 Just use a native "load" instead of translating the builtin later. Clang can
take it!

I wasn't able to get __builtin_ia32_loaddqu to transform into an unaligned
load...I'll have to look into it further.

llvm-svn: 129427
2011-04-13 05:58:17 +00:00
Bill Wendling 3137d3cb49 Convert the unaligned load builtins to the first-class versions.
llvm-svn: 129420
2011-04-13 00:36:37 +00:00
Chris Lattner 9cb59fa834 add a __sync_swap builtin to fill out the rest of the __sync builtins.
Patch by Dave Zarzycki!

llvm-svn: 129189
2011-04-09 03:57:26 +00:00
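
A usage sketch of the new builtin: __sync_swap atomically stores a new value and returns the previous contents (the lock-slot framing is illustrative):

    int acquire_slot(volatile int *slot) {
      // Atomic exchange: write 1, return whatever was stored before.
      return __sync_swap(slot, 1);
    }
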
Matt Beaumont-Gay 873c6dd875 Oops, prefer C-style cast here
llvm-svn: 128607
2011-03-31 01:56:27 +00:00
Matt Beaumont-Gay a25fce8e9e Silence GCC warning about differing types on the branches of a conditional expression
llvm-svn: 128605
2011-03-31 01:43:22 +00:00
Bob Wilson 7201af3914 Use intrinsics for Neon vmull operations. Radar 9208957.
llvm-svn: 128590
2011-03-31 00:09:00 +00:00
Jay Foad 20c0f02cc5 Remove PHINode::reserveOperandSpace(). Instead, add a parameter to
PHINode::Create() giving the (known or expected) number of operands.

llvm-svn: 128538
2011-03-30 11:28:58 +00:00
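
A minimal sketch of the new API shape (the merge-block helper and its arguments are assumed, and header paths reflect the current tree layout): PHINode::Create now takes the expected number of incoming values, replacing the separate reserveOperandSpace() call.

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Instructions.h"

    using namespace llvm;

    static PHINode *makeMerge(Type *Ty, BasicBlock *ThenBB, Value *ThenV,
                              BasicBlock *ElseBB, Value *ElseV) {
      // Second argument: the known/expected number of incoming values.
      PHINode *PN = PHINode::Create(Ty, /*NumReservedValues=*/2, "merge");
      PN->addIncoming(ThenV, ThenBB);
      PN->addIncoming(ElseV, ElseBB);
      return PN;  // the caller inserts PN at the top of its merge block
    }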