Commit Graph

63 Commits

Author SHA1 Message Date
Diego Novillo 2567f3d0fb Add function entry count metadata.
Summary:
This adds two Function methods to handle function entry counts:
setEntryCount() and getEntryCount().

Entry counts are stored under the MD_prof metadata node with the name
"function_entry_count". They are unsigned 64 bit values set by profilers
(instrumentation and sample profiler changes coming up).

Added documentation for new profile metadata and tests.
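
For illustration only, a minimal sketch of what such an attachment could look like in textual IR once assembly support for function attachments lands (the exact syntax and the count value are not part of this commit):

; @foo was observed to be entered 1000 times by the profiler.
define void @foo() !prof !0 {
  ret void
}

!0 = !{!"function_entry_count", i64 1000}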

Reviewers: dexonsmith, bogner

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9628

llvm-svn: 237260
2015-05-13 15:13:45 +00:00
David Blaikie b340f0a7bc Replace branch-to-unreachable with assertion.
llvm-svn: 236893
2015-05-08 18:52:28 +00:00
Akira Hatanaka 3058d0f080 Let llc and opt override "-target-cpu" and "-target-features" via command line
options.

This commit fixes a bug in llc and opt where "-mcpu" and "-mattr" wouldn't
override function attributes "-target-cpu" and "-target-features" in the IR.

Differential Revision: http://reviews.llvm.org/D9537

llvm-svn: 236677
2015-05-06 23:54:14 +00:00
Sanjoy Das 06cf33fbea Add missing dereferenceable_or_null getters
Summary: Add missing dereferenceable_or_null getters required for
http://reviews.llvm.org/D9253 change. Separated from the D9253 review.

Patch by Artur Pilipenko!

Reviewers: sanjoy

Reviewed By: sanjoy

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D9499

llvm-svn: 236615
2015-05-06 17:41:54 +00:00
Duncan P. N. Exon Smith e2510cdfe8 IR: Add Function metadata attachments
Add IR support for `Metadata` attachments.  Assembly and bitcode support
will follow shortly, but for now we just have unit tests.  This is part
of PR23340.

llvm-svn: 235783
2015-04-24 21:51:02 +00:00
Duncan P. N. Exon Smith 4472b776b0 IR: Use a bitmask to access GlobalObject subclass data
Make room for more than just `Function::isMaterializable()` in the
`GlobalObject` subclass data bitfield.  Since we're treating it like a
bitfield, change `Function::Function()` to zero-out the whole thing.

llvm-svn: 235770
2015-04-24 20:47:23 +00:00
Sanjoy Das 31ea6d1590 [IR] Introduce a dereferenceable_or_null(N) attribute.
Summary:
If a pointer is marked as dereferenceable_or_null(N), LLVM assumes it
is either `null` or `dereferenceable(N)` or both.  This change only
introduces the attribute and adds a token test case for the `llvm-as`
/ `llvm-dis`.  It does not hook up other parts of the optimizer to
actually exploit the attribute -- those changes will come later.

For pointers in address space 0, `dereferenceable(N)` is now exactly
equivalent to `dereferenceable_or_null(N)` && `nonnull`.  For other
address spaces, `dereferenceable(N)` is potentially weaker than
`dereferenceable_or_null(N)` && `nonnull` (since we could have a null
`dereferenceable(N)` pointer).

The motivating case for this change is Java (and other managed
languages), where pointers are either `null` or dereferenceable up to
some usually known-at-compile-time constant offset.
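
A minimal illustrative sketch (the function and its name are hypothetical, not part of this change):

; %p is either null or points to at least 8 dereferenceable bytes.  In
; address space 0, adding nonnull as well would make this equivalent to
; dereferenceable(8).
define i64 @maybe_null(i64* dereferenceable_or_null(8) %p) {
  ret i64 0
}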

Reviewers: rafael, hfinkel

Reviewed By: hfinkel

Subscribers: nicholas, llvm-commits

Differential Revision: http://reviews.llvm.org/D8650

llvm-svn: 235132
2015-04-16 20:29:50 +00:00
David Blaikie 15d9a4c49c [opaque pointer type] Avoid using PointerType::getElementType when parsing IR
A few calls are left in for error checking - but I'm commenting those
out & trying to build some IR tests (aiming for Argument Promotion to
start with). When I get any of these tests passing I may add a flag to
disable the checking so I can add tests that pass with the assertion in
place.

llvm-svn: 234206
2015-04-06 20:59:48 +00:00
Ramkumar Ramachandra 8fcb498a9a InstCombine: propagate deref via new addDereferenceableAttr
The "dereferenceable" attribute cannot be added via .addAttribute(),
since it also expects a size in bytes. AttrBuilder#addAttribute or
AttributeSet#addAttribute is wrapped by classes Function, InvokeInst,
and CallInst. Add corresponding wrappers to
AttrBuilder#addDereferenceableAttr.

Having done this, propagate the dereferenceable attribute via
gc.relocate, adding a test to exercise it. Note that -datalayout is
required during execution over and above -instcombine, because
InstCombine only optionally requires DataLayoutPass.

Differential Revision: http://reviews.llvm.org/D7510

llvm-svn: 229265
2015-02-14 19:37:54 +00:00
Elena Demikhovsky 23a485a4ed Masked Gather and Scatter Intrinsics.
Gather and Scatter are newly introduced intrinsics, coming after the recently implemented masked load and store.
This is the first patch for Gather and Scatter intrinsics. It includes only the syntax, parsing and verification.

Gather and Scatter intrinsics allow performing multiple memory accesses (read/write) in one vector instruction.
The intrinsics are not target-specific and will have the following syntax:
Gather:
declare <16 x i32> @llvm.masked.gather.v16i32(<16 x i32*> <vector of ptrs>, i32 <alignment>, <16 x i1> <mask>, <16 x i32> <passthru>)
declare <8 x float> @llvm.masked.gather.v8f32(<8 x float*><vector of ptrs>, i32 <alignment>, <8 x i1> <mask>, <8 x float><passthru>)

Scatter:
declare void @llvm.masked.scatter.v8i32(<8 x i32><vector value to be stored> , <8 x i32*><vector of ptrs> , i32 <alignment>, <8 x i1> <mask>)
declare void @llvm.masked.scatter.v16i32(<16 x i32> <vector value to be stored> , <16 x i32*> <vector of ptrs>, i32 <alignment>, <16 x i1><mask> )

Vector of ptrs - a set of source/destination addresses to load/store the values.
Mask - switches vector lanes on/off to prevent memory accesses on switched-off lanes.
The vector of pointers, the value, and the mask must all have the same vector width.

These are code examples where gather / scatter should be used and will allow function vectorization; a sketch of a matching gather call follows the examples.
;void foo1(int * restrict A, int * restrict B, int * restrict C) {
; for (int i=0; i<SIZE; i++) {
; A[i] = B[C[i]];
; }
;}

;void foo3(int * restrict A, int * restrict B) {
; for (int i=0; i<SIZE; i++) {
; A[B[i]] = i+5;
; }
;}
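
A minimal sketch of a gather call matching the declaration above (the wrapper function and operand names are hypothetical):

declare <16 x i32> @llvm.masked.gather.v16i32(<16 x i32*>, i32, <16 x i1>, <16 x i32>)

define <16 x i32> @gather_i32(<16 x i32*> %ptrs, <16 x i1> %mask) {
  ; Loads 16 i32 values from the 16 addresses in %ptrs; lanes switched off
  ; by %mask are not accessed and take the undef pass-through value instead.
  %res = call <16 x i32> @llvm.masked.gather.v16i32(<16 x i32*> %ptrs, i32 4, <16 x i1> %mask, <16 x i32> undef)
  ret <16 x i32> %res
}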

Tests will come in the following patches, with CodeGen and Vectorizer.

http://reviews.llvm.org/D7433

llvm-svn: 228521
2015-02-08 08:27:19 +00:00
Philip Reames 56a03938f7 Revert GCStrategy ownership changes
This change reverts the interesting parts of 226311 (and 227046).  This change introduced two problems, and I've been convinced that an alternate approach is preferable anyway.

The bugs were:
- Registry appears to require that all users be within the same linkage unit.  After this change, asking for "statepoint-example" in Transform/ would sometimes get you nullptr, whereas asking the same question in CodeGen would return the right GCStrategy.  The correct long-term fix is to get rid of the utter hack which is Registry, but I don't have time for that right now.  227046 appears to have been an attempt to fix this, but I don't believe it does so completely.
- GCMetadataPrinter::finishAssembly was being called more than once per GCStrategy.  Each Strategy was being added to the GCModuleInfo multiple times.

Once I get time again, I'm going to split GCModuleInfo into the gc.root specific part and a GCStrategy owning Analysis pass.  I'm probably also going to kill off the Registry.  Once that's done, I'll move the new GCStrategyAnalysis and all built in GCStrategies into Analysis.  (As originally suggested by Chandler.)  This will accomplish my original goal of being able to access GCStrategy from Transform/ without adding all of the builtin GCs to IR/.

llvm-svn: 227109
2015-01-26 18:26:35 +00:00
Philip Reames 2b45395876 Move ownership of GCStrategy objects to LLVMContext
Note: This change ended up being slightly more controversial than expected.  Chandler has tentatively okayed this for the moment, but I may be revisiting this in the near future after we settle some high level questions.

Rather than have the GCStrategy object owned by the GCModuleInfo - which is an immutable analysis pass used mainly by gc.root - have it be owned by the LLVMContext. This simplifies the ownership logic (i.e. can you have two instances of the same strategy at once?), but more importantly, allows us to access the GCStrategy in the middle end optimizer. To this end, I add an accessor through Function which becomes the canonical way to get at a GCStrategy instance.

In the near future, this will allow me to move some of the checks from http://reviews.llvm.org/D6808 into the Verifier itself, and to introduce optimization legality predicates for some of the recent additions to InstCombine. (These will follow as separate changes.)

Differential Revision: http://reviews.llvm.org/D6811

llvm-svn: 226311
2015-01-16 20:07:33 +00:00
Philip Reames 8d5d68f1aa getMangledTypeStr: clarify how it mangles types, and add tests
"Write a set of tests that show how name mangling is done for overloaded intrinsics."  These happen to use gc.relocates to exercise the codepath in question, but is not a GC specific test.

Patch by: artagnon@gmail.com
Differential Revision: http://reviews.llvm.org/D6915

llvm-svn: 226056
2015-01-14 23:05:17 +00:00
Elena Demikhovsky fb81b93e17 Masked Load/Store - Changed the order of parameters in intrinsics.
No functional changes.
The documentation is coming.

llvm-svn: 224829
2014-12-25 07:49:20 +00:00
Rafael Espindola 366e5c1bf1 The leak detector is dead, long live asan and valgrind.
In recent times asan and valgrind have found far more memory management bugs
in llvm than the special-purpose leak detector.

llvm-svn: 224703
2014-12-22 13:00:36 +00:00
Elena Demikhovsky f1de34b84d Masked Load / Store Intrinsics - the CodeGen part.
I'm recommitting the codegen part of the patch.
The vectorizer part will be sent for review again.

Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
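
A minimal sketch of a call using the load declaration above, with the operand order exactly as shown there (a later commit in this log, r224829, reorders the parameters); the wrapper function and operand names are hypothetical:

define <16 x i32> @cond_load(i8* %addr, <16 x i32> %passthru, <16 x i1> %mask) {
  ; Lanes switched off by %mask are not loaded; they take the corresponding
  ; element of %passthru instead.
  %res = call <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4, <16 x i1> %mask)
  ret <16 x i32> %res
}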

Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.

http://reviews.llvm.org/D6191

llvm-svn: 223348
2014-12-04 09:40:44 +00:00
Peter Collingbourne 51d2de7b9e Prologue support
Patch by Ben Gamari!

This redefines the `prefix` attribute introduced previously and
introduces a `prologue` attribute.  There are three primary use cases
that these attributes aim to serve,

  1. Function prologue sigils

  2. Function hot-patching: Enable the user to insert `nop` operations
     at the beginning of the function which can later be safely replaced
     with a call to some instrumentation facility

  3. Runtime metadata: Allow a compiler to insert data for use by the
     runtime during execution. GHC is one example of a compiler that
     needs this functionality for its tables-next-to-code functionality.

Previously `prefix` served cases (1) and (2) quite well by allowing the user
to introduce arbitrary data at the entrypoint but before the function
body. Case (3), however, was poorly handled by this approach as it
required that prefix data be valid executable code.

Here we redefine the notion of prefix data to instead be data which
occurs immediately before the function entrypoint (i.e. the symbol
address). Since prefix data now occurs before the function entrypoint,
there is no need for the data to be valid code.

The previous notion of prefix data now goes under the name "prologue
data" to emphasize its duality with the function epilogue.

The intention here is to handle cases (1) and (2) with prologue data and
case (3) with prefix data.
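
A minimal sketch of the resulting syntax (assumed here for illustration; the constants are arbitrary):

; Prefix data sits immediately before the symbol address and need not be
; executable (case 3); prologue data is emitted at the entry point and must
; be valid code - i8 144 is a single x86 nop, as in the hot-patching case (2).
define void @f() prefix i32 123 prologue i8 144 {
  ret void
}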

References
----------

This idea arose out of discussions[1] with Reid Kleckner in response to a
proposal to introduce the notion of symbol offsets to enable handling of
case (3).

[1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-May/073235.html

Test Plan: testsuite

Differential Revision: http://reviews.llvm.org/D6454

llvm-svn: 223189
2014-12-03 02:08:38 +00:00
Duncan P. N. Exon Smith 9bc81fbe92 Revert "Masked Vector Load and Store Intrinsics."
This reverts commit r222632 (and follow-up r222636), which caused a host
of LNT failures on an internal bot.  I'll respond to the commit on the
list with a reproduction of one of the failures.

Conflicts:
	lib/Target/X86/X86TargetTransformInfo.cpp

llvm-svn: 222936
2014-11-28 21:29:14 +00:00
Philip Reames 059ecbfb58 Incorporate review comments from r221742
This change implements the comment and style changes Sean requested during post-commit review of r221742.  Sorry for the delay.

llvm-svn: 222707
2014-11-24 23:24:24 +00:00
Elena Demikhovsky 9e5089a938 Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores.
Added SDNodes for masked operations and lowering patterns for X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)

Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.

http://reviews.llvm.org/D6191

llvm-svn: 222632
2014-11-23 08:07:43 +00:00
Philip Reames 319c48eb2d Extend intrinsic name mangling to support arrays, named structs, and function types.
Currently, we have a type parameter mechanism for intrinsics. Rather than having to specify a separate intrinsic for each combination of argument and return types, we can specify a single intrinsic with one or more type parameters. These type parameters are passed explicitly to Intrinsic::getDeclaration or can be specified implicitly in the naming of the intrinsic function in an LL file.

Today, the types are limited to integer, floating point, and pointer types. With a goal of supporting symbolic targets for patchpoints and statepoints, this change adds support for function types.  The change also includes support for first class aggregate types (named structures and arrays) since these appear in function types we've encountered.  
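
For context, a sketch of the existing overloading scheme this change extends (llvm.ctlz is used purely as a familiar example; it is not mentioned in the patch):

; One type-parameterized intrinsic, two concrete signatures; the suffix in
; the name encodes the overloaded type.
declare i32 @llvm.ctlz.i32(i32, i1)
declare <4 x i32> @llvm.ctlz.v4i32(<4 x i32>, i1)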

Reviewed by: atrick, ributzka
Differential Revision: http://reviews.llvm.org/D4608

llvm-svn: 221742
2014-11-12 00:21:51 +00:00
Rafael Espindola d4bcefc7d9 Don't ever call materializeAllPermanently during LTO.
To do this, change the representation of lazy loaded functions.

The previous representation cannot differentiate between a function whose body
has been removed and one whose body hasn't been read from the .bc file. That
means that in order to drop a function, the entire body had to be read.

llvm-svn: 220580
2014-10-24 18:13:04 +00:00
Rafael Espindola 1b47a28ff9 clang-format two code snippets to make the next patch easy to read.
llvm-svn: 220484
2014-10-23 15:20:05 +00:00
Robert Khasanov 65c2756869 Moved IIT_V64 out of the common values section.
Thanks to Juergen Ributzka for noticing.

llvm-svn: 220224
2014-10-20 19:25:05 +00:00
Steven Wu d05affcc81 Fix Intrinsic::getType not working with vararg
VarArg intrinsic functions are encoded with a "void" type as the last
argument. Now Intrinsic::getType can correctly return the full intrinsic
function type.

llvm-svn: 220205
2014-10-20 15:47:24 +00:00
Robert Khasanov b25e562d14 [AVX512] Added intrinsics for VPCMPEQB and VPCMPEQW.
Added new operand type for intrinsics (IIT_V64)

llvm-svn: 218668
2014-09-30 11:32:22 +00:00
Craig Topper e1d1294853 Simplify creation of a bunch of ArrayRefs by using None, makeArrayRef or just letting them be implicitly created.
llvm-svn: 216525
2014-08-27 05:25:25 +00:00
Juergen Ributzka 384c3b5c03 Provide convenient access to the zext/sext attributes of function arguments. NFC.
llvm-svn: 214843
2014-08-05 05:43:41 +00:00
Hal Finkel b0407ba071 Add a dereferenceable attribute
This attribute indicates that the parameter or return pointer is
dereferenceable. Practically speaking, loads from such a pointer within the
associated byte range are safe to speculatively execute. Such pointer
parameters are common in source languages (C++ references, for example).
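
A minimal illustrative sketch (the function is hypothetical):

; %p is known to point to at least 4 dereferenceable bytes, so an i32 load
; through it is safe to speculatively execute.
define i32 @use(i32* dereferenceable(4) %p) {
  ret i32 0
}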

llvm-svn: 213385
2014-07-18 15:51:28 +00:00
Saleem Abdulrasool 4e63fc498c TableGen: introduce support for MSBuiltin
Add MSBuiltin, which is in a similar vein to GCCBuiltin.  This allows for adding
intrinsics for Microsoft compatibility to individual instructions.  This is
needed to permit the creation of ARM specific MSVC extensions.

This is not currently in use, and requires an associated change in clang to
enable use of the intrinsics defined by this new class.  This merely sets the
LLVM portion of the infrastructure in place to permit the use of this
functionality.  A separate set of changes will enable the new intrinsics.

llvm-svn: 212350
2014-07-04 18:42:25 +00:00
Nick Lewycky d52b1528c0 Add 'nonnull', a new parameter and return attribute which indicates that the pointer is not null. Instcombine will elide comparisons between these and null. Patch by Luqman Aden!
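
A minimal illustrative sketch (the function is hypothetical):

; Callers guarantee %p is not null, so comparisons of %p against null can be folded.
define i8* @passthrough(i8* nonnull %p) {
  ret i8* %p
}
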
llvm-svn: 209185
2014-05-20 01:23:40 +00:00
Rafael Espindola 99e05cf163 Split GlobalValue into GlobalValue and GlobalObject.
This allows code to statically accept a Function or a GlobalVariable, but
not an alias. This is already a cleanup by itself IMHO, but the main
reason for it is that it gives a lot more confidence that the refactoring to fix
the design of GlobalAlias is correct. That will be a followup patch.

llvm-svn: 208716
2014-05-13 18:45:48 +00:00
Reid Kleckner 7941856445 Allow sret on the second parameter as well as the first
MSVC always places the implicit sret parameter after the implicit this
parameter of instance methods.  We used to handle this for
x86_thiscallcc by allocating the sret parameter on the stack and leaving
the this pointer in ecx, but that doesn't handle alternative calling
conventions like cdecl, stdcall, fastcall, or the win64 convention.

Instead, change the verifier to allow sret on the second parameter.
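
A minimal sketch of what the verifier now accepts (type and function names are hypothetical):

%struct.S = type { i32, i32 }
%class.C  = type { i8 }

; MSVC-style instance method: the implicit 'this' pointer is first and the
; implicit sret pointer is second.
declare x86_thiscallcc void @method(%class.C*, %struct.S* sret)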

This also requires changing the Mips and X86 backends to return the
argument with the sret parameter, instead of assuming that the sret
parameter comes first.

The Sparc backend also returns sret parameters in a register, but I
wasn't able to update it to handle secondary sret parameters.  It
currently calls report_fatal_error if you feed it an sret in the second
parameter.

Reviewers: rafael.espindola, majnemer

Differential Revision: http://reviews.llvm.org/D3617

llvm-svn: 208453
2014-05-09 22:32:13 +00:00
Rafael Espindola b2ea33975f Run clang-format in small sections of code to make a patch easier to read.
llvm-svn: 208419
2014-05-09 15:49:02 +00:00
Craig Topper c620761ca5 [C++11] More 'nullptr' conversion or in some cases just using a boolean check instead of comparing to nullptr.
llvm-svn: 205831
2014-04-09 06:08:46 +00:00
Tim Northover 4516de3412 Intrinsics: add LLVMHalfElementsVectorType constraint
This is like the LLVMMatchType, except the verifier checks that the
second argument is a vector with the same base type and half the
number of elements.

This will be used by the ARM64 backend.

llvm-svn: 205079
2014-03-29 07:04:54 +00:00
Tim Northover aa3cf1e691 Intrinsics: expand semantics of LLVMExtendedVectorType (& trunc)
These are used in the ARM backends to aid type-checking on patterns involving
intrinsics, by making sure one argument is an extended/truncated version of
another.

However, there's no reason to limit them to just vector types. For example,
AArch64 has the instruction "uqshrn sD, dN, #imm", which would naturally use an
intrinsic taking an i64 and returning an i32.

llvm-svn: 205003
2014-03-28 12:31:39 +00:00
Evan Cheng ad6efbfa0f Revert r203488 and r203520.
llvm-svn: 203687
2014-03-12 18:09:37 +00:00
Evan Cheng 0e8f4612a9 For functions with an ARM target-specific calling convention, when simplify-libcall
optimizes a call to an llvm intrinsic into something that involves a call to a C
library function, make sure it sets the right calling convention on the call.

e.g.
extern double pow(double, double);
double t(double x) {
  return pow(10, x);
}

Compiles to something like this for AAPCS-VFP:
define arm_aapcs_vfpcc double @t(double %x) #0 {
entry:
  %0 = call double @llvm.pow.f64(double 1.000000e+01, double %x)
  ret double %0
}

declare double @llvm.pow.f64(double, double) #1

Simplify libcall (part of instcombine) will turn the above into:
define arm_aapcs_vfpcc double @t(double %x) #0 {
entry:
  %__exp10 = call double @__exp10(double %x) #1
  ret double %__exp10
}

declare double @__exp10(double)

The pre-instcombine code works because calls to LLVM builtins are special.
Instruction selection will choose the right calling convention for the call.
However, the code after instcombine is wrong. The call to __exp10 will use
the C calling convention.

I can think of 3 options to fix this.

1. Make "C" calling convention just work since the target should know what CC
   is being used.

   This doesn't work because each function can use different CC with the "pcs"
   attribute.

2. Have Clang add the right CC keyword on the calls to LLVM builtin.

   This will work but it doesn't match the LLVM IR specification which states
   these are "Standard C Library Intrinsics".

3. Fix simplify libcall so the resulting calls to the C routines will have the
   proper CC keyword. e.g.
   %__exp10 = call arm_aapcs_vfpcc double @__exp10(double %x) #1

   This works and is the solution I implemented here.

Both solutions #2 and #3 would work. After carefully considering the pros and
cons, I decided to implement #3 for the following reasons.

1. It doesn't change the "spec" of the intrinsics.
2. It's a self-contained fix.

There are a couple of potential downsides.
1. There could be other places in the optimizer that are broken in the same
   way and are not addressed by this.
2. There could be other calling conventions that need to be propagated by
   simplify-libcall and are not handled.

But for now, this is the fix that I'm most comfortable with.

llvm-svn: 203488
2014-03-10 20:49:45 +00:00
Chandler Carruth cdf4788401 [C++11] Add range based accessors for the Use-Def chain of a Value.
This requires a number of steps.
1) Move value_use_iterator into the Value class as an implementation
   detail
2) Change it to actually be a *Use* iterator rather than a *User*
   iterator.
3) Add an adaptor which is a User iterator that always looks through the
   Use to the User.
4) Wrap these in Value::use_iterator and Value::user_iterator typedefs.
5) Add the range adaptors as Value::uses() and Value::users().
6) Update *all* of the callers to correctly distinguish between whether
   they wanted a use_iterator (and to explicitly dig out the User when
   needed), or a user_iterator which makes the Use itself totally
   opaque.

Because #6 requires churning essentially everything that walked the
Use-Def chains, I went ahead and added all of the range adaptors and
switched them to range-based loops where appropriate. Also because the
renaming requires at least churning every line of code, it didn't make
any sense to split these up into multiple commits -- all of which would
touch all of the same lines of code.

The result is still not quite optimal. The Value::use_iterator is a nice
regular iterator, but Value::user_iterator is an iterator over User*s
rather than over the User objects themselves. As a consequence, it fits
a bit awkwardly into the range-based world and it has the weird
extra-dereferencing 'operator->' that so many of our iterators have.
I think this could be fixed by providing something which transforms
a range of T&s into a range of T*s, but that *can* be separated into
another patch, and it isn't yet 100% clear whether this is the right
move.

However, this change gets us most of the benefit and cleans up
a substantial amount of code around Use and User. =]

llvm-svn: 203364
2014-03-09 03:16:01 +00:00
Chandler Carruth 4b6845c7e7 [Modules] Move the LeakDetector header into the IR library where the
source file had already been moved. Also move the unittest into the IR
unittest library.

This may seem an odd thing to put in the IR library but we only really
use this with instructions and it needs the LLVM context to work, so it
is intrinsically tied to the IR library.

llvm-svn: 202842
2014-03-04 12:46:06 +00:00
Chandler Carruth 219b89b987 [Modules] Move CallSite into the IR library where it belongs. It is
abstracting between a CallInst and an InvokeInst, both of which are IR
concepts.

llvm-svn: 202816
2014-03-04 11:01:28 +00:00
Chandler Carruth 8394857f43 [Modules] Move InstIterator out of the Support library, where it had no
business.

This header includes Function and BasicBlock and directly uses the
interfaces of both classes. It has to do with the IR, it even has that
in the name. =] Put it in the library it belongs to.

This is one step toward making LLVM's Support library survive a C++
modules bootstrap.

llvm-svn: 202814
2014-03-04 10:30:26 +00:00
Reid Kleckner 436c42ec3d Add an inalloca flag to allocas
Summary:
The only current use of this flag is to mark the alloca as dynamic, even
if it's in the entry block.  The stack adjustment for the alloca can
never be folded into the prologue because the call may clear it and it
has to be allocated at the top of the stack.
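
A minimal sketch (the exact keyword placement is assumed here; the function is hypothetical):

define void @f() {
entry:
  ; Despite being in the entry block, this alloca is treated as dynamic, so
  ; its stack adjustment cannot be folded into the function prologue.
  %argmem = alloca inalloca <{ i32 }>
  ret void
}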

Reviewers: majnemer

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2571

llvm-svn: 199525
2014-01-17 23:58:17 +00:00
Mark Seaborn 8271118a65 Fix llc to not reuse spill slots in functions that invoke setjmp()
We need to ensure that StackSlotColoring.cpp does not reuse stack
spill slots in functions that call "returns_twice" functions such as
setjmp(); otherwise this can lead to miscompiled code, because a stack
slot would be clobbered when it's still live.

This was already handled correctly for functions that call setjmp()
(though this wasn't covered by a test), but not for functions that
invoke setjmp().

We fix this by changing callsFunctionThatReturnsTwice() to check for
invoke instructions.
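
A minimal sketch of the previously mishandled pattern (the function and personality names are illustrative; the landingpad carries its personality, as it did at the time):

declare i32 @setjmp(i8*) returns_twice
declare i32 @__gxx_personality_v0(...)

define void @f(i8* %buf) {
entry:
  ; Stack spill slots live across this invoke must not be reused, because
  ; setjmp may return a second time and read them.
  %r = invoke i32 @setjmp(i8* %buf)
          to label %cont unwind label %lpad
cont:
  ret void
lpad:
  %lp = landingpad { i8*, i32 }
          personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
          cleanup
  resume { i8*, i32 } %lp
}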

This fixes PR18244.

llvm-svn: 199180
2014-01-14 04:20:01 +00:00
Reid Kleckner a534a38130 Begin adding docs and IR-level support for the inalloca attribute
The inalloca attribute is designed to support passing C++ objects by
value in the Microsoft C++ ABI.  It behaves the same as byval, except
that it always implies that the argument is in memory and that the bytes
are never copied.  This attribute allows the caller to take the address
of an outgoing argument's memory and execute arbitrary code to store
into it.
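
A minimal sketch of the attribute (types and names are hypothetical):

%struct.A = type { i32 }

declare void @takes_a(%struct.A* inalloca)

define void @caller() {
  ; The caller materializes the argument in its own frame, may store into it
  ; directly, and then passes its address; the callee takes ownership of the
  ; memory instead of receiving a copy.
  %arg = alloca %struct.A
  call void @takes_a(%struct.A* inalloca %arg)
  ret void
}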

This patch adds basic IR support, docs, and verification.  It does not
attempt to implement any lowering or fix any possibly broken transforms.

When this patch lands, a complete description of this feature should
appear at http://llvm.org/docs/InAlloca.html .

Differential Revision: http://llvm-reviews.chandlerc.com/D2173

llvm-svn: 197645
2013-12-19 02:14:12 +00:00
Andrew Trick a2efd99bdf Enable variable arguments support for intrinsics.
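
One consumer of this (named here purely for illustration; the commit itself doesn't say which intrinsics use it) is the stackmap intrinsic, whose trailing list of live values is variadic:

declare void @llvm.experimental.stackmap(i64, i32, ...)
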
llvm-svn: 193766
2013-10-31 17:18:11 +00:00
Jiangning Liu 63dc840fc5 Initial support for Neon scalar instructions.
Patch by Ana Pazos.

1. Added support for v1ix and v1fx types.
2. Added Scalar Pairwise Reduce instructions.
3. Added initial implementation of Scalar Arithmetic instructions.

llvm-svn: 191263
2013-09-24 02:47:27 +00:00
Peter Collingbourne 3fa50f9b05 Implement function prefix data as an IR feature.
Previous discussion:
http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-July/063909.html
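
A minimal sketch of the feature as introduced here (the constant is arbitrary); note that a later commit in this log, r223189, renames this notion to "prologue data" and redefines "prefix":

; The i32 constant is emitted immediately before the function body.
define void @f() prefix i32 123 {
  ret void
}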

Differential Revision: http://llvm-reviews.chandlerc.com/D1191

llvm-svn: 190773
2013-09-16 01:08:15 +00:00
Nick Lewycky c2ec0725ce Extend 'readonly' and 'readnone' to work on function arguments as well as
functions. Make the function attributes pass add them to known library functions
and wherever it can deduce them.
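
A minimal sketch of the argument-level attributes (the function is hypothetical):

; The callee promises not to write through %p (readonly) and not to
; dereference %q at all (readnone).
define i32 @probe(i32* readonly %p, i8* readnone %q) {
  ret i32 0
}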

llvm-svn: 185735
2013-07-06 00:29:58 +00:00