Commit Graph

48 Commits

Author SHA1 Message Date
Chandler Carruth a004f22a2d [inliner] Fix the early-exit of the inline cost analysis to correctly
model the dense vector instruction bonuses.

Previously, this code really didn't effectively compute the density of
inlined vector instructions and apply the intended inliner bonus. It
would try to compute it repeatedly while analyzing the function and
didn't handle the case where future vector instructions would tip the
scales back towards the bonus.

Instead, speculatively apply all possible bonuses to the threshold
initially. Once we *know* that a certain bonus cannot be applied,
subtract it. This should delay early bailout enough to get much more
consistent results without actually causing us to analyze huge swaths of
code. I expect some (hopefully mild) compile time hit here, and some
swings in performance, but this was definitely the intended behavior of
these bonuses.

This also dramatically simplifies the computation of the bonuses to not
interact with each other in confusing ways. The previous code didn't do
a good job of this and the values for bonuses may be surprising but are
at least now clearly written in the code.

Finally, fix code to be in line with comments and use zero as the
bailout condition.
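
A simplified standalone sketch of the "speculate, then subtract" accounting
described above (the names and constants here are illustrative only, not the
actual InlineCost.cpp code):

// All bonuses are granted up front; once the analysis proves a bonus cannot
// apply, it is subtracted, so the early bailout does not fire prematurely.
struct CostModel {
  int Cost = 0;
  int Threshold;            // starts out inflated by every possible bonus
  int VectorBonus;          // hypothetical bonus value
  bool BonusRevoked = false;

  CostModel(int BaseThreshold, int Bonus)
      : Threshold(BaseThreshold + Bonus), VectorBonus(Bonus) {}

  // Called once we *know* the vector-instruction density is too low.
  void revokeVectorBonus() {
    if (!BonusRevoked) {
      Threshold -= VectorBonus;
      BonusRevoked = true;
    }
  }

  // Zero is the bailout condition, matching the comments in the patch.
  bool shouldBailOut() const { return Threshold - Cost <= 0; }
};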

Patch by Easwaran Raman, with some comment tweaks by me to try and
further clarify what is going on with this code.

http://reviews.llvm.org/D8267

llvm-svn: 238276
2015-05-27 02:49:05 +00:00
Reid Kleckner 223de262b9 [Inliner] Don't inline functions with frameescape calls
Inlining such intrinsics is very difficult, since you need to
simultaneously transform many calls to llvm.framerecover and potentially
duplicate the functions containing them.  Normally this intrinsic isn't
added until EH preparation, which is part of the backend pass pipeline
after inlining.  However, if it were to get fed through the inliner,
this change ensures that it doesn't break the code.
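
As a rough illustration of the guard this implies, here is a hedged sketch
against the LLVM API of that era (the llvm.frameescape intrinsic was later
renamed llvm.localescape, and the real check lives inside the inline cost
analysis rather than a free function):

#include "llvm/IR/Function.h"
#include "llvm/IR/IntrinsicInst.h"

// Returns true if the callee calls llvm.frameescape, in which case the
// inliner should simply refuse to inline it.
static bool callsFrameEscape(const llvm::Function &F) {
  for (const llvm::BasicBlock &BB : F)
    for (const llvm::Instruction &I : BB)
      if (const auto *II = llvm::dyn_cast<llvm::IntrinsicInst>(&I))
        if (II->getIntrinsicID() == llvm::Intrinsic::frameescape)
          return true;
  return false;
}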

llvm-svn: 234937
2015-04-14 20:38:14 +00:00
Akira Hatanaka f99e1913ae [inliner] Don't inline a function if it doesn't have exactly the same
target-cpu and target-features attribute strings as the caller.
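
A hedged sketch of the compatibility test this describes, exact string
equality of the two attributes (illustrative only, not the code from D8984):

#include "llvm/ADT/StringRef.h"
#include "llvm/IR/Function.h"

static bool targetAttrsMatch(const llvm::Function &Caller,
                             const llvm::Function &Callee) {
  auto Get = [](const llvm::Function &F, llvm::StringRef Kind) {
    return F.getFnAttribute(Kind).getValueAsString();
  };
  return Get(Caller, "target-cpu") == Get(Callee, "target-cpu") &&
         Get(Caller, "target-features") == Get(Callee, "target-features");
}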

Differential Revision: http://reviews.llvm.org/D8984

llvm-svn: 234773
2015-04-13 18:43:38 +00:00
Wei Mi 6c428d6ff6 Correctly estimate SROA savings for store operands in inline cost analysis.
When estimating SROA savings, we want to see if an address is derived
off an alloca in the caller. For store instructions, operand 1 is the
address operand, but the current code uses operand 0.  Use
getPointerOperand for loads and stores to fix this.
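
The gist of the fix, sketched against the stable LoadInst/StoreInst API (not
the literal patch): a StoreInst's address is operand 1, so asking each
instruction for its pointer operand avoids hard-coding an operand index.

#include "llvm/IR/Instructions.h"

static llvm::Value *addressOperand(llvm::Instruction *I) {
  if (auto *LI = llvm::dyn_cast<llvm::LoadInst>(I))
    return LI->getPointerOperand();   // operand 0 for loads
  if (auto *SI = llvm::dyn_cast<llvm::StoreInst>(I))
    return SI->getPointerOperand();   // operand 1 for stores, not getOperand(0)
  return nullptr;
}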

Patch by Easwaran Raman.
http://reviews.llvm.org/D8425

llvm-svn: 232827
2015-03-20 18:33:12 +00:00
Mehdi Amini a28d91d81b DataLayout is mandatory, update the API to reflect it with references.
Summary:
Now that the DataLayout is a mandatory part of the module, let's start
cleaning the codebase. This patch is a first attempt at doing that.

This patch is not exactly NFC as for instance some places were passing
a nullptr instead of the DataLayout, possibly just because there was a
default value on the DataLayout argument to many functions in the API.
Even though it is not purely NFC, there is no change in the
validation.

I turned as many pointers to DataLayout into references as I could; this
helped figure out all the places where a nullptr could come up.

I initially had a local version of this patch broken into over 30
independent commits, but some later commits were cleaning up the API and
touching parts of the code modified in the earlier commits, so it seemed
cleaner to land it without the intermediate states.
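
A hypothetical before/after illustrating the shape of the change (the
function name getTypeCost is made up for this example):

#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Type.h"

namespace before {
// A defaulted null pointer made it easy to silently pass no DataLayout.
unsigned getTypeCost(llvm::Type *Ty, const llvm::DataLayout *DL = nullptr);
}

namespace after {
// A reference makes the dependency explicit and impossible to omit.
unsigned getTypeCost(llvm::Type *Ty, const llvm::DataLayout &DL);
}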

Test Plan:

Reviewers: echristo

Subscribers: llvm-commits

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 231740
2015-03-10 02:37:25 +00:00
Mehdi Amini 46a43556db Make DataLayout Non-Optional in the Module
Summary:
DataLayout keeps the string used for its creation.

As a side effect it is no longer needed in the Module.
This is "almost" NFC: the string is no longer
canonicalized, so you can't rely on two "equal" DataLayouts
having the same string returned by getStringRepresentation().

Get rid of DataLayoutPass: the DataLayout is in the Module

The DataLayout is "per-module", let's enforce this by not
duplicating it more than necessary.
One more step toward non-optionality of the DataLayout in the
module.

Make DataLayout Non-Optional in the Module

Module->getDataLayout() will never return nullptr anymore.

Reviewers: echristo

Subscribers: resistor, llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D7992

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 231270
2015-03-04 18:43:29 +00:00
Mehdi Amini 9a9738f6e5 Remove getDataLayout() from Instruction/GlobalValue/BasicBlock/Function
Summary:
This does not conceptually belong here. Instead, provide a getModule()
shortcut that gives access to the DataLayout.
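
In practice the shortcut looks roughly like this (sketched against current
LLVM, where Module::getDataLayout() returns a reference; the accessor shapes
at the time of this commit differed slightly):

#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Module.h"

static const llvm::DataLayout &layoutFor(const llvm::Instruction &I) {
  // Hop from the Instruction to its Module, then ask for the DataLayout.
  return I.getModule()->getDataLayout();
}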

Reviewers: chandlerc, echristo

Reviewed By: echristo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8027

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 231147
2015-03-03 22:01:13 +00:00
Duncan P. N. Exon Smith b3fc83c403 Analysis: Canonicalize access to function attributes, NFC
Canonicalize access to function attributes to use the simpler API.

getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
  => getFnAttribute(Kind)

getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
  => hasFnAttribute(Kind)

llvm-svn: 229192
2015-02-14 00:12:15 +00:00
Bjorn Steinbrink 6f972a13f6 Fix a crash in the assumption cache when inlining indirect function calls
Summary:
Instances of the AssumptionCache are per function, so we can't re-use
the same AssumptionCache instance when recursing in the CallAnalyzer to
analyze a different function. Instead we have to pass the
AssumptionCacheTracker to the CallAnalyzer so it can get the right
AssumptionCache on demand.
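
A hedged sketch of the corrected flow: whenever the CallAnalyzer recurses
into a different function, it asks the tracker for that function's cache
rather than re-using the caller's.

#include "llvm/Analysis/AssumptionCache.h"
#include "llvm/IR/Function.h"

static llvm::AssumptionCache &cacheFor(llvm::AssumptionCacheTracker &ACT,
                                       llvm::Function &F) {
  // One AssumptionCache per Function; the tracker hands out the right one.
  return ACT.getAssumptionCache(F);
}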

Reviewers: hfinkel

Subscribers: llvm-commits, hans

Differential Revision: http://reviews.llvm.org/D7533

llvm-svn: 228957
2015-02-12 21:04:22 +00:00
Michael Zolotukhin 4e8598eee3 [InstSimplify] Add SimplifyFPBinOp function.
It is a variation of SimplifyBinOp, but it takes into account
FastMathFlags.

It is needed in inliner and loop-unroller to accurately predict the
transformation's outcome (previously we dropped the flags and were too
conservative in some cases).

Example:
float foo(float *a, float b) {
 float r;
 if (a[1] * b)
   r = /* a lot of expensive computations */;
 else
   r = 1;
 return r;
}
float boo(float *a) {
 return foo(a, 0.0);
}

Without this patch, we don't inline 'foo' into 'boo'.

llvm-svn: 228432
2015-02-06 20:02:51 +00:00
Cameron Esfahani 17177d1e84 Value soft float calls as more expensive in the inliner.
Summary: When evaluating floating point instructions in the inliner, ask the TTI whether it is an expensive operation.  By default, it's not an expensive operation.  This keeps the default behavior the same as before.  The ARM TTI has been updated to return back TCC_Expensive for targets which don't have hardware floating point.

Reviewers: chandlerc, echristo

Reviewed By: echristo

Subscribers: t.p.northover, aemerson, llvm-commits

Differential Revision: http://reviews.llvm.org/D6936

llvm-svn: 228263
2015-02-05 02:09:33 +00:00
Chandler Carruth fdb9c573f7 [multiversion] Thread a function argument through all the callers of the
getTTI method used to get an actual TTI object.

No functionality changed. This just threads the argument and ensures
code like the inliner can correctly look up the callee's TTI rather than
using a fixed one.

The next change will use this to implement per-function subtarget usage
by TTI. The changes after that should eliminate the need for FTTI as that
will have become the default.

llvm-svn: 227730
2015-02-01 12:01:35 +00:00
Chandler Carruth 705b185f90 [PM] Change the core design of the TTI analysis to use a polymorphic
type erased interface and a single analysis pass rather than an
extremely complex analysis group.

The end result is that the TTI analysis can contain a type erased
implementation that supports the polymorphic TTI interface. We can build
one from a target-specific implementation or from a dummy one in the IR.

I've also factored all of the code into "mix-in"-able base classes,
including CRTP base classes to facilitate calling back up to the most
specialized form when delegating horizontally across the surface. These
aren't as clean as I would like and I'm planning to work on cleaning
some of this up, but I wanted to start by putting it into the right form.
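
A standalone toy showing the two ingredients mentioned above, type erasure
via a Concept/Model pair plus a CRTP mix-in that delegates back up to the
most derived class. It is deliberately tiny and is not the TTI code itself;
every name in it is invented for illustration.

#include <memory>
#include <utility>

struct Concept {                                   // erased interface
  virtual ~Concept() = default;
  virtual unsigned getCallCost(unsigned NumArgs) const = 0;
};

template <typename T> struct Model final : Concept {  // wraps any implementation
  T Impl;
  explicit Model(T I) : Impl(std::move(I)) {}
  unsigned getCallCost(unsigned NumArgs) const override {
    return Impl.getCallCost(NumArgs);
  }
};

template <typename Derived> struct ImplBase {      // CRTP mix-in
  unsigned getCallCost(unsigned NumArgs) const {
    // Delegate back up to the most specialized implementation.
    return static_cast<const Derived *>(this)->getBaseCost() + NumArgs;
  }
};

struct NoTargetImpl : ImplBase<NoTargetImpl> {     // dummy IR-only fallback
  unsigned getBaseCost() const { return 1; }
};

struct ErasedTTI {                                 // value-semantic facade
  template <typename T>
  explicit ErasedTTI(T Impl) : Ptr(new Model<T>(std::move(Impl))) {}
  unsigned getCallCost(unsigned NumArgs) const { return Ptr->getCallCost(NumArgs); }
  std::unique_ptr<Concept> Ptr;
};

// Usage: ErasedTTI TTI(NoTargetImpl{}); TTI.getCallCost(2) yields 3.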

There are a number of reasons for this change, and this particular
design. The first and foremost reason is that an analysis group is
complete overkill, and the chaining delegation strategy was so opaque,
confusing, and high overhead that TTI was suffering greatly for it.
Several of the TTI functions had failed to be implemented in all places
because the chaining-based delegation meant there was no checking for
this. A few other functions were implemented with incorrect delegation.
The message to me was very clear working on this -- the delegation and
analysis group structure was too confusing to be useful here.

The other reason of course is that this is a *much* more natural fit for
the new pass manager. This will lay the ground work for a type-erased
per-function info object that can look up the correct subtarget and even
cache it.

Yet another benefit is that this will significantly simplify the
interaction of the pass managers and the TargetMachine. See the future
work below.

The downside of this change is that it is very, very verbose. I'm going
to work to improve that, but it is somewhat an implementation necessity
in C++ to do type erasure. =/ I discussed this design really extensively
with Eric and Hal prior to going down this path, and afterward showed
them the result. No one was really thrilled with it, but there doesn't
seem to be a substantially better alternative. Using a base class and
virtual method dispatch would make the code much shorter, but as
discussed in the update to the programmer's manual and elsewhere,
a polymorphic interface feels like the more principled approach even if
this is perhaps the least compelling example of it. ;]

Ultimately, there is still a lot more to be done here, but this was the
huge chunk that I couldn't really split things out of because this was
the interface change to TTI. I've tried to minimize all the other parts
of this. The follow up work should include at least:

1) Improving the TargetMachine interface by having it directly return
   a TTI object. Because we have a non-pass object with value semantics
   and an internal type erasure mechanism, we can narrow the interface
   of the TargetMachine to *just* do what we need: build and return
   a TTI object that we can then insert into the pass pipeline.
2) Make the TTI object be fully specialized for a particular function.
   This will include splitting off a minimal form of it which is
   sufficient for the inliner and the old pass manager.
3) Add a new pass manager analysis which produces TTI objects from the
   target machine for each function. This may actually be done as part
   of #2 in order to use the new analysis to implement #2.
4) Work on narrowing the API between TTI and the targets so that it is
   easier to understand and less verbose to type erase.
5) Work on narrowing the API between TTI and its clients so that it is
   easier to understand and less verbose to forward.
6) Try to improve the CRTP-based delegation. I feel like this code is
   just a bit messy and exacerbating the complexity of implementing
   the TTI in each target.

Many thanks to Eric and Hal for their help here. I ended up blocked on
this somewhat more abruptly than I expected, and so I appreciate getting
it sorted out very quickly.

Differential Revision: http://reviews.llvm.org/D7293

llvm-svn: 227669
2015-01-31 03:43:40 +00:00
Chandler Carruth d9903888d9 [cleanup] Re-sort all the #include lines in LLVM using
utils/sort_includes.py.

I clearly haven't done this in a while, so more changed than usual. This
even uncovered a missing include from the InstrProf library that I've
added. No functionality changed here, just mechanical cleanup of the
include order.

llvm-svn: 225974
2015-01-14 11:23:27 +00:00
Chandler Carruth 66b3130cda [PM] Split the AssumptionTracker immutable pass into two separate APIs:
a cache of assumptions for a single function, and an immutable pass that
manages those caches.

The motivation for this change is two fold. Immutable analyses are
really hacks around the current pass manager design and don't exist in
the new design. This is usually OK, but it requires that the core logic
of an immutable pass be reasonably partitioned off from the pass logic.
This change does precisely that. As a consequence it also paves the way
for the *many* utility functions that deal in the assumptions to live in
both pass manager worlds by creating a separate non-pass object with
its own independent API that they all rely on. Now, the only bits of the
system that deal with the actual pass mechanics are those that actually
need to deal with the pass mechanics.

Once this separation is made, several simplifications become pretty
obvious in the assumption cache itself. Rather than using a set and
callback value handles, it can just be a vector of weak value handles.
The callers can easily skip the handles that are null, and eventually we
can wrap all of this up behind a filter iterator.
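
The consumer-side pattern this enables (a hedged sketch, mirroring how the
assumption list is typically walked in LLVM): iterate the weak handles and
skip any entry whose value has already been deleted.

#include "llvm/Analysis/AssumptionCache.h"
#include "llvm/IR/Instructions.h"

static void forEachAssumption(llvm::AssumptionCache &AC) {
  for (auto &VH : AC.assumptions()) {
    if (!VH)
      continue;                        // the value died; the handle is null
    auto *Assume = llvm::cast<llvm::CallInst>(VH);
    (void)Assume;                      // ... use the @llvm.assume call ...
  }
}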

For now, this adds boilerplate to the various passes, but this kind of
boilerplate will end up making it possible to port these passes to the
new pass manager, and so it will end up being factored away pretty reasonably.

llvm-svn: 225131
2015-01-04 12:03:27 +00:00
David Blaikie 70573dcd9f Update SetVector to rely on the underlying set's insert to return a pair<iterator, bool>
This is to be consistent with StringSet and ultimately with the standard
library's associative container insert function.

This led to updating SmallSet::insert to return pair<iterator, bool>,
and then to update SmallPtrSet::insert to return pair<iterator, bool>,
and then to update all the existing users of those functions...
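
The resulting idiom, shown here with the standard-library container the
change aligns these classes with (SmallSet, SmallPtrSet and SetVector now
behave the same way):

#include <set>

void insertExample() {
  std::set<int> Seen;
  auto Res = Seen.insert(42);   // pair<iterator, bool>
  if (Res.second) {
    // 42 was newly inserted; Res.first points at the element either way.
  }
}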

llvm-svn: 222334
2014-11-19 07:49:26 +00:00
Hal Finkel 57f03dda49 Add functions for finding ephemeral values
This adds a set of utility functions for collecting 'ephemeral' values. These
are LLVM IR values that are used only by @llvm.assume intrinsics (directly or
indirectly), and thus will be removed prior to code generation, implying that
they should be considered free for certain purposes (like inlining). The
inliner's cost analysis, and a few other passes, have been updated to account
for ephemeral values using the provided functionality.

This functionality is important for the usability of @llvm.assume, because it
limits the "non-local" side-effects of adding llvm.assume on inlining, loop
unrolling, etc. (these are hints, and do not generate code, so they should not
directly contribute to estimates of execution cost).
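
A toy model of the definition only (not the LLVM implementation, which walks
outward from the @llvm.assume calls with a worklist): a value is ephemeral if
every transitive user is an assume, so removing the assumes would leave the
whole chain dead.

#include <vector>

struct Node {
  bool IsAssume = false;        // models an @llvm.assume call
  std::vector<Node *> Users;    // models the use graph (assumed acyclic here)
};

static bool isEphemeral(const Node *N) {
  if (N->IsAssume)
    return true;                // the assume itself generates no code
  if (N->Users.empty())
    return false;               // plain dead code, not what we mean here
  for (const Node *U : N->Users)
    if (!isEphemeral(U))
      return false;             // some user survives code generation
  return true;                  // only assumes (transitively) use this value
}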

llvm-svn: 217335
2014-09-07 13:49:57 +00:00
Gerolf Hoflehner 734f4c8984 Suppress inlining when the block address is taken
Inlining functions with block addresses can cause many problems and would
require rich infrastructure to support, including escape analysis. At this
point the safest approach to address these problems is to block the
inlining from happening.

Background:
There have been reports on Ruby segmentation faults triggered by inlining
functions with block addresses like

//Ruby code snippet
vm_exec_core() {
    finish_insn_seq_0 = &&INSN_LABEL_finish;
    INSN_LABEL_finish:
      ;
}

This kind of scenario can also happen when LLVM picks a subset of blocks for
inlining, which is the case with the actual code in the Ruby environment.

LLVM suppresses inlining for such functions when there is an indirect branch.
The attached patch does so even when there is no indirect branch.  Note that
user code like above would not make much sense: using the global for jumping
across function boundaries would be illegal.

Why was there a segfault:

In the snippet above, the block with the label is recognized as dead, so it
is eliminated. Instead of a block address, the cloner stores a constant
(sic!) into the global, resulting in the segfault (when the global is used
in a goto).

Why had it worked in the past then:

By luck. In older versions vm_exec_core was also inlined but the label address
used was the block label address in vm_exec_core.  So the global jump ended up
in the original function rather than in the caller which accidentally happened
to work.

Test case ./tools/clang/test/CodeGen/indirect-goto.c will fail as a result
of this commit.

rdar://17245966

llvm-svn: 212077
2014-07-01 00:19:34 +00:00
Peter Collingbourne 68a889757d Check the alwaysinline attribute on the call as well as on the caller.
Differential Revision: http://reviews.llvm.org/D3815

llvm-svn: 209150
2014-05-19 18:25:54 +00:00
Chandler Carruth e01fd5f63a [inliner] Significantly improve the compile time in cases like PR19499
by avoiding inlining massive switches merely because they have no
instructions in them. These switches still show up where we fail to form
lookup tables, and in those cases they are actually going to cause
a very significant code size hit anyways, so inlining them is not the
right call. The right way to fix any performance regressions stemming
from this is to enhance the switch-to-lookup-table logic to fire in more
places.

This makes PR19499 about 5x less bad. It uncovers a second compile time
problem in that test case that is unrelated (surprisingly!).

llvm-svn: 207403
2014-04-28 08:52:44 +00:00
Craig Topper 353eda484c [C++] Use 'nullptr'.
llvm-svn: 207083
2014-04-24 06:44:33 +00:00
Chandler Carruth f1221bd01b [Modules] Fix potential ODR violations by sinking the DEBUG_TYPE
definition below all the header #include lines, lib/Analysis/...
edition.

This one has a bit extra as there were *other* #define's before #include
lines in addition to DEBUG_TYPE. I've sunk all of them as a block.
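
The ordering the cleanup enforces, illustrated (the DEBUG_TYPE string itself
is whatever each file uses; "inline-cost" is just an example here):

#include "llvm/Support/Debug.h"   // ... every #include line comes first ...

#define DEBUG_TYPE "inline-cost"  // ... and the #define is sunk below them,
                                  // so the macro cannot change how any header
                                  // expands and create an ODR hazard.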

llvm-svn: 206843
2014-04-22 02:48:03 +00:00
Nuno Lopes 9ced19abe8 remove some dead code
lib/Analysis/IPA/InlineCost.cpp         |   18 ------------------
 lib/Analysis/RegionPass.cpp             |    1 -
 lib/Analysis/TypeBasedAliasAnalysis.cpp |    1 -
 lib/Transforms/Scalar/LoopUnswitch.cpp  |   21 ---------------------
 lib/Transforms/Utils/LCSSA.cpp          |    2 --
 lib/Transforms/Utils/LoopSimplify.cpp   |    6 ------
 utils/TableGen/AsmWriterEmitter.cpp     |   13 -------------
 utils/TableGen/DFAPacketizerEmitter.cpp |    7 -------
 utils/TableGen/IntrinsicEmitter.cpp     |    2 --
 9 files changed, 71 deletions(-)

llvm-svn: 206506
2014-04-17 22:26:44 +00:00
Gerolf Hoflehner ecebc3730e Reverse 206485.
After some discussion, the preferred semantics of
the always_inline attribute is
to always inline when the compiler can determine
that it is safe to do so.

llvm-svn: 206487
2014-04-17 19:14:06 +00:00
Gerolf Hoflehner 5f6268a40e Inline a function when the always_inline attribute
is set, even when it contains an indirect branch.
The attribute overrules correctness concerns
like the escape of a local block address.

This is for rdar://16501761

llvm-svn: 206429
2014-04-17 00:21:52 +00:00
Eric Christopher beb2cd6b7c Handle VLAs during inline cost computation if they'll be turned
into a constant-size alloca by inlining.

Did a run over the testsuite: no results out of the noise; this fixes
the testcase in the PR.

PR19115.

llvm-svn: 205710
2014-04-07 13:36:21 +00:00
Eli Bendersky 576ef3c667 Consistent use of the noduplicate attribute.
The "noduplicate" attribute of call instructions is sometimes queried directly
and sometimes through the cannotDuplicate() predicate. This patch streamlines
all queries to use the cannotDuplicate() predicate. It also adds this predicate
to InvokeInst, to mirror what CallInst has.

llvm-svn: 204049
2014-03-17 16:19:07 +00:00
Chandler Carruth cdf4788401 [C++11] Add range based accessors for the Use-Def chain of a Value.
This requires a number of steps.
1) Move value_use_iterator into the Value class as an implementation
   detail
2) Change it to actually be a *Use* iterator rather than a *User*
   iterator.
3) Add an adaptor which is a User iterator that always looks through the
   Use to the User.
4) Wrap these in Value::use_iterator and Value::user_iterator typedefs.
5) Add the range adaptors as Value::uses() and Value::users().
6) Update *all* of the callers to correctly distinguish between whether
   they wanted a use_iterator (and to explicitly dig out the User when
   needed), or a user_iterator which makes the Use itself totally
   opaque.

Because #6 requires churning essentially everything that walked the
Use-Def chains, I went ahead and added all of the range adaptors and
switched them to range-based loops where appropriate. Also because the
renaming requires at least churning every line of code, it didn't make
any sense to split these up into multiple commits -- all of which would
touch all of the same lines of code.

The result is still not quite optimal. The Value::use_iterator is a nice
regular iterator, but Value::user_iterator is an iterator over User*s
rather than over the User objects themselves. As a consequence, it fits
a bit awkwardly into the range-based world and it has the weird
extra-dereferencing 'operator->' that so many of our iterators have.
I think this could be fixed by providing something which transforms
a range of T&s into a range of T*s, but that *can* be separated into
another patch, and it isn't yet 100% clear whether this is the right
move.

However, this change gets us most of the benefit and cleans up
a substantial amount of code around Use and User. =]
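
The two range-based idioms this introduces, in a hedged sketch using the
accessors named above:

#include "llvm/IR/Use.h"
#include "llvm/IR/User.h"
#include "llvm/IR/Value.h"

static void walkUses(llvm::Value &V) {
  for (llvm::Use &U : V.uses()) {       // iterate Use edges; dig out the User
    llvm::User *Usr = U.getUser();
    (void)Usr;
  }
  for (llvm::User *Usr : V.users()) {   // iterate Users directly; Use is opaque
    (void)Usr;
  }
}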

llvm-svn: 203364
2014-03-09 03:16:01 +00:00
Chandler Carruth 7da14f1ab9 [Layering] Move InstVisitor.h into the IR library as it is pretty
obviously coupled to the IR.

llvm-svn: 203064
2014-03-06 03:23:41 +00:00
Chandler Carruth 219b89b987 [Modules] Move CallSite into the IR library where it belogs. It is
abstracting between a CallInst and an InvokeInst, both of which are IR
concepts.

llvm-svn: 202816
2014-03-04 11:01:28 +00:00
Chandler Carruth 03eb0de93d [Modules] Move GetElementPtrTypeIterator into the IR library. As its
name might indicate, it is an iterator over the types in an instruction
in the IR.... You see where this is going.

Another step of modularizing the support library.

llvm-svn: 202815
2014-03-04 10:40:04 +00:00
Benjamin Kramer d6f1f84f51 [C++11] Replace llvm::tie with std::tie.
The old implementation is no longer needed in C++11.
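
The replacement is purely mechanical; for example:

#include <tuple>
#include <utility>

void tieExample() {
  int Quot, Rem;
  // previously: llvm::tie(Quot, Rem) = std::make_pair(7 / 2, 7 % 2);
  std::tie(Quot, Rem) = std::make_pair(7 / 2, 7 % 2);
}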

llvm-svn: 202644
2014-03-02 13:30:33 +00:00
Eric Christopher a13839f5ca Remove unnecessary llvm:: qualification.
llvm-svn: 202316
2014-02-26 23:27:16 +00:00
Rafael Espindola 339430f993 Use DataLayout from the module when easily available.
Eventually DataLayoutPass should go away, but for now that is the only easy
way to get a DataLayout in some APIs. This patch only changes the ones that
have easy access to a Module.

One interesting issue with sometimes using DataLayoutPass and sometimes
fetching it from the Module is that we have to make sure they are equivalent.
We can get most of the way there by always constructing the pass with a Module.
In fact, the pass could be changed to point to an external DataLayout instead
of owning one to make this stricter.

Unfortunately, the C API passes a DataLayout, so it has to be up to the caller
to make sure the pass and the module are in sync.

llvm-svn: 202204
2014-02-25 23:25:17 +00:00
Rafael Espindola 935125126c Make DataLayout a plain object, not a pass.
Instead, have a DataLayoutPass that holds one. This will allow parts of LLVM
that don't handle passes to also use DataLayout.

llvm-svn: 202168
2014-02-25 17:30:31 +00:00
Rafael Espindola 37dc9e19f5 Rename many DataLayout variables from TD to DL.
I am really sorry for the noise, but the current state where some parts of the
code use TD (from the old name: TargetData) and other parts use DL makes it
hard to write a patch that changes where those variables come from and how
they are passed along.

llvm-svn: 201827
2014-02-21 00:06:31 +00:00
Rafael Espindola 7c68bebb9c Rename some member variables from TD to DL.
TargetData was renamed DataLayout back in r165242.

llvm-svn: 201581
2014-02-18 15:33:12 +00:00
Chandler Carruth 6b4cc8b66a [inliner] Skip debug intrinsics even earlier in computing the inline
cost so that they don't impact the vector bonus. Fundamentally, counting
unsimplified instructions is just *wrong*; it will continue to introduce
instability as things which do not generate code bizarrely impact
inlining. For example, sufficiently nested inlined functions could turn
off the vector bonus with lifetime markers just like the debug
intrinsics do. =/

This is a short-term tactical fix. Long term, I think we need to remove
the vector bonus entirely. That's a separate patch and discussion
though.
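
The shape of the fix, as a hedged sketch (not the exact patch): test for
debug intrinsics up front so they never reach the instruction accounting.

#include "llvm/IR/IntrinsicInst.h"

static bool skipForInlineCost(const llvm::Instruction &I) {
  // Debug intrinsics generate no code, so they must count for nothing.
  return llvm::isa<llvm::DbgInfoIntrinsic>(I);
}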

The patch to fix this was provided by Dario Domizioli. I've added some
comments about the planned direction and used a heavily pruned form of
debug info intrinsics for the test case. While this debug info doesn't
work or "do" anything useful, it lets us easily test all manner of
interference, and I suspect this will not be the last time we
want to craft a pattern where debug info interferes with the inliner in
a problematic way.

llvm-svn: 200609
2014-02-01 10:38:17 +00:00
Chandler Carruth 394e34f5c2 [inliner] Print out extra stats about the cost, threshold, and vector
bonus in the inline cost analysis.

Split out of a patch by Dario Domizioli to commit separately.

llvm-svn: 200586
2014-01-31 22:32:32 +00:00
Chandler Carruth 37d25de459 [inliner] Fix PR18206 by preventing inlining functions that call setjmp
through an invoke instruction.

The original patch for this was written by Mark Seaborn, but I've
reworked his test case into the existing returns_twice test case and
implemented the fix by the prior refactoring to actually run the cost
analysis over invoke instructions, and then here fixing our detection of
the returns_twice attribute to work for both calls and invokes. We never
noticed because we never saw an invoke. =[

llvm-svn: 197216
2013-12-13 08:00:01 +00:00
Chandler Carruth 0814d2adf0 [inliner] Completely change (and fix) how the inline cost analysis
handles terminator instructions.

The inline cost analysis inherited some pretty rough handling of
terminator insts from the original cost analysis, and then made it much,
much worse by factoring all of the important analyses into a separate
instruction visitor. That instruction visitor never visited the
terminator.

This works fine for things like conditional branches, but for many other
things we simply computed The Wrong Value. First example are
unconditional branches, which should be free but were counted as full
cost. This is most significant for conditional branches where the
condition simplifies and folds during inlining. We paid a 1 instruction
tax on every branch in a straight line specialized path. =[

Oh, we also claimed that the unreachable instruction had cost.

But it gets worse. Let's consider invoke. We never applied the call
penalty. We never accounted for the cost of the arguments. Nope. Worse
still, we didn't handle the *correctness* constraints of not inlining
recursive invokes, or exception throwing returns_twice functions. Oops.
See PR18206. Sadly, PR18206 requires yet another fix, but this
refactoring is at least a huge step in that direction.

llvm-svn: 197215
2013-12-13 07:59:56 +00:00
Chandler Carruth cb5beb347a [cleanup] Remove trailing whitespace before I start changing this file.
llvm-svn: 197149
2013-12-12 11:59:26 +00:00
Paul Robinson dcbe35bad5 The 'optnone' attribute means don't inline anything into this function
(except functions marked always_inline).
Functions with 'optnone' must also have 'noinline' so they don't get
inlined into any other function.

Based on work by Andrea Di Biagio.

llvm-svn: 195046
2013-11-18 21:44:03 +00:00
Evgeniy Stepanov 2ad3698b70 Disable inlining between sanitized and non-sanitized functions.
Inlining between functions with different values of sanitize_* attributes
leads to over- or under-sanitizing, which is always bad.

llvm-svn: 187967
2013-08-08 08:22:39 +00:00
Matt Arsenault 727aa349ad Have InlineCost check constant fcmps
llvm-svn: 186758
2013-07-20 04:09:00 +00:00
Jakub Staszak 7b9e0b9f64 ArrayRef can accept a single element. Simplify the code a little bit; it also
now matches the coding in the other places of the file.

llvm-svn: 176641
2013-03-07 20:01:19 +00:00
Chandler Carruth 0ba8db45c6 Begin fleshing out an interface in TTI for modelling the costs of
generic function calls and intrinsics. This is somewhat overlapping with
an existing intrinsic cost method, but that one seems targeted at
vector intrinsics. I'll merge them or separate their names and use cases
in a separate commit.

This sinks the test of 'callIsSmall' down into TTI where targets can
control it. The whole thing feels very hack-ish to me though. I've left
a FIXME comment about the fundamental design problem this presents. It
isn't yet clear to me what the users of this function *really* care
about. I'll have to do more analysis to figure that out. Putting this
here at least provides it access to proper analysis pass tools and other
such. It also allows us to more cleanly implement the baseline cost
interfaces in TTI.

With this commit, it is now theoretically possible to simplify much of
the inline cost analysis's handling of calls by calling through to this
interface. That conversion will have to happen in subsequent commits as
it requires more extensive restructuring of the inline cost analysis.

The CodeMetrics class is now really only in the business of running over
a block of code and aggregating the metrics on that block of code, with
the actual cost evaluation done entirely in terms of TTI.

llvm-svn: 173148
2013-01-22 11:26:02 +00:00
Chandler Carruth d73bc5fbe2 Sink InlineCost.cpp into IPA -- it is now officially an interprocedural
analysis. How cute that it wasn't previously. ;]

Part of this confusion stems from the flattened header file tree. Thanks
to Benjamin for pointing out the goof on IRC, and we're considering
un-flattening the headers, so speak now if that would bug you.

llvm-svn: 173033
2013-01-21 12:09:41 +00:00