Commit Graph

41 Commits

Author SHA1 Message Date
Geoff Berry fcb186ca9d [EarlyCSE] Change C API pass interface for EarlyCSE w/ MemorySSA
Previous change broke the C API for creating an EarlyCSE pass w/
MemorySSA by adding a bool parameter to control whether MemorySSA was
used or not.  This broke the OCaml bindings.  Instead, change the old C
API entry point back and add a new one to request an EarlyCSE pass with
MemorySSA.
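
A minimal C sketch of the resulting interface, assuming the restored entry
point is LLVMAddEarlyCSEPass and the new one is LLVMAddEarlyCSEMemSSAPass
(both declared in llvm-c/Transforms/Scalar.h); the helper below is
hypothetical and only illustrates how a client would pick between them:

#include <llvm-c/Core.h>
#include <llvm-c/Transforms/Scalar.h>

/* Hypothetical helper: request EarlyCSE with or without MemorySSA. */
static void addEarlyCSE(LLVMPassManagerRef FPM, LLVMBool UseMemorySSA) {
  if (UseMemorySSA)
    LLVMAddEarlyCSEMemSSAPass(FPM);  /* EarlyCSE backed by MemorySSA */
  else
    LLVMAddEarlyCSEPass(FPM);        /* original, MemorySSA-free entry point */
}

Because the old signature is unchanged, existing bindings (such as the OCaml
ones mentioned above) can keep calling LLVMAddEarlyCSEPass as before.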

llvm-svn: 280379
2016-09-01 15:07:46 +00:00
Geoff Berry 8d84605f25 [EarlyCSE] Optionally use MemorySSA. NFC.
Summary:
Use MemorySSA, if requested, to do less conservative memory dependency
checking.

This change doesn't enable the MemorySSA-enhanced EarlyCSE in the
default pipelines, so should be NFC.

Reviewers: dberlin, sanjoy, reames, majnemer

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D19821

llvm-svn: 280279
2016-08-31 19:24:10 +00:00
David Majnemer cbf614a93b Remove the ScalarReplAggregates pass
Nearly all the changes to this pass have been done while maintaining and
updating other parts of LLVM.  LLVM has had another pass, SROA, which
has superseded ScalarReplAggregates for quite some time.

Differential Revision: http://reviews.llvm.org/D21316

llvm-svn: 272737
2016-06-15 00:19:09 +00:00
Eric Christopher a6b96004b5 Reorganize the C API headers to improve build times.
Type-specific declarations have been moved to Types.h and error handling
routines have been moved to ErrorHandling.h. Both are included in Core.h
so nothing should change for projects directly including the headers,
but transitive dependencies may be affected.

llvm-svn: 255965
2015-12-18 01:46:52 +00:00
Hal Finkel 2bb61ba2fe [BDCE] Add a bit-tracking DCE pass
BDCE is a bit-tracking dead code elimination pass. It is based on ADCE (the
"aggressive DCE" pass), with the added capability to track dead bits of integer
valued instructions and remove those instructions when all of the bits are
dead.

Currently, it does not actually do this all-bits-dead removal, but rather
replaces the instruction's uses with a constant zero, and lets instcombine (and
the later run of ADCE) do the rest. Because we essentially get a run of ADCE
"for free" while tracking the dead bits, we also do what ADCE does and removes
actually-dead instructions as well (this includes instructions newly trivially
dead because all bits were dead, but not all such instructions can be removed).

The motivation for this is a case like:

int __attribute__((const)) foo(int i);
int bar(int x) {
  x |= (4 & foo(5));
  x |= (8 & foo(3));
  x |= (16 & foo(2));
  x |= (32 & foo(1));
  x |= (64 & foo(0));
  x |= (128 & foo(4));
  return x >> 4;
}

As it turns out, if you order the bit-field insertions so that all of the dead
ones come last, then instcombine will remove them. However, if you pick some
other order (such as the one above), the fact that some of the calls to foo()
are useless is not locally obvious, and we don't remove them (without this
pass).

I did a quick compile-time overhead check using sqlite from the test suite
(Release+Asserts). BDCE took ~0.4% of the compilation time (making it about
twice as expensive as ADCE).

I've not looked at why yet, but we eliminate instructions due to having
all-dead bits in:
External/SPEC/CFP2006/447.dealII/447.dealII
External/SPEC/CINT2006/400.perlbench/400.perlbench
External/SPEC/CINT2006/403.gcc/403.gcc
MultiSource/Applications/ClamAV/clamscan
MultiSource/Benchmarks/7zip/7zip-benchmark

llvm-svn: 229462
2015-02-17 01:36:59 +00:00
Juergen Ributzka 14ae60407d [C API] Make the 'lower switch' pass available via the C API.
llvm-svn: 217630
2014-09-11 21:32:32 +00:00
Hal Finkel d67e463901 Add an AlignmentFromAssumptions Pass
This adds a ScalarEvolution-powered transformation that updates load, store and
memory intrinsic pointer alignments based on invariant((a+q) & b == 0)
expressions. Many of the simple cases we can get with ValueTracking, but we
still need something like this for the more complicated cases (such as those
with an offset) that require some algebra. Note that the optional third argument
of gcc's __builtin_assume_aligned provides exactly this kind of 'misalignment'
offset, for which this kind of logic is necessary.
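
As a hedged illustration (the function and variable names below are invented
for this example), the offset form looks like this in C: the call asserts
that (p - 8) is 32-byte aligned, i.e. ((uintptr_t)p - 8) % 32 == 0:

double sum(const double *p, int n) {
  /* The third argument is the 'misalignment' offset mentioned above:
     p itself is 8 bytes past a 32-byte boundary. */
  const double *q = (const double *) __builtin_assume_aligned(p, 32, 8);
  double s = 0.0;
  for (int i = 0; i < n; ++i)
    s += q[i];  /* vectorized loads of q can have their alignment recomputed */
  return s;
}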

The primary motivation is to fix up alignments for vector loads/stores after
vectorization (and unrolling). This pass is added to the optimization pipeline
just after the SLP vectorizer runs (which, admittedly, does not preserve SE,
although I imagine it could).  Regardless, I actually don't think that the
preservation matters too much in this case: SE computes lazily, and this pass
won't issue any SE queries unless there are any assume intrinsics, so there
should be no real additional cost in the common case (SLP does preserve DT and
LoopInfo).

llvm-svn: 217344
2014-09-07 20:05:11 +00:00
Hal Finkel 9414665a3b Add scoped-noalias metadata
This commit adds scoped noalias metadata. The primary motivations for this
feature are:
  1. To preserve noalias function attribute information when inlining
  2. To provide the ability to model block-scope C99 restrict pointers

Neither of these two abilities is added here, only the necessary
infrastructure. In fact, there should be no change to existing functionality,
only the addition of new features. The logic that converts noalias function
parameters into this metadata during inlining will come in a follow-up commit.

What is added here is the ability to generally specify noalias memory-access
sets. Regarding the metadata, alias-analysis scopes are defined similarly to TBAA
nodes:

!scope0 = metadata !{ metadata !"scope of foo()" }
!scope1 = metadata !{ metadata !"scope 1", metadata !scope0 }
!scope2 = metadata !{ metadata !"scope 2", metadata !scope0 }
!scope3 = metadata !{ metadata !"scope 2.1", metadata !scope2 }
!scope4 = metadata !{ metadata !"scope 2.2", metadata !scope2 }

Loads and stores can be tagged with an alias-analysis scope, and also, with a
noalias tag for a specific scope:

... = load %ptr1, !alias.scope !{ !scope1 }
... = load %ptr2, !alias.scope !{ !scope1, !scope2 }, !noalias !{ !scope1 }

When evaluating an aliasing query, if one of the instructions is associated
with an alias.scope id that is identical to the noalias scope associated with
the other instruction, or is a descendant (in the scope hierarchy) of the
noalias scope associated with the other instruction, then the two memory
accesses are assumed not to alias.

Note that if the first element of the scope metadata is a string, then it can
be combined across functions and translation units. The string can be replaced
by a self-reference to create globally unique scope identifiers.

[Note: This overview is slightly stylized, since the metadata nodes really need
to just be numbers (!0 instead of !scope0), and the scope lists are also global
unnamed metadata.]

Existing noalias metadata in a callee is "cloned" for use by the inlined code.
This is necessary because the aliasing scopes are unique to each call site
(because of possible control dependencies on the aliasing properties). For
example, consider a function: foo(noalias a, noalias b) { *a = *b; } that gets
inlined into bar() { ... if (...) foo(a1, b1); ... if (...) foo(a2, b2); } --
now just because we know that a1 does not alias with b1 at the first call site,
and a2 does not alias with b2 at the second call site, we cannot let inlining
these functions produce metadata implying that a1 does not alias with b2.

llvm-svn: 213864
2014-07-24 14:25:39 +00:00
Gerolf Hoflehner f27ae6cdcf MergedLoadStoreMotion pass
Merges equivalent loads on both sides of a hammock/diamond
and hoists them into the header.
Merges equivalent stores on both sides of a hammock/diamond
and sinks them to the footer.
Can enable if-conversion and better tolerate load misses
and store operand latencies.
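
A small C sketch of the shape being targeted (an assumed example, not taken
from the commit): both arms of the diamond load *p and store to *q, so the
load can be hoisted into the header and the store sunk into the footer:

void diamond(int *p, int *q, int c) {
  if (c)
    *q = *p + 1;  /* load of *p and store to *q on the then side */
  else
    *q = *p - 1;  /* equivalent load and store on the else side  */
}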

llvm-svn: 213396
2014-07-18 19:13:09 +00:00
Richard Smith 559c8623c5 The LLVM C API shouldn't be including a file from the C++ API. Especially not a
file that it doesn't use.

llvm-svn: 205755
2014-04-08 10:47:04 +00:00
Richard Sandiford 8ee1b77de3 Add a Scalarizer pass.
llvm-svn: 195471
2013-11-22 16:58:05 +00:00
Hal Finkel bf45efde2d Add a loop rerolling pass
This adds a loop rerolling pass: the opposite of (partial) loop unrolling. The
transformation aims to take loops like this:

for (int i = 0; i < 3200; i += 5) {
  a[i]     += alpha * b[i];
  a[i + 1] += alpha * b[i + 1];
  a[i + 2] += alpha * b[i + 2];
  a[i + 3] += alpha * b[i + 3];
  a[i + 4] += alpha * b[i + 4];
}

and turn them into this:

for (int i = 0; i < 3200; ++i) {
  a[i] += alpha * b[i];
}

and loops like this:

for (int i = 0; i < 500; ++i) {
  x[3*i] = foo(0);
  x[3*i+1] = foo(0);
  x[3*i+2] = foo(0);
}

and turn them into this:

for (int i = 0; i < 1500; ++i) {
  x[i] = foo(0);
}

There are two motivations for this transformation:

  1. Code-size reduction (especially relevant, obviously, when compiling for
code size).

  2. Providing greater choice to the loop vectorizer (and generic unroller) to
choose the unrolling factor (and a better ability to vectorize). The loop
vectorizer can take vector lengths and register pressure into account when
choosing an unrolling factor, for example, and a pre-unrolled loop limits that
choice. This is especially problematic if the manual unrolling was optimized
for a machine different from the current target.

The current implementation is limited to single basic-block loops only. The
rerolling recognition should work regardless of how the loop iterations are
intermixed within the loop body (subject to dependency and side-effect
constraints), but the significant restriction is that the order of the
instructions in each iteration must be identical. This seems sufficient to
capture all current use cases.

This pass is not currently enabled by default at any optimization level.

llvm-svn: 194939
2013-11-16 23:59:05 +00:00
Richard Sandiford 37cd6cfba2 Turn MipsOptimizeMathLibCalls into a target-independent scalar transform
...so that it can be used for SystemZ too.  Most of the code is the same.
The only real change is to use TargetTransformInfo to test when a sqrt
instruction is available.

The pass is opt-in because at the moment it only handles sqrt.

llvm-svn: 189097
2013-08-23 10:27:02 +00:00
Eric Christopher 04d4e9312c Move C++ code out of the C headers and into either C++ headers
or the C++ files themselves. This enables people to use
just a C compiler to interoperate with LLVM.

llvm-svn: 180063
2013-04-22 22:47:22 +00:00
Benjamin Kramer c86fdf12e8 Rename the C function that creates an SLPVectorizerPass to something sane and expose it in the header file.
llvm-svn: 179272
2013-04-11 11:36:36 +00:00
Evan Cheng 2e254d041e Revert r178713
llvm-svn: 178769
2013-04-04 17:40:53 +00:00
Evan Cheng 51a7a9d712 Make it possible to include llvm-c without including C++ headers. Patch by Filip Pizlo.
llvm-svn: 178713
2013-04-03 23:12:39 +00:00
Nick Lewycky 5f50854186 Use LLVMBool instead of 'bool' in the C API. Based on a patch by Peter Zotov!
llvm-svn: 176793
2013-03-10 21:58:22 +00:00
Jakub Staszak 02a3c9b959 Fix include guards so they exactly match file names.
llvm-svn: 172025
2013-01-10 00:45:19 +00:00
Benjamin Kramer a74129adad Symbol hygiene: Make sure declarations and definitions match, make helper functions static.
llvm-svn: 166376
2012-10-20 12:53:26 +00:00
Gregory Szorc 34c863a031 Organize LLVM C API docs into doxygen modules; add docs
This gives a lot of love to the docs for the C API. Like Clang's
documentation, the C API is now organized into a Doxygen "module"
(LLVMC). Each C header file is a child of the main module. Some modules
(like Core) have a hierarchy of their own. The produced documentation is
thus better organized (before everything was in one monolithic list).

This patch also includes a lot of new documentation for APIs in Core.h.
It doesn't document them all, but is better than none. Function docs are
missing @param and @return annotations, but the documentation body now
commonly provides helpful details (like the llvm::Value sub-type to expect).

llvm-svn: 153157
2012-03-21 03:54:29 +00:00
Hal Finkel 8a3aebe5e0 A few of the changes suggested in code review (by Nick Lewycky)
llvm-svn: 149472
2012-02-01 05:51:45 +00:00
Hal Finkel c34e51132c Add a basic-block autovectorization pass.
This is the initial checkin of the basic-block autovectorization pass along with some supporting vectorization infrastructure.
Special thanks to everyone who helped review this code over the last several months (especially Tobias Grosser).

llvm-svn: 149468
2012-02-01 03:51:43 +00:00
Rafael Espindola 6463cfa36c Add missing file.
llvm-svn: 137162
2011-08-09 22:19:52 +00:00
Bill Wendling 2d3138c112 Remove the LowerSetJmp pass. It wasn't used effectively by any of the targets.
This is some of my original LLVM code. *wipes tear*

llvm-svn: 136821
2011-08-03 22:18:20 +00:00
Rafael Espindola b84dc6bca8 Add LLVMAddAlwaysInlinerPass to the C API.
llvm-svn: 136083
2011-07-26 15:23:23 +00:00
Rafael Espindola be2fe29f9c LLVM 3.0 is here, remove old do nothing method.
llvm-svn: 136082
2011-07-26 15:17:32 +00:00
Rafael Espindola 7281395c8c Add LLVMAddLowerExpectIntrinsicPass to the C API.
llvm-svn: 135966
2011-07-25 20:57:59 +00:00
Chris Lattner b1ed91f397 Land the long talked about "type system rewrite" patch. This
patch brings numerous advantages to LLVM.  One way to look at it
is through diffstat:
 109 files changed, 3005 insertions(+), 5906 deletions(-)

Removing almost 3K lines of code is a good thing.  Other advantages
include:

1. Value::getType() is a simple load that can be CSE'd, not a mutating
   union-find operation.
2. Types are uniqued and never move once created, defining away PATypeHolder.
3. Structs can be "named" now, and their name is part of the identity that
   uniques them.  This means that the compiler doesn't merge them structurally,
   which makes the IR much less confusing (see the sketch after this list).
4. Now that there is no way to get a cycle in a type graph without a named
   struct type, "upreferences" go away.
5. Type refinement is completely gone, which should make LTO much MUCH faster
   in some common cases with C++ code.
6. Types are now generally immutable, so we can use "Type *" instead of
   "const Type *" everywhere.

The downside of this patch is that it removes some functions from the C API,
so people using those will have to upgrade to a (not yet added) new API.
"LLVM 3.0" is the right time to do this.

There are still some cleanups pending after this, but this patch is large enough
as-is.

llvm-svn: 134829
2011-07-09 17:41:24 +00:00
Rafael Espindola 6aafb64daf Add the alias analysis to the C api.
llvm-svn: 129447
2011-04-13 15:44:58 +00:00
Rafael Espindola e4e4e37580 Expose more passes to the C API.
llvm-svn: 129087
2011-04-07 18:20:46 +00:00
Chris Lattner db3bc40ade remove dead prototype, PR8351
llvm-svn: 116209
2010-10-11 17:44:22 +00:00
Wesley Peck a2ca3fa781 Adding IPSCCP and Internalize passes to the C-bindings
llvm-svn: 100893
2010-04-09 20:43:20 +00:00
Nate Begeman 2e41605d4f Whoops, this already existed.
llvm-svn: 98297
2010-03-11 23:21:19 +00:00
Nate Begeman 5daa235c91 Add a handful of additional useful pass manager things to the C API
llvm-svn: 98296
2010-03-11 23:06:07 +00:00
Chris Lattner 852f2653c4 remove the now dead condprop pass, PR3906.
llvm-svn: 86810
2009-11-11 05:56:35 +00:00
Victor Hernandez e297149e26 Auto-upgrade free instructions to calls to the builtin free function.
Update all analysis passes and transforms to treat free calls just like FreeInst.
Remove RaiseAllocations and all its tests since FreeInst no longer needs to be raised.

llvm-svn: 84987
2009-10-24 04:23:03 +00:00
Chris Lattner a59518276e fix header comment and include guard.
llvm-svn: 66273
2009-03-06 16:54:19 +00:00
Chris Lattner e48f897ca7 add a bunch more passes to the C bindings (PR3734), patch by
Lennart Augustsson!

llvm-svn: 66272
2009-03-06 16:52:18 +00:00
Gordon Henriksen b81777a354 C and Objective Caml bindings for mem2reg and reg2mem.
Patch by Erick Tryzelaar.

llvm-svn: 48602
2008-03-20 17:16:03 +00:00
Gordon Henriksen 82a0e74f43 C and Objective Caml bindings for several scalar transforms.
Patch originally by Erick Tryzelaar, but has been modified somewhat.

llvm-svn: 48419
2008-03-16 16:32:40 +00:00