Commit Graph

10073 Commits

Author SHA1 Message Date
Kostya Serebryany bd016bb614 [asan] while generating the description of a global variable, emit the module name in a separate field, thus not duplicating this information in every description. This decreases the binary size (observed up to 3%). https://code.google.com/p/address-sanitizer/issues/detail?id=168. This changes the asan API version. llvm part.
llvm-svn: 177254
2013-03-18 08:05:29 +00:00
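
A hypothetical sketch of the kind of per-global descriptor the r177254 entry above describes (field names and layout are assumptions, not taken from the asan runtime): with the module name in its own field, one string is shared by all globals in a module instead of being baked into every description.

    #include <cstdint>

    // Illustrative only; the real descriptor is defined by the asan runtime and
    // its layout changed with the API version bump mentioned above.
    struct GlobalDescription {
      uintptr_t   beg;                // address of the global
      uintptr_t   size;               // size of the original global
      uintptr_t   size_with_redzone;  // size including the trailing redzone
      const char *name;               // name of the global
      const char *module_name;        // shared per module, no longer duplicated
    };
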
Kostya Serebryany 6b5b58deeb [asan] don't instrument functions with available_externally linkage. This saves a bit of compile time and reduces the number of redundant global strings generated by asan (https://code.google.com/p/address-sanitizer/issues/detail?id=167)
llvm-svn: 177250
2013-03-18 07:33:49 +00:00
Arnold Schwaighofer c63cf3a0ae LoopVectorize: Invert case when we use a vector cmp value to query select cost
We generate a select with a vectorized condition argument when the condition is
NOT loop invariant. Not the other way around.

llvm-svn: 177098
2013-03-14 18:54:36 +00:00
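
A small, assumed example of the situation r177098 above describes: the select condition depends on values loaded inside the loop, so it is not loop invariant; after vectorization both the compare and the select become vector operations, and the select-cost query should therefore be made with a vector compare type.

    // The condition varies per iteration, so the vectorized select is fed by a vector cmp.
    void clamp_nonneg(float *a, const float *b, int n) {
      for (int i = 0; i < n; ++i)
        a[i] = (b[i] > 0.0f) ? b[i] : 0.0f;
    }
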
Shuxin Yang 2eca602f8b Perform factorization as a last resort of unsafe fadd/fsub simplification.
Rules include:
  1) x*y +/- x*z => x*(y +/- z)
    (the order of operands doesn't matter)

  2) y/x +/- z/x => (y +/- z)/x 

 The transformation is disabled if the new add/sub expr "y +/- z" is a
denormal, NaN, or infinity (see the sketch after this entry).

rdar://12911472

llvm-svn: 177088
2013-03-14 18:08:26 +00:00
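
A worked example of the two rules in r177088 above (illustrative C++, relevant only under unsafe/fast math):

    // Rule 1: x*y + x*z  ==>  x*(y + z)  -- one multiply instead of two.
    float factor_add(float x, float y, float z) {
      return x * y + x * z;        // may be rewritten as x * (y + z)
    }

    // Rule 2: y/x + z/x  ==>  (y + z)/x  -- one divide instead of two.
    float factor_div(float x, float y, float z) {
      return y / x + z / x;        // may be rewritten as (y + z) / x
    }

The rewrite is skipped when the intermediate "y +/- z" would be a denormal, NaN, or infinity, as noted in the message.
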
Alexey Samsonov 819eddc3ce [ASan] emit instrumentation for initialization order checking by default
llvm-svn: 177063
2013-03-14 12:38:58 +00:00
Chandler Carruth a1c54bbe34 PR14972: SROA vs. GVN exposed a really bad bug in SROA.
The fundamental problem is that SROA didn't allow for overly wide loads
where the bits past the end of the alloca were masked away and the load
was sufficiently aligned to ensure there is no risk of page fault, or
other trapping behavior. With such widened loads, SROA would delete the
load entirely rather than clamping it to the size of the alloca in order
to allow mem2reg to fire. This was exposed by a test case that neatly
arranged for GVN to run first, widening certain loads, followed by an
inline step, and then SROA which miscompiles the code. However, I see no
reason why this hasn't been plaguing us in other contexts. It seems
deeply broken.

Diagnosing all of the above took all of 10 minutes of debugging. The
really annoying aspect is that fixing this completely breaks the pass.
;] There was an implicit reliance on the fact that no loads or stores
extended past the alloca once we decided to rewrite them in the final
stage of SROA. This was used to encode information about whether the
loads and stores had been split across multiple partitions of the
original alloca. That required threading explicit tracking of whether
a *use* of a partition is split across multiple partitions.

Once that was done, another problem arose: we allowed splitting of
integer loads and stores iff they were loads and stores to the entire
alloca. This is a really arbitrary limitation, and splitting at least
some integer loads and stores is crucial to maximize promotion
opportunities. My first attempt was to start removing the restriction
entirely, but currently that does Very Bad Things by causing *many*
common alloca patterns to be fully decomposed into i8 operations and
lots of or-ing together to produce larger integers on demand. The code
bloat is terrifying. That is still the right end-goal, but substantial
work must be done to either merge partitions or ensure that small i8
values are eagerly merged in some other pass. Sadly, figuring all this
out took essentially all the time and effort here.

So the end result is that we allow splitting only when the load or store
at least covers the alloca. That ensures widened loads and stores don't
hurt SROA, and that we don't rampantly decompose operations more than we
have previously.

All of this was already fairly well tested, and so I've just updated the
tests to cover the wide load behavior. I can add a test that crafts the
pass ordering magic which caused the original PR, but that seems really
brittle and to provide little benefit. The fundamental problem is that
widened loads should Just Work.

llvm-svn: 177055
2013-03-14 11:32:24 +00:00
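
A standalone sketch (names hypothetical) of the final splitting rule described in r177055 above: a load or store is eligible for splitting only if it starts at the beginning of the alloca and at least covers it, so widened accesses neither defeat promotion nor trigger rampant decomposition.

    #include <cstdint>

    // True iff the access begins at offset 0 and spans the whole alloca.
    bool coversWholeAlloca(uint64_t AccessOffset, uint64_t AccessSize,
                           uint64_t AllocaSize) {
      return AccessOffset == 0 && AccessSize >= AllocaSize;
    }
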
Nick Lewycky 307a1d03b5 Remove accidentally committed debug line.
llvm-svn: 177005
2013-03-14 05:19:12 +00:00
Nick Lewycky fdfed3e9c9 Refactor GCOV's six constructor arguments into a struct with a getter that
constructs default arguments. It can now take default arguments from
cl::opt options. Add a new -default-gcov-version=... option, and actually test it!

Sink the reverse-order of the version into GCOVProfiling, hiding it from our
users.

llvm-svn: 177002
2013-03-14 05:13:26 +00:00
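
A rough sketch of the refactoring idea in r177002 above (field names are assumptions, not copied from the patch): the six constructor arguments become one options struct, and a static getter fills in defaults from the cl::opt flags, including the new -default-gcov-version=... string.

    // Illustrative shape only.
    struct GCOVOptions {
      bool EmitNotes;                    // emit .gcno notes
      bool EmitData;                     // emit .gcda counters
      char Version[4];                   // filled from -default-gcov-version by default
      static GCOVOptions getDefault();   // reads defaults from the cl::opt flags
    };
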
Nick Lewycky ad145509eb No functionality change. Rename emitGCNO() to the more sensible
emitProfileNotes(), similar to emitProfileArcs(). Also update its comment.

Also add a comment on Version[4] (there will be another comment in clang later),
and compress lines that exceeded 80 columns.

llvm-svn: 176994
2013-03-13 22:55:42 +00:00
Arnaud A. de Grandmaison 7153305b92 Fix a performance regression when combining to smaller types in icmp (shl %v, C1), C2:
Only combine when the shl's only user is the icmp.

llvm-svn: 176950
2013-03-13 14:40:37 +00:00
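
An assumed example of the pattern behind r176950 above: comparing a shifted value against a constant. When the icmp is the shl's only user, narrowing is a pure win; if the shl has other users, the narrow compare is created in addition to the shift, which is the regression the commit avoids.

    // (v << 16) == 0x12340000u  is equivalent to  (v & 0xFFFFu) == 0x1234u,
    // so the compare can be done on a smaller type when the shl dies with it.
    bool matchesTag(unsigned v) {
      return (v << 16) == 0x12340000u;
    }
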
Dan Gohman 00253592c7 Change the order of the operands in patchAndReplaceAllUsesWith so
that they're more consistent with Value::replaceAllUsesWith.

llvm-svn: 176872
2013-03-12 16:22:56 +00:00
Meador Inge 20255ef24d LibCallSimplifier: optimize speed for short-lived instances
Nadav reported a performance regression due to the work I did to
merge the library call simplifier into instcombine [1].  The issue
is that a new LibCallSimplifier object is being created whenever
InstCombiner::runOnFunction is called.  Every time a LibCallSimplifier
object is used to optimize a call it creates a hash table to map from
a function name to an object that optimizes functions of that name.
For short-lived LibCallSimplifier instances this is quite inefficient,
especially for cases where no calls are actually simplified.

This patch fixes the issue by dropping the hash table and implementing
an explicit lookup function to correlate the function name to the object
that optimizes functions of that name.  This avoids the cost of always
building and destroying the hash table in cases where the LibCallSimplifier
object is short-lived and avoids the cost of building the table when no
simplifications are actually performed.

On a benchmark containing 100,000 calls where none of them are simplified
I noticed a 30% speedup.  On a benchmark containing 100,000 calls where
all of them are simplified I noticed an 8% speedup.

[1] http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130304/167639.html

llvm-svn: 176840
2013-03-12 00:08:29 +00:00
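
A standalone sketch of the change described in r176840 above (names hypothetical): instead of populating a name-to-optimizer hash table every time a simplifier is constructed, resolve the callee name with a direct lookup function, so short-lived instances that never simplify anything pay no setup cost.

    #include <string>

    enum class LibCallOpt { None, StrLen, MemCpy };

    // Direct lookup; no table is built up front.
    LibCallOpt lookupOptimization(const std::string &Name) {
      if (Name == "strlen") return LibCallOpt::StrLen;
      if (Name == "memcpy") return LibCallOpt::MemCpy;
      return LibCallOpt::None;
    }
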
Bill Wendling 9534d8885f Don't remove a landing pad if the invoke requires a table entry.
An invoke may require a table entry; for instance, when the function it calls
is expected to throw.
<rdar://problem/13360379>

llvm-svn: 176827
2013-03-11 20:53:00 +00:00
Nick Lewycky 5f50854186 Use LLVMBool instead of 'bool' in the C API. Based on a patch by Peter Zotov!
llvm-svn: 176793
2013-03-10 21:58:22 +00:00
Hal Finkel f610be9f36 BBVectorize: Fixup debugging statements
After the recent data-structure improvements, a couple of debugging statements
were broken (printing pointer values).

llvm-svn: 176791
2013-03-10 20:57:42 +00:00
Benjamin Kramer 6eda79f69a Remove a source of nondeterminism from the LoopVectorizer.
This made us emit runtime checks in a random order. Hopefully bootstrap
miscompares will go away now.

llvm-svn: 176775
2013-03-09 19:22:40 +00:00
Arnold Schwaighofer 8b3dc09400 LoopVectorizer: Ignore all dbg intrinsics
Ignore all DbgInfoIntrinsic instructions instead of just DbgValueInst.

llvm-svn: 176769
2013-03-09 16:27:27 +00:00
Arnold Schwaighofer 4090b61ac3 LoopVectorizer: Ignore dbg.value instructions
We want vectorization to happen at -g. Ignore calls to the dbg.value intrinsic
and don't transfer them to the vectorized code.

radar://13378964

llvm-svn: 176768
2013-03-09 15:56:34 +00:00
Jakub Staszak 2ef36b633b Simplify code. No functionality change.
llvm-svn: 176765
2013-03-09 11:18:59 +00:00
Nick Lewycky 291df6ec42 Use the correct index variable. This is the meat of what was supposed to be in
r176751. Also, learn a lesson about applying patches by hand/eyeball.

llvm-svn: 176764
2013-03-09 10:13:26 +00:00
Nick Lewycky 03aed11cdb Fix bug introduced in r176616 when making function identifier numbers stable.
Count the subprograms, not the compile units.

llvm-svn: 176751
2013-03-09 02:06:37 +00:00
Nick Lewycky 88f1d0d64e Don't emit the extra checksum into the .gcda file if the user hasn't asked for
it. Fortunately, versions of gcov that predate the extra checksum also ignore
any extra data, so this isn't a problem. There will be a matching commit in
compiler-rt.

llvm-svn: 176745
2013-03-09 01:33:06 +00:00
Benjamin Kramer 37c2d65c5a Insert the reduction start value into the first bypass block to preserve domination.
Fixes PR15344.

llvm-svn: 176701
2013-03-08 16:58:37 +00:00
Jakub Staszak fd56611b49 Keep coding standard.
llvm-svn: 176661
2013-03-07 22:20:06 +00:00
Jakub Staszak db4579d796 Don't create IRBuilder if we can return from the method earlier.
llvm-svn: 176660
2013-03-07 22:10:33 +00:00
Pekka Jaaskelainen 093cf41e86 Fixed a crash when cloning a function into a function with
a different-sized argument list and without attributes on the
arguments.

llvm-svn: 176632
2013-03-07 16:46:43 +00:00
Nick Lewycky 492afe8127 Switch from a version 4.2/4.4 switch to a four-byte version string to be put
into the actual gcov file.

Instead of using the bottom 4 bytes as the function identifier, use a counter.
This makes the identifier numbers stable across multiple runs.

llvm-svn: 176616
2013-03-07 08:28:49 +00:00
Andrew Trick a0a5ca06b9 SimplifyCFG fix for volatile load/store.
Fixes rdar://13349374.

Volatile loads and stores need to be preserved even if the language
standard says they are undefined. "volatile" in this context means "get
out of the way compiler, let my platform handle it".

Additionally, this is the only way I know of with llvm to write to the
first page (when hardware allows) without dropping to assembly.

llvm-svn: 176599
2013-03-07 01:03:35 +00:00
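
An illustrative example of the second use case in r176599 above: a volatile store through a first-page address. The language calls it undefined, but on platforms that map page zero the store must survive SimplifyCFG rather than being deleted or folded into a trap.

    #include <cstdint>

    void poke_page_zero(uint32_t value) {
      volatile uint32_t *p = reinterpret_cast<volatile uint32_t *>(uintptr_t{0});
      *p = value;   // "get out of the way, compiler": the platform handles this access
    }
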
Andrew Trick fcb37243f9 Generalize my previous fix for -print-options.
Always print options that differ from their implicit default, at least
for simple option types.

llvm-svn: 176572
2013-03-06 19:04:56 +00:00
Andrew Trick 946c2b32e6 Give -loop-vectorize an explicit default.
This way, clang -mllvm -print-options shows that the driver is overriding it.

llvm-svn: 176569
2013-03-06 18:22:22 +00:00
Jim Grosbach 95d2eb95c3 InstCombine: Don't shrink allocas when combining with a bitcast.
When considering folding a bitcast of an alloca into the alloca itself,
make sure we don't shrink the amount of memory being allocated, or
things rapidly go sideways.

rdar://13324424

llvm-svn: 176547
2013-03-06 05:44:53 +00:00
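
A standalone sketch (names hypothetical) of the guard described in r176547 above: when folding a bitcast into the alloca it points at, refuse the fold whenever the new element type would allocate less memory than the original.

    #include <cstdint>

    // True only when retyping the alloca does not shrink the allocation.
    bool safeToRetypeAlloca(uint64_t OldAllocSize, uint64_t NewAllocSize) {
      return NewAllocSize >= OldAllocSize;
    }
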
Lang Hames 30be8a30cc Check isDiscardableIfUnused, rather than hasLocalLinkage, when bumping
GlobalValue linkage up to ExternalLinkage in the ExtractGV pass. This
prevents linkonce and linkonce_odr symbols from being DCE'd.

llvm-svn: 176459
2013-03-04 22:40:44 +00:00
Preston Gurd 485296d1e8 Bypass Slow Divides
* Only apply divide bypass optimization when not optimizing for size. 
* Fixed a bug caused by generating the constant for the value 0 with type Int32;
  the dividend's type is now used to generate the constant instead.
* For Atom x86-64, apply the divide bypass to use 16-bit divides instead of
  64-bit divides when the operand values are small enough (see the sketch after
  this entry).
* Added lit tests for the 64-bit divide bypass.

Patch by Tyler Nowicki!

llvm-svn: 176442
2013-03-04 18:13:57 +00:00
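
A sketch of the divide-bypass idea from r176442 above (the 16-bit width matches the message; everything else is illustrative): the generated code checks at run time whether both operands fit in 16 bits and, if so, uses a cheap narrow divide instead of the slow full-width one.

    #include <cstdint>

    uint64_t bypassDiv64(uint64_t a, uint64_t b) {
      if (((a | b) >> 16) == 0)                        // both operands fit in 16 bits
        return uint64_t(uint16_t(a) / uint16_t(b));    // fast narrow divide
      return a / b;                                    // slow full-width divide
    }
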
Nadav Rotem 739e37a0d2 PR14448 - prevent the loop vectorizer from vectorizing the same loop twice.
The LoopVectorizer often runs multiple times on the same function due to inlining.
When this happens the loop vectorizer often vectorizes the same loops multiple times, increasing code size and adding unneeded branches.
With this patch, the vectorizer puts metadata on scalar loops during vectorization and marks them as 'already vectorized' so that it knows to ignore them when it sees them a second time.

PR14448.

llvm-svn: 176399
2013-03-02 01:33:49 +00:00
Peter Collingbourne 1b97a9c82a Modify {Call,Invoke}Inst::addAttribute to take an AttrKind.
llvm-svn: 176397
2013-03-02 01:20:18 +00:00
Benjamin Kramer 12f98fae98 LoopVectorize: Don't hang forever if a PHI only has skipped PHI uses.
Fixes PR15384.

llvm-svn: 176366
2013-03-01 19:07:31 +00:00
Quentin Colombet e684a6d4aa Fix a bug in instcombine for fmul in fast math mode.
The pattern recognized by instcombine looks like:
a = b * c
d = a +/- Cst
or
a = b * c
d = Cst +/- a

When creating the new operands for the fadd or fsub instruction following the related fmul, the first operand was created with the second original operand (M0 was created with C1) and the second with the first (M1 with Opnd0).

The fix consists of creating the new operands with the appropriate original operands, i.e., M0 with Opnd0 and M1 with C1.

llvm-svn: 176300
2013-02-28 21:12:40 +00:00
Evgeniy Stepanov 00062b4498 [msan] Implement sanitize_memory attribute.
Shadow checks are disabled and memory loads always produce fully initialized
values in functions that don't have a sanitize_memory attribute. Value and
argument shadow is propagated as usual.

This change also updates blacklist behaviour to match the above.

llvm-svn: 176247
2013-02-28 11:25:14 +00:00
Evgeniy Stepanov 4c9300e630 Remove unused leftover declarations.
llvm-svn: 176240
2013-02-28 08:42:11 +00:00
Benjamin Kramer dc145816fd LoopVectorize: Vectorize math builtin calls.
This properly asks TargetLibraryInfo if a call is available and if it is, it
can be translated into the corresponding LLVM builtin. We don't vectorize sqrt()
yet because I'm not sure about the semantics for negative numbers. The other
intrinsics should be exact equivalents of the libm functions.

Differential Revision: http://llvm-reviews.chandlerc.com/D465

llvm-svn: 176188
2013-02-27 15:24:19 +00:00
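
An example of a loop this change (r176188 above) makes vectorizable, assuming the target's TargetLibraryInfo reports the call as available: floorf maps to the llvm.floor intrinsic and can be widened; sqrt is deliberately excluded for now, per the message.

    #include <cmath>

    void floor_all(float *out, const float *in, int n) {
      for (int i = 0; i < n; ++i)
        out[i] = std::floor(in[i]);   // becomes a vector floor when legal and profitable
    }
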
Nick Lewycky 6fd43e4071 In GCC 4.7, function names are now forbidden from .gcda files. Support this by
passing a null pointer as the function name into GCDAProfiling, and add another
switch to GCOVProfiling.

llvm-svn: 176173
2013-02-27 06:22:56 +00:00
Nick Lewycky 625f395663 Doh, fix behaviour change introduced in r176168 which is tested in clang,
not llvm.

llvm-svn: 176172
2013-02-27 06:21:30 +00:00
Nadav Rotem 464e807d41 For each function that we optimize we initialize a new list of lib functions, and for each function name we malloc memory. This patch changes the Libcall map to use a BumpPtrAllocator, so we malloc only once. This speeds up instcombine by a few % on a large C++ program.
llvm-svn: 176170
2013-02-27 05:53:43 +00:00
Nick Lewycky 8e94d80aab IRBuilder has grown all sorts of useful utility functions. Make use of them to
clean up this code a tiny bit. No functionality change.

llvm-svn: 176168
2013-02-27 05:46:30 +00:00
Pedro Artigas e40467b589 Enhance integer division emulation support to handle types smaller than 32 bits.
The enhancement is done the trivial way: by extending inputs and truncating outputs,
which is adequate for targets with little or no support for integer arithmetic
on integer types less than 32 bits.

llvm-svn: 176139
2013-02-26 23:33:20 +00:00
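
A sketch of the widen-then-truncate approach from r176139 above (illustrative): an 8-bit unsigned divide is emulated by zero-extending both operands to 32 bits, reusing the existing 32-bit expansion, and truncating the result.

    #include <cstdint>

    uint8_t udiv8(uint8_t a, uint8_t b) {
      uint32_t wa = a, wb = b;                 // extend inputs to the supported width
      return static_cast<uint8_t>(wa / wb);    // divide wide, truncate the output
    }
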
Kostya Serebryany cf880b9443 Unify clang/llvm attributes for asan/tsan/msan (LLVM part)
These are two related changes (one in llvm, one in clang).
LLVM: 
- rename address_safety => sanitize_address (the enum value is the same, so we preserve binary compatibility with old bitcode)
- rename thread_safety => sanitize_thread
- rename no_uninitialized_checks -> sanitize_memory

CLANG: 
- add __attribute__((no_sanitize_address)) as a synonym for __attribute__((no_address_safety_analysis))
- add __attribute__((no_sanitize_thread))
- add __attribute__((no_sanitize_memory))

For S in {address, thread, memory}:
if -fsanitize=S is present and __attribute__((no_sanitize_S)) is not,
set the LLVM attribute sanitize_S.

llvm-svn: 176075
2013-02-26 06:58:09 +00:00
Benjamin Kramer ee40b9a2d4 CVP: If we have a PHI with an incoming select, try to skip the select.
This is a common pattern with dyn_cast and similar constructs; when the
PHI no longer depends on the select, it can often be turned into a simpler
construct or even get hoisted out of the loop.

PR15340.

llvm-svn: 175995
2013-02-24 15:34:43 +00:00
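
A loose, assumed illustration of the PHI-with-incoming-select shape r175995 above targets: a check yields either a pointer or a fallback (a select), and a later merge uses that result; once correlated value propagation knows which way an edge went, the merged value no longer needs the select.

    struct Node { int Kind; };

    Node *pickNonNull(Node *N, Node *Fallback) {
      Node *Candidate = N ? N : Fallback;   // the select
      if (!N)
        return Fallback;                    // on this path the select picked Fallback
      return Candidate;                     // ...and here it picked N, so the select can be skipped
    }
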
Michael Gottesman f4b7761ed7 Fixed a careless mistake.
rdar://13273675.

llvm-svn: 175939
2013-02-23 00:31:32 +00:00
Bill Wendling 09bd1f71ee Implement the NoBuiltin attribute.
The 'nobuiltin' attribute is applied to call sites to indicate that LLVM should
not treat the callee function as a built-in function. I.e., it shouldn't try to
replace that function with different code.

llvm-svn: 175835
2013-02-22 00:12:35 +00:00
Renato Golin cf928cb53f Allow GlobalValues to vectorize with AliasAnalysis
Store the load/store instructions together with the values
and inspect them using Alias Analysis to make sure
they don't alias, since the GEP pointer operand doesn't
take the offset into account.

To avoid adding any extra cost to loads and stores
that don't overlap on global values, AA is *only* run
if all of the previous attempts have failed.

We use the biggest vector register size as the stride for the
vectorization access, as we're being conservative and
the cost model (which calculates the real vectorization
factor) is only run after the legalization phase.

We might re-think this relationship in the future, but
for now, I'd rather be safe than sorry.

llvm-svn: 175818
2013-02-21 22:39:03 +00:00
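
An assumed example of the situation r175818 above handles: loads and stores through GEPs of two distinct globals. The GEP pointer operand alone does not prove the accesses are disjoint, but alias analysis over the recorded load/store instructions can, and it is only consulted after the cheaper checks fail.

    float A[1024];
    float B[1024];

    void scale() {
      for (int i = 0; i < 1024; ++i)
        A[i] = B[i] * 2.0f;   // vectorizable once AA shows A and B do not overlap
    }
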