Commit Graph

192171 Commits

Author SHA1 Message Date
Kostya Serebryany 4b96ce96c6 [fuzzer] update the include line to use the new header name
llvm-svn: 228018
2015-02-03 19:42:05 +00:00
Kostya Serebryany cc0c773f76 [sanitizer] move the coverage interface into a separate header, <sanitizer/coverage_interface.h>. NFC, except for the header name change. This may break existing users, but in this case it's better this way (not too many users so far)
llvm-svn: 228017
2015-02-03 19:40:53 +00:00
Jingyue Wu d7966ff3b9 Add straight-line strength reduction to LLVM
Summary:
Straight-line strength reduction (SLSR) is implemented in GCC but not yet in
LLVM. It has proven to effectively simplify statements derived from an unrolled
loop, and can potentially benefit many other cases too. For example,

LLVM unrolls

  #pragma unroll
  for (int i = 0; i < 3; ++i) {
    sum += foo((b + i) * s);
  }

into

  sum += foo(b * s);
  sum += foo((b + 1) * s);
  sum += foo((b + 2) * s);

However, no optimizations yet reduce the internal redundancy of the three
expressions:

  b * s
  (b + 1) * s
  (b + 2) * s

With SLSR, LLVM can optimize these three expressions into:

  t1 = b * s
  t2 = t1 + s
  t3 = t2 + s

This commit is only an initial step towards implementing a series of such
optimizations. I will implement more (see TODO in the file commentary) in the
near future. This optimization is enabled for the NVPTX backend for now.
However, I am more than happy to push it to the standard optimization pipeline
after more thorough performance tests.

Test Plan: test/StraightLineStrengthReduce/slsr.ll

Reviewers: eliben, HaoLiu, meheff, hfinkel, jholewinski, atrick

Reviewed By: jholewinski, atrick

Subscribers: karthikthecool, jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D7310

llvm-svn: 228016
2015-02-03 19:37:06 +00:00
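A hedged source-level sketch of the rewrite described in the commit above; foo and the operand names are placeholders taken from the message, not code from the patch:

  // Stand-in for the call in the message.
  static int foo(int x) { return x; }

  // Before SLSR: three independent multiplies.
  int before(int b, int s) {
    return foo(b * s) + foo((b + 1) * s) + foo((b + 2) * s);
  }

  // After SLSR: one multiply, then adds, using (b + i) * s == b * s + i * s.
  int after(int b, int s) {
    int t1 = b * s;   // b * s
    int t2 = t1 + s;  // (b + 1) * s
    int t3 = t2 + s;  // (b + 2) * s
    return foo(t1) + foo(t2) + foo(t3);
  }
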
Colin LeMahieu e5daf3abfe [Hexagon] Updating XTYPE/PERM intrinsics.
llvm-svn: 228015
2015-02-03 19:36:59 +00:00
Simon Pilgrim 6544f815b3 [X86][AVX2] Enabled shuffle matching for the AVX2 zero extension (128bit -> 256bit) vpmovzx* instructions.
Differential Revision: http://reviews.llvm.org/D7251

llvm-svn: 228014
2015-02-03 19:34:09 +00:00
Ben Langmuir 3b7b540680 Make the default module cache user-specific
Appends the username to the first component (after the temp dir) of the
module cache path.  If the username contains a character that shouldn't
go into a path (for now conservatively allow [a-zA-Z0-9_]), we fall back
to the user id.

llvm-svn: 228013
2015-02-03 19:28:37 +00:00
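A minimal sketch of the fallback rule described above, assuming a helper of this shape; this is not the actual Clang code:

  #include <cctype>
  #include <string>
  #include <unistd.h>

  // Use the username as the cache-path component only if it is limited to
  // [a-zA-Z0-9_]; otherwise fall back to the numeric user id.
  static std::string cacheUserComponent(const std::string &User) {
    for (char C : User)
      if (!std::isalnum(static_cast<unsigned char>(C)) && C != '_')
        return std::to_string(getuid());
    return User;
  }
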
Rafael Espindola a5ef4905a5 Fix duplicated symbol error.
llvm-svn: 228012
2015-02-03 19:25:53 +00:00
Rafael Espindola 8d911b4419 Fix typo in test/CodeGen/X86/sibcall.ll (pr22331).
llvm-svn: 228011
2015-02-03 19:20:26 +00:00
Colin LeMahieu 99cc7c1070 [Hexagon] Adding missing vector multiply instruction encodings. Converting multiply intrinsics and updating tests.
llvm-svn: 228010
2015-02-03 19:15:11 +00:00
Reid Kleckner e675575d1b thread safety: Add move ctor to BeforeInfo to fix MSVC build
MSVC cannot implicitly generate move ctors yet.

llvm-svn: 228009
2015-02-03 19:04:26 +00:00
Sanjay Patel b7d5628784 Merge consecutive 16-byte loads into one 32-byte load (PR22329)
This patch detects consecutive vector loads using the existing 
EltsFromConsecutiveLoads() logic. This fixes:
http://llvm.org/bugs/show_bug.cgi?id=22329

This patch effectively reverts the tablegen additions of D6492 / 
http://reviews.llvm.org/rL224344 ...which in hindsight were a horrible hack.

The test cases that were added with that patch are simply modified to load
from varying offsets of a base pointer. These loads did not match the existing
tablegen patterns.

A happy side effect of doing this optimization earlier is that we can now fold
the load into a math op where possible; this is shown in some of the updated
checks in the test file.

Differential Revision: http://reviews.llvm.org/D7303

llvm-svn: 228006
2015-02-03 18:54:00 +00:00
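An illustration of the kind of pattern this combine targets (not the test case from the patch); compiled for an AVX2 target, the two 16-byte loads can become a single 32-byte load:

  #include <immintrin.h>

  // Two consecutive 16-byte loads glued into one 32-byte vector. With the
  // combine above, the backend can emit a single 32-byte load (one vmovups)
  // instead of two 16-byte loads plus an insert.
  __m256 load32(const float *p) {
    __m128 lo = _mm_loadu_ps(p);      // bytes [0, 16)
    __m128 hi = _mm_loadu_ps(p + 4);  // bytes [16, 32)
    return _mm256_insertf128_ps(_mm256_castps128_ps256(lo), hi, 1);
  }
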
Greg Fitzgerald 2e18729bce Don't assume LIT_EXECUTABLE points to a Python script, take 2
Before this patch, the CMake build assumed LIT_EXECUTABLE pointed
to a Python script, not an executable.  If you were to pass in an
executable, such as the result of py2exe on lit.py, the build would
fall over.

With this patch, the CMake build assumes LIT_EXECUTABLE is an
executable.  You can continue setting it to lit.py, but it will
now use its shebang to find a Python interpreter.

Differential Revision: http://reviews.llvm.org/D7315

llvm-svn: 228005
2015-02-03 18:47:37 +00:00
Sanjay Patel e63abfe70e remove variable names from comments; NFC
I didn't bother to fix the self-referential definitions and grammar
because my eyes started to bleed.

llvm-svn: 228004
2015-02-03 18:47:32 +00:00
Adrian Prantl 39428e74a0 Merge ArtificialLocation into ApplyDebugLocation and make a clear
distinction between the different use-cases. With the previous default
behavior we would occasionally emit empty debug locations in situations
where they actually were strictly required (= on invoke insns).
We now have a choice between defaulting to an empty location or an
artificial location.

Specifically, this fixes a bug caused by a missing debug location when
emitting C++ EH cleanup blocks from within an artificial function, such as
an ObjC destroy helper function.

rdar://problem/19670595

llvm-svn: 228003
2015-02-03 18:40:42 +00:00
Adrian Prantl 6693d0839a Add documentation to ApplyDebugLocation.
llvm-svn: 228002
2015-02-03 18:40:38 +00:00
Alexey Samsonov 9cea2c1035 [ASan] Run tests with both static and dynamic runtime on Windows by default.
llvm-svn: 228001
2015-02-03 18:40:34 +00:00
Manman Ren 8121e1db91 [LTO API] split lto_codegen_compile to lto_codegen_optimize and
lto_codegen_compile_optimized. Also add lto_api_version.

Before this commit, we could only dump the optimized bitcode after running
lto_codegen_compile, but that bitcode already reflects the effects of codegen
passes; one example is the StackProtector pass. We would get an assertion
failure when running llc on the optimized bitcode, because StackProtector is
effectively run twice.

After splitting lto_codegen_compile, the linker can choose to dump the bitcode
before running lto_codegen_compile_optimized.

lto_api_version is added so ld64 can check for runtime-availability of the new
API.

rdar://19565500

llvm-svn: 228000
2015-02-03 18:39:15 +00:00
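A hedged sketch of how a linker might use the split API (error handling and symbol preservation omitted; the dump path is made up for illustration):

  #include <llvm-c/lto.h>
  #include <cstdio>

  // New flow: run IR-level optimizations, dump the bitcode before any
  // codegen passes have touched it, then run codegen.
  static const void *compileWithDump(lto_code_gen_t cg, size_t *len) {
    std::printf("libLTO API version: %u\n", lto_api_version());
    lto_codegen_optimize(cg);                                     // IR passes only
    lto_codegen_write_merged_modules(cg, "/tmp/pre-codegen.bc");  // hypothetical dump path
    return lto_codegen_compile_optimized(cg, len);                // codegen passes
  }
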
Hans Wennborg 9148e1721c Fix ProgramFiles path for 64-bit Windows installer
If we are building a 64-bit installer on Windows, we have to adjust the
Program Files path; otherwise it uses the wrong Program Files (x86)
directory. Related CMake bug report:
http://public.kitware.com/Bug/view.php?id=14211

Patch by Ismail Dönmez!

llvm-svn: 227999
2015-02-03 18:31:29 +00:00
Zachary Turner 02624e4154 Fix compilation failure on Windows.
llvm-svn: 227998
2015-02-03 18:26:00 +00:00
DeLesley Hutchins 4980df623f Thread Safety Analysis: add support for before/after annotations on mutexes.
These checks detect potential deadlocks caused by inconsistent lock
ordering.  The checks are implemented under the -Wthread-safety-beta flag.

llvm-svn: 227997
2015-02-03 18:17:48 +00:00
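A hedged example of the new checks, using the documented Clang thread-safety attributes (the exact diagnostic wording is an assumption); build with -Wthread-safety -Wthread-safety-beta:

  struct __attribute__((capability("mutex"))) Mutex {
    void Lock() __attribute__((acquire_capability()));
    void Unlock() __attribute__((release_capability()));
  };

  Mutex mu1;
  Mutex mu2 __attribute__((acquired_after(mu1)));  // declared order: mu1 before mu2

  void inconsistent() {
    mu2.Lock();
    mu1.Lock();  // expected beta warning: mu1 should be acquired before mu2
    mu1.Unlock();
    mu2.Unlock();
  }
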
Greg Fitzgerald 57775cd66f Revert "Don't assume LIT_EXECUTABLE points to a Python script"
This reverts r227994

llvm-svn: 227996
2015-02-03 18:16:47 +00:00
Colin LeMahieu a6632452be [Hexagon] Converting complex number intrinsics and adding tests.
llvm-svn: 227995
2015-02-03 18:16:28 +00:00
Greg Fitzgerald d9ecf1ae7c Don't assume LIT_EXECUTABLE points to a Python script
Before this patch, the CMake build assumed LIT_EXECUTABLE pointed
to a Python script, not an executable.  If you were to pass in an
executable, such as the result of py2exe on lit.py, the build would
fall over.

With this patch, the CMake build assumes LIT_EXECUTABLE is an
executable.  You can continue setting it to lit.py, but it will
now use its shebang to find a Python interpreter.

Differential Revision: http://reviews.llvm.org/D7315

llvm-svn: 227994
2015-02-03 18:02:04 +00:00
Colin LeMahieu cdba4e1bcc [Hexagon] Adding vector intrinsics for alu32/alu and xtype/alu.
llvm-svn: 227993
2015-02-03 18:01:45 +00:00
Adam Nemet b60295a525 [LoopVectorize] Fix rebase glitch in r227751
LoopVectorizationLegality::{getNumLoads,getNumStores} should forward to
LoopAccessAnalysis now.

Thanks to Takumi for noticing this!

llvm-svn: 227992
2015-02-03 17:59:53 +00:00
Jingyue Wu 5bbcdaa8d9 Remove usernames from TODOs, NFC
making the style consistent with the rest

llvm-svn: 227991
2015-02-03 17:57:38 +00:00
Marek Olsak 191507e0b7 R600/SI: Don't generate non-existent LSHL, LSHR, ASHR B32 variants on VI
This can happen when a REV instruction is commuted.

The trick is not to define the _vi versions of instructions, which has these
consequences:
- code generation will always fail if a pseudo cannot be lowered
  (very useful to catch bugs where an unsupported instruction somehow makes
   it to the printer)
- ability to query if a pseudo can be lowered, which is done in commuteOpcode
  to prevent REV from commuting to non-REV on VI

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227990
2015-02-03 17:38:12 +00:00
Marek Olsak 7585a29bd4 R600/SI: Remove VOP2_REV definitions from target-specific instructions
The getCommute* functions are only used with pseudos, so this commit doesn't
change anything.

The issue with missing non-rev versions of shift instructions on VI will be fixed
separately.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227989
2015-02-03 17:38:05 +00:00
Marek Olsak 11057ee022 R600/SI: Trivial instruction definition corrections for VI (v2)
- V_MAC_LEGACY_F32 exists on VI, but it's VOP3-only.

- Define CVT_PK opcodes which are different between SI and VI. These are
  unused. The idea is to define all chip differences.

v2: keep V_MUL_LO_U32

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227988
2015-02-03 17:38:01 +00:00
Marek Olsak 3db6ba8cfa R600/SI: Determine target-specific encoding of READLANE and WRITELANE early v2
These are VOP2 on SI and VOP3 on VI, and their pseudos are neither, which can
be a problem. In order to make isVOP2 and isVOP3 queries behave as expected,
the encoding must be determined first.

This doesn't fix any known issue, but better safe than sorry.

v2: add and use getMCOpcodeFromPseudo

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227987
2015-02-03 17:37:57 +00:00
Marek Olsak 1bd2463548 R600/SI: Fix dependency between instruction writing M0 and S_SENDMSG on VI (v2)
This fixes a hang when using an empty geometry shader.

v2: - don't add s_nop when followed by s_waitcnt
    - cosmetic changes

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227986
2015-02-03 17:37:52 +00:00
Sanjay Patel ffd039bde1 Fix program crashes due to alignment exceptions generated for SSE memop instructions (PR22371).
r224330 introduced a bug by misinterpreting the "FeatureVectorUAMem" bit.
The commit log says that change did not affect anything, but that's not correct.
That change allowed SSE instructions to have unaligned mem operands folded into
math ops, and that's not allowed in the default specification for any SSE variant. 

The bug is exposed when compiling for an AVX-capable CPU that had this feature
flag but without enabling AVX codegen. Another mistake in r224330 was not adding
the feature flag to all AVX CPUs; the AMD chips were excluded.

This is part of the fix for PR22371 ( http://llvm.org/bugs/show_bug.cgi?id=22371 ).

This feature bit is SSE-specific, so I've renamed it to "FeatureSSEUnalignedMem".
Changed the existing test case for the feature bit to reflect the new name and
renamed the test file itself to better reflect the feature.
Added runs to fold-vex.ll to check for the failing codegen.

Note that the feature bit is not set by default on any CPU because it may require a
configuration register setting to enable the enhanced unaligned behavior.

llvm-svn: 227983
2015-02-03 17:13:04 +00:00
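An illustration of what the wrong feature bit allowed (an assumption about the exact reproducer, not a test from the patch): with plain SSE, an unaligned load must stay a separate movups and must not be folded into the math op's memory operand, because addps with a memory operand requires 16-byte alignment.

  #include <xmmintrin.h>

  // For plain SSE targets, this must compile to movups + addps (register
  // operand). Folding the load into "addps (%rdi), %xmm0" would fault at
  // run time whenever p is not 16-byte aligned.
  __m128 addFromMem(const float *p, __m128 x) {
    __m128 v = _mm_loadu_ps(p);  // unaligned load
    return _mm_add_ps(x, v);
  }
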
Nico Weber b14f872269 Implement jump scope SEHmantic analysis.
Thou shalt not jump into SEH blocks. Jumping out of SEH __try and __excepts
is A-ok. Jumping out of __finally blocks is B-ok (msvc doesn't error about it,
but warns that it has undefined behavior).

I've checked that clang's behavior with this patch matches msvc's behavior.
We don't have the warning on jumping out of a __finally yet, see the FIXME
in the test. clang also currently crashes on codegen for a jump out of a
__finally block, see PR22414 comment 7.

I also added a few tests for the interaction of indirect jumps and SEH blocks.
MSVC doesn't support indirect jumps, so there's no way to know if clang behaves
the same way as msvc here.  clang's behavior with this patch does make sense
to me, but maybe it could be argued that it should be more permissive (see
FIXME in the indirect jump tests -- shout if you have an opinion on this).

llvm-svn: 227982
2015-02-03 17:06:08 +00:00
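A hedged sketch of the accepted direction (requires SEH support, e.g. clang-cl or an MSVC target); the rejected direction, a goto from outside into the __try body, is only described in a comment because it would no longer compile:

  void leave_try(int n) {
    __try {
      if (n)
        goto out;  // OK: jumping out of a __try is accepted
    } __finally {
      // cleanup still runs on the way out
    }
  out:
    return;
    // A goto from here into the __try body would now be rejected
    // ("jump into SEH block"), per the analysis added above.
  }
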
Bill Schmidt e2062dbb29 Disable 32-bit tests in tls-pic.ll until they can be repaired
llvm-svn: 227981
2015-02-03 16:57:38 +00:00
Bill Schmidt a5908c74e6 Further revise too-restrictive test CodeGen/PowerPC/tls-pic.ll
llvm-svn: 227980
2015-02-03 16:33:55 +00:00
Rafael Espindola 3e34e6546e Use CLANG_LIBDIR_SUFFIX when looking for the gold plugin.
Patch by İsmail Dönmez!

llvm-svn: 227979
2015-02-03 16:33:53 +00:00
Bill Schmidt c2208acfbe Further revise too-restrictive test CodeGen/PowerPC/tls-pic.ll
llvm-svn: 227978
2015-02-03 16:29:52 +00:00
Bill Schmidt 50f486b447 Revise too-restrictive test CodeGen/PowerPC/tls-pic.ll
llvm-svn: 227977
2015-02-03 16:24:05 +00:00
Bill Schmidt 685aa8b0c5 [PowerPC] Yet another approach to __tls_get_addr
This patch is a third attempt to properly handle the local-dynamic and
global-dynamic TLS models.

In my original implementation, calls to __tls_get_addr were hidden
from view until the asm-printer phase, at which point the underlying
branch-and-link instruction was created with proper relocations.  This
mostly worked well, but I used some repellent techniques to ensure
that the TLS_GET_ADDR nodes at the SD and MI levels correctly received
input from GPR3 and produced output into GPR3.  This proved to work
badly in the presence of multiple TLS variable accesses, with the
copies to and from GPR3 being scheduled incorrectly and generally
creating havoc.

In r221703, I addressed that problem by representing the calls to
__tls_get_addr as true calls during instruction lowering.  This had
the advantage of removing all of the bad hacks and relying on the
existing call machinery to properly glue the copies in place. It
looked like this was going to be the right way to go.

However, as a side effect of the recent discovery of problems with
linker optimizations for TLS, we discovered cases of suboptimal code
generation with this strategy.  The problem comes when tls_get_addr is
called for the same address, and there is a resulting CSE
opportunity.  It turns out that in such cases MachineCSE will common
the addis/addi instructions that set up the input value to
tls_get_addr, but will not common the calls themselves.  MachineCSE
does not have any machinery to common idempotent calls.  This is
perfectly sensible, since presumably this would be done at the IR
level, and introducing calls in the back end isn't commonplace.  In
any case, we end up with two calls to __tls_get_addr when one would
suffice, and that isn't good.

I presumed that the original design would have allowed commoning of
the machine-specific nodes that hid the __tls_get_addr calls, so as
suggested by Ulrich Weigand, I went back to that design and cleaned it
up so that the copies were properly held together by glue
nodes.  However, it turned out that this didn't work either...the
presence of copies to physical registers kept the machine-specific
nodes from being commoned also.

All of which leads to the design presented here.  This is a return to
the original design, except that no attempt is made to introduce
copies to and from GPR3 during instruction lowering.  Virtual registers
are used until prior to register allocation.  At that point, a special
pass is run that identifies the machine-specific nodes that hide the
tls_get_addr calls and introduces the copies to and from GPR3 around
them.  The register allocator then coalesces these copies away.  With
this design, MachineCSE succeeds in commoning tls_get_addr calls where
possible, and we get nice optimal code generation (better than GCC at
the moment, which does not common these calls).

One additional problem must be dealt with:  After introducing the
mentions of the physical register GPR3, the aggressive anti-dependence
breaker sees opportunities to improve scheduling by selecting a
different register instead.  Flags must be used on the instruction
descriptions to tell the anti-dependence breaker to keep its hands in
its pockets.

One thing missing from the original design was recording a definition
of the link register on the GET_TLS_ADDR nodes.  Doing this was found
to be insufficient to force a stack frame to be created, which led to
looping behavior because two different LR values were stored at the
same address.  This appears to have been an oversight in
PPCFrameLowering::determineFrameLayout(), which is repaired here.

Because MustSaveLR() returns true for calls to builtin_return_address,
this changed the expected behavior of
test/CodeGen/PowerPC/retaddr2.ll, which now stacks a frame but
formerly did not.  I've fixed the test case to reflect this.

There are existing TLS tests to catch regressions; the checks in
test/CodeGen/PowerPC/tls-store2.ll proved to be too restrictive in the
face of instruction scheduling with these changes, so I fixed that
up.

I've added a new test case based on the PrettyStackTrace module that
demonstrated the original problem. This checks that we get correct
code generation and that CSE of the calls to __tls_get_addr has taken
place.

llvm-svn: 227976
2015-02-03 16:16:01 +00:00
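A hedged example of the source pattern whose code generation improves: two accesses to the same TLS variable under a PIC TLS model each hide a __tls_get_addr call, and with this change MachineCSE can common them so only one call remains (the variable and function names are made up):

  // Built with -fPIC so the general-dynamic (or local-dynamic) TLS model is used.
  __thread int counter;

  int bump(int a, int b) {
    counter += a;  // address comes from a hidden __tls_get_addr call
    counter += b;  // same address: the second call can now be CSE'd away
    return counter;
  }
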
Rafael Auler ee95c3b5c5 Make ELFLinkingContext own LinkerScript buffers
Currently, no one owns script::Parser buffers, yet ELFLinkingContext gets
updated with StringRef pointers to data inside those buffers. Since the buffer
is locally owned inside GnuLdDriver::evalLinkerScript(), as soon as that
function finishes, all pointers in ELFLinkingContext that come from linker
scripts become invalid. The problem is that we need someone to own the linker
script data structures, and since ELFLinkingContext transports references to
linker script data, we can simply make it own all linker script data as well.

Differential Revision: http://reviews.llvm.org/D7323

llvm-svn: 227975
2015-02-03 16:08:57 +00:00
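A simplified, hedged illustration of the lifetime bug and the fix (the types here are stand-ins, not lld's API): non-owning views into a locally owned buffer dangle once the owner is destroyed, so the context that transports the views now owns the buffers too.

  #include <memory>
  #include <string>
  #include <vector>

  struct LinkingContext {
    // Fix: the context also owns the script buffers its views point into,
    // so the views stay valid after the driver function returns.
    std::vector<std::unique_ptr<std::string>> ownedScripts;
    const char *entrySymbol = nullptr;  // non-owning view set from a script

    void addScript(std::unique_ptr<std::string> buf) {
      entrySymbol = buf->c_str();              // view into the buffer...
      ownedScripts.push_back(std::move(buf));  // ...kept alive by the context
    }
  };
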
Eric Fiselier 0285fc869d Mark <experimental/system_error> as complete
llvm-svn: 227974
2015-02-03 16:04:45 +00:00
Eric Fiselier 7bffc89cb9 [libcxx] Add <experimental/system_error>
Summary:
This patch just adds the variable templates in <experimental/system_error>.

see: https://rawgit.com/cplusplus/fundamentals-ts/v1/fundamentals-ts.html#syserror


Reviewers: jroelofs, danalbert, K-ballo, mclow.lists

Reviewed By: mclow.lists

Subscribers: chandlerc, cfe-commits

Differential Revision: http://reviews.llvm.org/D7353

llvm-svn: 227973
2015-02-03 16:03:24 +00:00
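A small usage sketch of the added variable templates (requires C++14 variable-template support):

  #include <experimental/system_error>
  #include <system_error>

  // std::errc is an error-condition enum, so the TS variable template is true.
  static_assert(std::experimental::is_error_condition_enum_v<std::errc>,
                "errc models an error condition enum");

  int main() {}
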
Sanjay Patel a4276fb294 Improve test to actually check for a folded load.
This test was checking for lack of a "movaps" (an aligned load)
rather than a "movups" (an unaligned load). It also included
a store which complicated the checking.

Add specific CPU runs to prevent subtarget feature flag overrides
from inhibiting this optimization.

llvm-svn: 227972
2015-02-03 15:37:18 +00:00
Jonathan Roelofs 07d9d76a2d Revert: Revert r227804: Use fseek/ftell instead of fseeko/ftello when Newlib is the libc
EricWF has updated the compilers on his buildbots. Hopefully they won't crash now.

llvm-svn: 227971
2015-02-03 15:34:17 +00:00
Tobias Grosser eb29c68df2 Add test case for r227805
llvm-svn: 227970
2015-02-03 15:11:02 +00:00
Bruno Cardoso Lopes 077774b820 [X86][MMX] Improve transfer from mmx to i32
Improve EXTRACT_VECTOR_ELT DAG combine to catch conversion patterns
between x86mmx and i32 with more layers of indirection.

Before:
  movq2dq %mm0, %xmm0
  movd %xmm0, %eax
After:
  movd %mm0, %eax

llvm-svn: 227969
2015-02-03 14:46:49 +00:00
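One way to write the pattern shown above with intrinsics (whether the combine fires on this exact form is an assumption): the value goes mmx -> xmm -> GPR, and the improved combine lets it move directly to the GPR.

  #include <emmintrin.h>

  // Before: movq2dq %mm0, %xmm0 ; movd %xmm0, %eax
  // After:  movd %mm0, %eax
  int low32(__m64 v) {
    return _mm_cvtsi128_si32(_mm_movpi64_epi64(v));
  }
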
Alexander Potapenko 270000c194 [ASan] Remove ifdefs for MAC_OS_X_VERSION_10_6, as ASan assumes OSX >= 10.6
llvm-svn: 227968
2015-02-03 12:47:15 +00:00
Alexander Potapenko f203ba35f1 [ASan] Add __asan_ prefix for "mz_*" allocation/deallocation functions
and make them global so that they're not removed by `strip -x`.

llvm-svn: 227967
2015-02-03 12:38:10 +00:00
Renato Golin af1f7d759f Enabling testing ASAN on AArch64
Also, disabling BuiltinLongJmpTest, as it fails for ARM and PPC as well.

Patch by Christophe Lyon.

llvm-svn: 227966
2015-02-03 11:26:52 +00:00
Renato Golin af213728cc Adding AArch64 support to ASan instrumentation
For the time being, it is still hardcoded to support only the 39 VA bits
variant; I plan to work on supporting the 42 and 48 VA bits variants, but I
don't have access to such hardware at the moment.

Patch by Christophe Lyon.

llvm-svn: 227965
2015-02-03 11:20:45 +00:00
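For reference, a hedged sketch of the 39-bit-VA shadow mapping the port uses (the offset constant is an assumption; check asan_mapping.h for the authoritative value):

  // AArch64, 39-bit virtual addresses: Shadow = (Addr >> 3) + Offset.
  static inline unsigned long MemToShadow(unsigned long Addr) {
    const unsigned long kShadowScale = 3;
    const unsigned long kAArch64_ShadowOffset64 = 1UL << 36;  // assumed value
    return (Addr >> kShadowScale) + kAArch64_ShadowOffset64;
  }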