Commit Graph

102437 Commits

Chad Rosier 9149acb053 [ARM64] Ports the Cortex-A53 Machine Model description from AArch64.
Summary:
This port includes the rudimentary latencies that were provided for
the Cortex-A53 Machine Model in the AArch64 backend. It also changes
the SchedAlias for COPY in the Cyclone model to an explicit
WriteRes mapping to avoid conflicts in other subtargets.

Differential Revision: http://reviews.llvm.org/D3427
Patch by Dave Estes <cestes@codeaurora.org>!

llvm-svn: 206652
2014-04-18 21:22:04 +00:00
Reid Kleckner 0177e18c51 Remove -simplify-libcalls pass from Passes documentation
This pass was removed in r184459.

Also added note that the InstCombine pass does library call
simplification.

Patch slightly modified from one by Daniel Liew
<daniel.liew@imperial.ac.uk>!

llvm-svn: 206650
2014-04-18 21:19:06 +00:00
Yaron Keren d0d38bf91e Expanded test for x86-pc-windows-gnu and x86_64-pc-windows-gnu environments.
llvm-svn: 206649
2014-04-18 21:10:11 +00:00
Chandler Carruth 2174f44f61 [LCG] Fix the bugs that Ben pointed out in code review (and the MSan bot
caught). Sad that we don't have warnings for these things, but bleh, no
idea how to fix that.

llvm-svn: 206646
2014-04-18 20:44:16 +00:00
Justin Bogner 12d6c3b4d7 OnDiskHashTable: Expect the Info type to declare the offset type
This changes the on-disk hash to get the type to use for offsets from
the Info type, so that clients can be more flexible with the size of
table they support.

llvm-svn: 206643
2014-04-18 20:39:46 +00:00
Justin Bogner 8b56488749 OnDiskHashTable: Expect the Info type to declare the hash size
This changes the on-disk hash to get the size of a hash value from the
Info type, so that clients can be more flexible with the types of hash
they use.
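
Together with the offset_type change in the preceding entry, the Info
trait supplied by a client now also picks the integer types used for hash
values and on-disk offsets. A minimal sketch of such a trait (the names
and the trivial hash below are illustrative, not the actual clang/LLVM
client):

#include <cstdint>
#include "llvm/ADT/StringRef.h"

struct ExampleTableInfo {
  using key_type = llvm::StringRef;
  using key_type_ref = llvm::StringRef;
  using data_type = uint32_t;
  using data_type_ref = uint32_t;

  // New: the table asks the Info type for these instead of assuming
  // 32-bit hashes and 32-bit offsets.
  using hash_value_type = uint32_t;
  using offset_type = uint32_t;

  static hash_value_type ComputeHash(key_type_ref Key) {
    // Any stable hash works; a trivial one keeps the sketch self-contained.
    hash_value_type H = 5381;
    for (char C : Key)
      H = H * 33 + static_cast<unsigned char>(C);
    return H;
  }
  // EmitKeyDataLength / EmitKey / EmitData elided for brevity.
};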

llvm-svn: 206642
2014-04-18 20:39:43 +00:00
Alexey Samsonov 84e2423d34 [DWARF parser] Respect address ranges specified in compile unit DIE.
When address ranges for a compile unit are specified in the compile unit DIE
itself, there is no need to collect ranges from children subprogram DIEs.

This change speeds up llvm-symbolizer on Clang-produced binaries with
full debug info. For instance, symbolizing a first address in a 1Gb binary
is now 2x faster (1s vs. 2s).

llvm-svn: 206641
2014-04-18 20:30:27 +00:00
Benjamin Kramer 147644d400 Remove a couple of redundant copies of SmallVector::operator==.
No functionality change.

llvm-svn: 206635
2014-04-18 19:48:03 +00:00
Adam Nemet ee7a3e38c9 [X86] Improve buildFromShuffleMostly for AVX
For a 256-bit BUILD_VECTOR consisting mostly of shuffles of 256-bit vectors,
both the BUILD_VECTOR and its operands may need to be legalized in multiple
steps.  Consider:

(v8f32 (BUILD_VECTOR (extract_vector_elt (v8f32 %vreg0,) Constant<1>),
                     (extract_vector_elt %vreg0, Constant<2>),
                     (extract_vector_elt %vreg0, Constant<3>),
                     (extract_vector_elt %vreg0, Constant<4>),
                     (extract_vector_elt %vreg0, Constant<5>),
                     (extract_vector_elt %vreg0, Constant<6>),
                     (extract_vector_elt %vreg0, Constant<7>),
                     %vreg1))

a. We can't build a 256-bit vector efficiently, so we need to split it into
two 128-bit vectors and combine them with VINSERTX128.

b. Operands like (extract_vector_elt (v8f32 %vreg0), Constant<7>) need to be
split into a VEXTRACTX128 and a further extract_vector_elt from the
resulting 128-bit vector.

c. The extract_vector_elt from b. is lowered into a shuffle to the first
element and a movss.

Depending on the order in which we legalize the BUILD_VECTOR and its
operands[1], buildFromShuffleMostly may be faced with:

(v4f32 (BUILD_VECTOR (extract_vector_elt
                      (vector_shuffle<1,u,u,u> (extract_subvector %vreg0, Constant<4>), undef),
                      Constant<0>),
                     (extract_vector_elt
                      (vector_shuffle<2,u,u,u> (extract_subvector %vreg0, Constant<4>), undef),
                      Constant<0>),
                     (extract_vector_elt
                      (vector_shuffle<3,u,u,u> (extract_subvector %vreg0, Constant<4>), undef),
                      Constant<0>),
                     %vreg1))

In order to figure out the underlying vectors and their identity, we need to
see through the shuffles.

[1] Note that the order in which operations and their operands are legalized is
only guaranteed in the first iteration of LegalizeDAG.

Fixes <rdar://problem/16296956>

llvm-svn: 206634
2014-04-18 19:44:16 +00:00
Benjamin Kramer ee26b621f0 DebugInfo: Remove some initializer lists to make MSVC happy again.
llvm-svn: 206632
2014-04-18 19:01:53 +00:00
David Blaikie 583a31c976 Add range access to MCAssembler's symbol collection.
llvm-svn: 206631
2014-04-18 18:24:25 +00:00
Reid Kleckner d861811e46 Update comment in LLVMBitCodes.h to reflect the actual bitcode record
llvm-svn: 206630
2014-04-18 18:19:18 +00:00
Matt Arsenault 3add036dc7 Fix uint -> size_t conversion warning.
This warning is disabled for the LLVM build,
but external users of the header can still
run into this.

Patch by Ke Bai

llvm-svn: 206629
2014-04-18 18:08:31 +00:00
Duncan P. N. Exon Smith 0842ff36a6 Revert "blockfreq: Rewrite BlockFrequencyInfoImpl" (#2)
This reverts commit r206622 and the MSVC fixup in r206626.

Apparently the remotely failing tests are still failing, despite my
attempt to fix the nondeterminism in r206621.

llvm-svn: 206628
2014-04-18 17:56:08 +00:00
Greg Fitzgerald 986b407f31 Fixed llvm-build when no targets are enabled
llvm-svn: 206627
2014-04-18 17:39:50 +00:00
Duncan P. N. Exon Smith 38fe464df0 Fixing MSVC after r206622?
llvm-svn: 206626
2014-04-18 17:38:01 +00:00
Andrew Trick 1766f93b35 Better comments to explain buffered/unbuffered processor resources.
llvm-svn: 206625
2014-04-18 17:35:08 +00:00
Alexey Samsonov 762343da6c [DWARF parser] Refactor fetching DIE address ranges.
Add a helper method to get address ranges specified in a DIE
(either by DW_AT_low_pc/DW_AT_high_pc, or by DW_AT_ranges). Use it
to untangle and simplify the code.

No functionality change.

llvm-svn: 206624
2014-04-18 17:25:46 +00:00
Duncan P. N. Exon Smith f8361d127a Reapply "blockfreq: Rewrite BlockFrequencyInfoImpl"
This reverts commit r206556, effectively reapplying commit r206548 and
its fixups in r206549 and r206550.

In an intervening commit I've added target triples to the tests that
were failing remotely [1] (but passing locally).  I'm hoping the mystery
is solved?  I'll revert this again if the tests are still failing
remotely.

[1]: http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/1816

llvm-svn: 206622
2014-04-18 17:22:25 +00:00
Duncan P. N. Exon Smith c812b5b33a Add some target triples for better determinism
These tests were failing on some buildbots after r206548 (reverted in
r206556), but passing locally.

They were missing target triples, so maybe that's the problem?

llvm-svn: 206621
2014-04-18 17:22:19 +00:00
Benjamin Kramer 889873d890 LineIterator: Add DataTypes.h for int64_t on MSVC.
llvm-svn: 206617
2014-04-18 16:57:01 +00:00
Benjamin Kramer 753a7ab8c5 Add some missing includes for various standard library implementations.
llvm-svn: 206616
2014-04-18 16:46:29 +00:00
Benjamin Kramer e677b8dab1 Make the copy member of StringRef/ArrayRef generic wrt allocators.
Doesn't make sense to restrict this to BumpPtrAllocator. While there,
replace an explicit loop with std::equal. Some standard libraries know
how to compile this down to a ::memcmp call if possible.
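
A minimal sketch of the shape of the change, written as a free function
rather than the actual member, and assuming only that the allocator
exposes an Allocate<T>(N) template the way BumpPtrAllocator does:

#include <algorithm>
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Allocator.h"

// Copy a string into memory owned by any allocator with Allocate<char>(N).
template <typename AllocatorT>
llvm::StringRef copyString(llvm::StringRef S, AllocatorT &A) {
  char *Buf = A.template Allocate<char>(S.size());
  std::copy(S.begin(), S.end(), Buf);
  return llvm::StringRef(Buf, S.size());
}

// Usage:
//   llvm::BumpPtrAllocator BPA;
//   llvm::StringRef Copy = copyString("hello", BPA);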

llvm-svn: 206615
2014-04-18 16:36:15 +00:00
Tim Northover ff046b64d9 AArch64/ARM64: add more NEON tests.
Mostly no testing this time, since they were just wrangling
target-specific intrinsics.

llvm-svn: 206613
2014-04-18 14:54:53 +00:00
Benjamin Kramer 29af3ed457 Allocator: Remove ReferenceAdder hack.
This was a workaround for compilers that had issues with reference
collapsing.
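
For reference, the C++11 language rule the hack worked around, shown as
a generic illustration (not LLVM code):

#include <type_traits>

// With C++11 reference collapsing, "T&" where T is already "U&" is just
// "U&", so a plain typedef suffices and no ReferenceAdder-style
// indirection is needed.
template <typename T> struct AddLValueRef { typedef T &type; };

static_assert(std::is_same<AddLValueRef<int>::type, int &>::value,
              "int becomes int&");
static_assert(std::is_same<AddLValueRef<int &>::type, int &>::value,
              "int& & collapses to int&");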

llvm-svn: 206612
2014-04-18 14:54:51 +00:00
Tim Northover 37d9a9cebf ARM64: disable generation of .loh directives outside MachO.
Part of PR19455.

llvm-svn: 206611
2014-04-18 14:54:46 +00:00
Tim Northover be1d1b6681 ARM64: don't emit .subsections_via_symbols on ELF.
Part of PR19455.

llvm-svn: 206610
2014-04-18 14:54:41 +00:00
Tim Northover be3941cc79 ARM64: add extra NEG pattern.
llvm-svn: 206609
2014-04-18 14:54:35 +00:00
Tim Northover 0dbdfb8522 AArch64/ARM64: port more AArch64 tests to ARM64.
llvm-svn: 206592
2014-04-18 13:16:55 +00:00
Tim Northover e3028832d1 AArch64/ARM64: add non-scalar lowering for more FCVT operations.
llvm-svn: 206591
2014-04-18 13:16:42 +00:00
Tim Northover 01f315a556 AArch64/ARM64: improve spotting of EXT instructions from VECTOR_SHUFFLE.
We couldn't cope if the first mask element was UNDEF before, which
isn't ideal.

llvm-svn: 206588
2014-04-18 12:50:58 +00:00
Evgeniy Stepanov 65120ec8c6 [msan] Add -msan-instrumentation-with-call-threshold.
This flag replaces inline instrumentation for checks and origin stores with
calls into MSan runtime library. This is a workaround for PR17409.

Disabled by default.
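
A hedged sketch of how such a flag is typically declared with LLVM's
command-line library; only the flag name comes from this commit, while
the variable name, description text, and default value are illustrative:

#include "llvm/Support/CommandLine.h"

static llvm::cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    llvm::cl::desc("If a function needs more than this many checks and "
                   "origin stores, call into the runtime instead of "
                   "emitting inline instrumentation (-1 disables this)."),
    llvm::cl::Hidden, llvm::cl::init(-1));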

llvm-svn: 206585
2014-04-18 12:17:20 +00:00
Chandler Carruth d8d865e266 [LCG] Remove all of the complexity stemming from supporting copying.
Reality is that we're never going to copy one of these. Supporting this
was becoming a nightmare because nothing even causes it to compile most
of the time. Lots of subtle errors built up that wouldn't have been
caught by any "normal" testing.

Also, make the move assignment actually work rather than the bogus swap
implementation that would just infloop if used. As part of that, factor
out the graph pointer updates into a helper to share between move
construction and move assignment.

llvm-svn: 206583
2014-04-18 11:02:33 +00:00
Chandler Carruth 54125a2ba8 [Allocator] Fix an obvious think-o with the move assignment
implementation of the SpecificBumpPtrAllocator -- we have to actually
move the subobject. =] Noticed when using this code more directly.

llvm-svn: 206582
2014-04-18 11:02:29 +00:00
Chandler Carruth 18eadd9260 [LCG] Add support for building persistent and connected SCCs to the
LazyCallGraph. This is the start of the whole point of this different
abstraction, but it is just the initial bits. Here is a run-down of
what's going on here. I'm planning to incorporate some (or all) of this
into comments going forward, hopefully with better editing and wording.
=]

The crux of the problem with the traditional way of building SCCs is
that they are ephemeral. The new pass manager however really needs the
ability to associate analysis passes and results of analysis passes with
SCCs in order to expose these analysis passes to the SCC passes. Making
this work is kind-of the whole point of the new pass manager. =]

So, when we're building SCCs for the call graph, we actually want to
build persistent nodes that stick around and can be reasoned about
later. We'd also like the ability to walk the SCC graph in more complex
ways than just the traditional postorder traversal of the current CGSCC
walk. That means that in addition to being persistent, the SCCs need to
be connected into a useful graph structure.

However, we still want the SCCs to be formed lazily where possible.

These constraints are quite hard to satisfy with the SCC iterator. Also,
using that would bypass our ability to actually add data to the nodes of
the call graph to facilitate implementing the Tarjan walk. So I've
re-implemented things in a more direct and embedded way. This
immediately makes it easy to get the persistence and connectivity
correct, and it also allows leveraging the existing nodes to simplify
the algorithm. I've worked somewhat to make this implementation more
closely follow the traditional paper's nomenclature and strategy,
although it is still a bit obtuse because it isn't recursive, using
an explicit stack and a tail call instead, and it is interruptible,
resuming each time we need another SCC.
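
For readers unfamiliar with the underlying algorithm, a generic,
self-contained sketch of Tarjan's SCC computation in its classic
recursive form follows; the LazyCallGraph version differs as described
above, using an explicit stack instead of recursion and pausing after
each SCC it produces:

#include <algorithm>
#include <vector>

struct Graph {
  std::vector<std::vector<int>> Succs; // Succs[v] = successors of node v
};

struct TarjanState {
  std::vector<int> Index, LowLink;
  std::vector<bool> OnStack;
  std::vector<int> Stack;
  std::vector<std::vector<int>> SCCs;
  int NextIndex = 0;
};

static void strongConnect(const Graph &G, int V, TarjanState &S) {
  S.Index[V] = S.LowLink[V] = S.NextIndex++;
  S.Stack.push_back(V);
  S.OnStack[V] = true;

  for (int W : G.Succs[V]) {
    if (S.Index[W] < 0) {
      strongConnect(G, W, S);
      S.LowLink[V] = std::min(S.LowLink[V], S.LowLink[W]);
    } else if (S.OnStack[W]) {
      S.LowLink[V] = std::min(S.LowLink[V], S.Index[W]);
    }
  }

  if (S.LowLink[V] == S.Index[V]) {
    // V roots an SCC: pop its members off the stack.  SCCs come out in
    // postorder, i.e. callee ("leaf") SCCs are emitted before their callers.
    std::vector<int> SCC;
    int W;
    do {
      W = S.Stack.back();
      S.Stack.pop_back();
      S.OnStack[W] = false;
      SCC.push_back(W);
    } while (W != V);
    S.SCCs.push_back(std::move(SCC));
  }
}

std::vector<std::vector<int>> findSCCs(const Graph &G) {
  TarjanState S;
  int N = static_cast<int>(G.Succs.size());
  S.Index.assign(N, -1);
  S.LowLink.assign(N, -1);
  S.OnStack.assign(N, false);
  for (int V = 0; V < N; ++V)
    if (S.Index[V] < 0)
      strongConnect(G, V, S);
  return S.SCCs;
}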

The other tricky bit here, and what actually took almost all the time
and trials and errors I spent building this, is exactly *what* graph
structure to build for the SCCs. The naive thing to build is the call
graph in its newly acyclic form. I wrote about 4 versions of this which
did precisely this. Inevitably, when I experimented with them across
various use cases, they became incredibly awkward. It was all
implementable, but it felt like a completely wrong fit. Square peg, round
hole. There were two overriding aspects that pushed me in a different
direction:

1) We want to discover the SCC graph in a postorder fashion. That means
   the root node will be the *last* node we find. Using the call-SCC DAG
   as the graph structure of the SCCs results in an orphaned graph until
   we discover a root.

2) We will eventually want to walk the SCC graph in parallel, exploring
   distinct sub-graphs independently, and synchronizing at merge points.
   This again is not helped by the call-SCC DAG structure.

The structure which, quite surprisingly, ended up being completely
natural to use is the *inverse* of the call-SCC DAG. We add the leaf
SCCs to the graph as "roots", and have edges to the caller SCCs. Once
I switched to building this structure, everything just fell into place
elegantly.

Aside from general cleanups (there are FIXMEs and too few comments
overall) that are still needed, the other missing piece of this is
support for iterating across levels of the SCC graph. These will become
useful for implementing #2, but they aren't an immediate priority.

Once SCCs are in good shape, I'll be working on adding mutation support
for incremental updates and adding the pass manager that this analysis
enables.

llvm-svn: 206581
2014-04-18 10:50:32 +00:00
Benjamin Kramer e6c821ef4c X86: Pattern match scalar loads + vcvtph2ps into just vcvtph2ps.
vcvtph2ps only reads the lower 64 bits of the address passed to the
intrinsic.

llvm-svn: 206579
2014-04-18 10:45:33 +00:00
Chandler Carruth 1911882569 Revert r206565 (and r206566 which updated tests).
This commit was attributed to a different person from the person who
posted the patch to the list, and the person who posted it to the list
claimed when they did that they were not the author, but that the author
was yet a third person. I don't know what is going on here, but
reverting until the attribution is clear and the author has explicitly
contributed the patch.

Also, the review hasn't really involved any of the MC maintainers and
that seems questionable too.

llvm-svn: 206576
2014-04-18 09:35:51 +00:00
Tim Northover 66c36b814f AArch64/ARM64: port atomics test to ARM64.
Covers quite a few extra instructions (like any of the max/min ones
which were broken until recently on ARM64).

llvm-svn: 206575
2014-04-18 09:31:31 +00:00
Tim Northover a2c4c71c12 AArch64/ARM64: spot a greater variety of concat_vector operations.
Code mostly copied from AArch64, just tidied up a trifle and plumbed
into the ARM64 way of doing things.

This also enables the AArch64 tests which inspired the previous
untested commits.

llvm-svn: 206574
2014-04-18 09:31:27 +00:00
Tim Northover 848bb3ced5 ARM64: implement cunning optimisation from AArch64
A vector extract followed by a dup can become a single instruction even if the
types don't match. AArch64 handled this in ISelLowering, but a few reasonably
simple patterns can take care of it in TableGen, so that's where I've put it.

llvm-svn: 206573
2014-04-18 09:31:20 +00:00
Tim Northover 5ec51a8981 ARM64: spot a vector_shuffle that maps to INS and expand.
Tests will be coming very shortly when all the optimisations needed to
support AArch64's neon-copy.ll file are committed.

llvm-svn: 206572
2014-04-18 09:31:15 +00:00
Tim Northover 46d98ea8de ARM64: nick some AArch64 patterns for extract/insert -> INS.
Tests will be committed shortly when all optimisations needed to
support AArch64's neon-copy.ll file are supported.

llvm-svn: 206571
2014-04-18 09:31:11 +00:00
Tim Northover 8b2fa3dfef AArch64/ARM64: emit all vector FP comparisons as such.
ARM64 was scalarizing some vector comparisons which don't quite map to
AArch64's compare and mask instructions. AArch64's approach of sacrificing a
little efficiency to emulate them with the limited set available was better, so
I ported it across.

More "inspired by" than copy/paste since the backend's internal expectations
were a bit different, but the tests were invaluable.

llvm-svn: 206570
2014-04-18 09:31:07 +00:00
Tim Northover 0a44e66bb8 AArch64/ARM64: port BSL logic from AArch64 & enable test.
I enhanced it a little in the process. The decision shouldn't really be based
on whether a BUILD_VECTOR is a splat: any set of constants will do the job
provided they're related in the correct way.

Also, the BUILD_VECTOR could be any operand of the incoming AND nodes, so it's
best to check for all 4 possibilities rather than assuming it'll be the RHS.

llvm-svn: 206569
2014-04-18 09:31:01 +00:00
Tim Northover 547a4ae6fa AArch64/ARM64: copy byval implementation from AArch64.
It's not actually used to handle C or C++ ABI rules on ARM64, but could well be
emitted by other language front-ends, so it's as well to have a sensible
implementation.

llvm-svn: 206568
2014-04-18 09:30:52 +00:00
Jiangning Liu 300a6b84f2 Add missing config file for newly added test case introduced by r206563.
llvm-svn: 206567
2014-04-18 09:05:50 +00:00
Yaron Keren d0751ce197 Updated test with register names following r206565.
llvm-svn: 206566
2014-04-18 08:50:09 +00:00
Yaron Keren 8ca45e0c05 Patch by Ray Donnelly.
Emit WIN64 SEH registers by name instead of just number.

llvm-svn: 206565
2014-04-18 08:03:38 +00:00
Kostya Serebryany 22e8810838 [asan] one more workaround for PR17409: don't do BB-level coverage instrumentation if there are more than N (=1500) basic blocks. This makes ASanCoverage work on libjpeg_turbo/jchuff.c used by Chrome, which has 1824 BBs
llvm-svn: 206564
2014-04-18 08:02:42 +00:00
Jiangning Liu ad874fca28 This commit allows vectorized loops to be unrolled by a factor of 2 for AArch64.
A new test case is also added for ARM64.

Patched by Z.Zheng

llvm-svn: 206563
2014-04-18 07:57:54 +00:00
Matt Arsenault 209a7b92b5 R600: Minor cleanups.
Fix indentation, improve line wrapping, and remove unused includes.

llvm-svn: 206562
2014-04-18 07:40:20 +00:00
Lang Hames bc876017c2 [ExecutionEngine] Allow JIT clients to enable/disable module verification.
Previously module verification was always enabled, with no way to turn it off.
As of this commit, module verification is on by default in Debug builds, and off
by default in release builds. The default behaviour can be overridden by calling
setVerifyModules(bool) on the JIT instance (this works for both the old JIT and
MCJIT).
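
A small usage sketch; the surrounding setup is assumed, and only the
setVerifyModules(bool) call is the interface this commit adds:

#include "llvm/ExecutionEngine/ExecutionEngine.h"

// Force verification back on (e.g. in a Release build of LLVM) while
// debugging a JIT client.
void enableModuleVerification(llvm::ExecutionEngine &JIT) {
  JIT.setVerifyModules(true);
}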

<rdar://problem/16150008>

llvm-svn: 206561
2014-04-18 06:48:23 +00:00
Jiangning Liu 40d81e10c5 This is one of the optimizations ported from ARM64 to AArch64 to address the performance gap between these two back ends. The test case newly added for AArch64 already exists in ARM64.
Patched by Z.Zheng

llvm-svn: 206559
2014-04-18 05:58:09 +00:00
Matt Arsenault 78b8670aac R600/SI: Try to use scalar BFE.
Use scalar BFE with constant shift and offset when possible.
This is complicated by the fact that the scalar version packs
the two operands of the vector version into one.

llvm-svn: 206558
2014-04-18 05:19:26 +00:00
Jiangning Liu e56c30614f This commit enables unaligned memory accesses of vector types on the AArch64 back end. This should boost vectorized code performance.
Patched by Z. Zheng

llvm-svn: 206557
2014-04-18 03:58:38 +00:00
Duncan P. N. Exon Smith e576167df8 Revert "blockfreq: Rewrite BlockFrequencyInfoImpl"
This reverts commits r206548, r206549 and r206549.

There are some unit tests failing that aren't failing locally [1], so
reverting until I have time to investigate.

[1]: http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/1816

llvm-svn: 206556
2014-04-18 02:17:43 +00:00
Justin Bogner e877eda50d OnDiskHashTable: Provide iterator_range for keys and data
llvm-svn: 206555
2014-04-18 02:10:26 +00:00
Duncan P. N. Exon Smith 878cf2b804 blockfreq: Really fix r206548 (and r206549)
Turns out this code is dead.

llvm-svn: 206554
2014-04-18 02:10:09 +00:00
Jim Grosbach 5198f3e988 c++11: Tidy up tblgen w/ range loops.
IntrInfoEmitter cleanup.

llvm-svn: 206553
2014-04-18 02:09:07 +00:00
Jim Grosbach af81445411 iterator access to scheduling classes
llvm-svn: 206552
2014-04-18 02:09:04 +00:00
Jim Grosbach 56fd888e10 iterator_range accessor for CodeGenTarget instruction list.
llvm-svn: 206551
2014-04-18 02:09:02 +00:00
Jim Grosbach 0e28a3554b iterator based accessors for CodeGenInstruction operand list.
llvm-svn: 206550
2014-04-18 02:08:58 +00:00
Duncan P. N. Exon Smith c7abca54cf blockfreq: Fixing MSVC after r206548?
llvm-svn: 206549
2014-04-18 02:06:24 +00:00
Duncan P. N. Exon Smith 12e68e1733 blockfreq: Rewrite BlockFrequencyInfoImpl
Rewrite the shared implementation of BlockFrequencyInfo and
MachineBlockFrequencyInfo entirely.

The old implementation had a fundamental flaw:  precision losses from
nested loops (or very wide branches) compounded past loop exits (and
convergence points).

The @nested_loops testcase at the end of
test/Analysis/BlockFrequencyAnalysis/basic.ll is motivating.  This
function has three nested loops, with branch weights in the loop headers
of 1:4000 (exit:continue).  The old analysis gives nonsensical results:

    Printing analysis 'Block Frequency Analysis' for function 'nested_loops':
    ---- Block Freqs ----
     entry = 1.0
     for.cond1.preheader = 1.00103
     for.cond4.preheader = 5.5222
     for.body6 = 18095.19995
     for.inc8 = 4.52264
     for.inc11 = 0.00109
     for.end13 = 0.0

The new analysis gives correct results:

    Printing analysis 'Block Frequency Analysis' for function 'nested_loops':
    block-frequency-info: nested_loops
     - entry: float = 1.0, int = 8
     - for.cond1.preheader: float = 4001.0, int = 32007
     - for.cond4.preheader: float = 16008001.0, int = 128064007
     - for.body6: float = 64048012001.0, int = 512384096007
     - for.inc8: float = 16008001.0, int = 128064007
     - for.inc11: float = 4001.0, int = 32007
     - for.end13: float = 1.0, int = 8

Most importantly, the frequency leaving each loop matches the frequency
entering it.

The new algorithm leverages BlockMass and PositiveFloat to maintain
precision, separates "probability mass distribution" from "loop
scaling", and uses dithering to eliminate probability mass loss.  I have
unit tests for these types out of tree, but it was decided in the review
to make the classes private to BlockFrequencyInfoImpl, and try to shrink
them (or remove them entirely) in follow-up commits.

The new algorithm should generally have a complexity advantage over the
old.  The previous algorithm was quadratic in the worst case.  The new
algorithm is still worst-case quadratic in the presence of irreducible
control flow, but it's linear without it.

The key difference between the old algorithm and the new is that control
flow within a loop is evaluated separately from control flow outside,
limiting propagation of precision problems and allowing loop scale to be
calculated independently of mass distribution.  Loops are visited
bottom-up, their loop scales are calculated, and they are replaced by
pseudo-nodes.  Mass is then distributed through the function, which is
now a DAG.  Finally, loops are revisited top-down to multiply through
the loop scales and the masses distributed to pseudo nodes.
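
A simplified, hypothetical sketch of the dithering idea, distributing an
integer mass across successor edges so that no mass is ever lost to
rounding (this is not the actual BlockFrequencyInfoImpl code):

#include <cstdint>
#include <vector>

struct Edge {
  uint64_t Weight;   // branch weight out of the source block
  uint64_t Mass = 0; // mass assigned to this edge
};

// Split Mass among Succs in proportion to their weights.  Each edge takes
// its share of the *remaining* mass, so the shares always sum exactly to
// Mass and rounding slack is dithered across edges instead of dropped.
void distributeMass(uint64_t Mass, std::vector<Edge> &Succs) {
  uint64_t WeightLeft = 0;
  for (const Edge &E : Succs)
    WeightLeft += E.Weight;
  uint64_t MassLeft = Mass;
  for (Edge &E : Succs) {
    // Assumes MassLeft * Weight fits in 64 bits; the real implementation
    // uses wider fixed-point arithmetic (BlockMass/PositiveFloat) instead.
    uint64_t Share = WeightLeft ? MassLeft * E.Weight / WeightLeft : 0;
    E.Mass = Share;
    MassLeft -= Share;
    WeightLeft -= E.Weight;
  }
}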

There are some remaining flaws.

  - Irreducible control flow isn't modelled correctly.  LoopInfo and
    MachineLoopInfo ignore irreducible edges, so this algorithm will
    fail to scale accordingly.  There's a note in the class
    documentation about how to get closer.  See also the comments in
    test/Analysis/BlockFrequencyInfo/irreducible.ll.

  - Loop scale is limited to 4096 per loop (2^12) to avoid exhausting
    the 64-bit integer precision used downstream.

  - The "bias" calculation proposed on llvmdev is *not* incorporated
    here.  This will be added in a follow-up commit, once comments from
    this review have been handled.

llvm-svn: 206548
2014-04-18 01:57:45 +00:00
Matt Arsenault 27cc958dff R600/SI: Match sign_extend_inreg to s_sext_i32_i8 and s_sext_i32_i16
llvm-svn: 206547
2014-04-18 01:53:18 +00:00
Paul Robinson f4e6a5436d Fix example for VS2012.
llvm-svn: 206544
2014-04-18 01:20:08 +00:00
Duncan P. N. Exon Smith 49f3ec80c2 PMBuilder: Expose an option to disable tail calls
Adds API to allow frontends to disable tail calls in PassManagerBuilder.

<rdar://problem/16050591>

llvm-svn: 206542
2014-04-18 01:05:15 +00:00
Tom Stellard 1aa6cb4d88 R600/SI: Use SReg_64 instead of VSrc_64 when selecting BUILD_PAIR
llvm-svn: 206541
2014-04-18 00:36:21 +00:00
Jim Grosbach 6bfe18a365 [ARM64,C++11] Range'ify another loop.
llvm-svn: 206539
2014-04-17 23:41:57 +00:00
Diego Novillo 0915c047c2 Fix bug 19437 - Only add discriminators for DWARF 4 and above.
Summary:
This prevents the discriminator generation pass from triggering if
the DWARF version being used in the module is prior to 4.

Reviewers: echristo, dblaikie

CC: llvm-commits

Differential Revision: http://reviews.llvm.org/D3413

llvm-svn: 206507
2014-04-17 22:33:50 +00:00
Nuno Lopes 9ced19abe8 remove some dead code
lib/Analysis/IPA/InlineCost.cpp         |   18 ------------------
 lib/Analysis/RegionPass.cpp             |    1 -
 lib/Analysis/TypeBasedAliasAnalysis.cpp |    1 -
 lib/Transforms/Scalar/LoopUnswitch.cpp  |   21 ---------------------
 lib/Transforms/Utils/LCSSA.cpp          |    2 --
 lib/Transforms/Utils/LoopSimplify.cpp   |    6 ------
 utils/TableGen/AsmWriterEmitter.cpp     |   13 -------------
 utils/TableGen/DFAPacketizerEmitter.cpp |    7 -------
 utils/TableGen/IntrinsicEmitter.cpp     |    2 --
 9 files changed, 71 deletions(-)

llvm-svn: 206506
2014-04-17 22:26:44 +00:00
Reed Kotler 720c5ca4ea Start pushing changes for Mips Fast-Isel
llvm-svn: 206505
2014-04-17 22:15:34 +00:00
Louis Gerbarg e43a24f444 Make test/CodeGen/ARM64/vector-insertion.ll explicitly select neon syntax
Change the command line in vector-insertion.ll to explicitly set the neon syntax
to apple so that buildbots that default to other syntaxes won't fail.

llvm-svn: 206502
2014-04-17 21:32:41 +00:00
Tom Stellard aeeea8a864 R600: Add comment clarifying use of sext for result of MUL_U24
llvm-svn: 206501
2014-04-17 21:00:13 +00:00
Tom Stellard 868fd92e54 R600/SI: Stop using i128 as the resource descriptor type
Having i128 as a legal type complicates the legalization phase.  v4i32
is already a legal type, so we will use that instead.

This fixes several piglit tests.

llvm-svn: 206500
2014-04-17 21:00:11 +00:00
Tom Stellard 334b29c7f6 R600/SI: Change default register class for i32 to SReg_32
SIFixSGPRCopies is smart enough to handle this now.

llvm-svn: 206499
2014-04-17 21:00:09 +00:00
Tom Stellard 4f3b04de21 R600/SI: Teach SIInstrInfo::moveToVALU() how to handle PHI instructions
llvm-svn: 206498
2014-04-17 21:00:07 +00:00
Tom Stellard e1a244502c R600/SI: Legalize operands after changing dst reg in FixSGPRCopies
Otherwise we may not legalize some illegal REG_SEQUENCE instructions.

llvm-svn: 206497
2014-04-17 21:00:01 +00:00
Louis Gerbarg 153e695ee2 Improve ARM64 vector creation
This patch improves the performance of vector creation in cases where
several of the lanes in the vector are a constant floating point value. It
also includes new patterns to fold together some of the instructions when the
value is 0.0f. Test cases included.

rdar://16349427

llvm-svn: 206496
2014-04-17 20:51:50 +00:00
Jim Grosbach 0fba6d98fc ARM64: [su]xtw use W regs as inputs, not X regs.
Update the SXT[BHW]/UXTW instruction aliases and the shifted reg addressing
mode handling.

PR19455 and rdar://16650642

llvm-svn: 206495
2014-04-17 20:47:31 +00:00
David Blaikie 5b01593de4 ManagedStatic is never built with a null constructor, remove support for it.
llvm-svn: 206492
2014-04-17 20:30:35 +00:00
Tim Northover 11a6082e33 ARM64: switch to IR-based atomic operations.
Goodbye code!

(Game: spot the bug fixed by the change).

llvm-svn: 206490
2014-04-17 20:00:33 +00:00
Tim Northover 0129f298c4 ARM64: add acquire/release versions of the existing atomic intrinsics.
These will be needed to support IR-level lowering of atomic
operations.

llvm-svn: 206489
2014-04-17 20:00:24 +00:00
Gerolf Hoflehner ecebc3730e Reverse 206485.
After some discussions, the preferred semantics of
the always_inline attribute is to
inline always when the compiler can determine
that it is safe to do so.

llvm-svn: 206487
2014-04-17 19:14:06 +00:00
Josh Magee adfde5fef6 [stack protector] Make the StackProtector pass respect ssp-buffer-size.
Previously, SSPBufferSize was assigned the value of the "stack-protector-buffer-size"
attribute after all uses of SSPBufferSize.  The effect was that the default
SSPBufferSize was always used during analysis.  I moved the check for the
attribute before the analysis; now --param ssp-buffer-size= works correctly again.
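
A hedged sketch of the shape of the fix, resolving the attribute before it
is consulted; the helper name is hypothetical, and the attribute string is
the one named above:

#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"

// Return the per-function buffer-size threshold, falling back to the
// command-line/default value when the attribute is absent or malformed.
static unsigned getSSPBufferSize(const llvm::Function &F, unsigned Default) {
  llvm::Attribute A = F.getFnAttribute("stack-protector-buffer-size");
  if (A.isStringAttribute()) {
    unsigned Parsed;
    if (!A.getValueAsString().getAsInteger(10, Parsed))
      return Parsed;
  }
  return Default;
}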

Differential Revision: http://reviews.llvm.org/D3349

llvm-svn: 206486
2014-04-17 19:08:36 +00:00
Tim Northover 037f26f212 Atomics: promote ARM's IR-based atomics pass to CodeGen.
Still only 32-bit ARM using it at this stage, but the promotion allows
direct testing via opt and is a reasonably self-contained patch on the
way to switching ARM64.

At this point, other targets should be able to make use of it without
too much difficulty if they want. (See ARM64 commit coming soon for an
example).

llvm-svn: 206485
2014-04-17 18:22:47 +00:00
Duncan P. N. Exon Smith b6f5811a3f C++11: Compatibility with (C++03 => MSVC)
llvm-svn: 206481
2014-04-17 18:02:36 +00:00
Duncan P. N. Exon Smith 8443d58a81 C++11: Document some limitations imposed by MSVC
llvm-svn: 206480
2014-04-17 18:02:34 +00:00
Matt Arsenault a90d22fad5 R600/SI: f64 frint is legal on CI
llvm-svn: 206475
2014-04-17 17:06:37 +00:00
Chad Rosier c4eb4f8827 [AArch64] Implement the getCSRFirstUseCost API, mirroring that in ARM64.
llvm-svn: 206473
2014-04-17 16:19:54 +00:00
NAKAMURA Takumi cd1fc4bc1b Inliner::OptimizationRemark: Fix crash in clang/test/Frontend/optimization-remark.c on some hosts, including --vg.
The DebugLoc in the CallSite would not live after the Inliner. It should be copied before the Inliner runs.

llvm-svn: 206459
2014-04-17 12:22:14 +00:00
Chandler Carruth 7e107dabd6 [LCG] Remove a dead declaration. This stopped being used when I switched
to a more normal move operation on the graph itself. The definition
already got removed, but I missed the declaration.

llvm-svn: 206455
2014-04-17 09:41:54 +00:00
Chandler Carruth b5f938dc00 [LCG] Move the call graph node class into the graph class's definition.
This will become necessary to build up the SCC iterators and SCC
definitions. Moving it now so that subsequent diffs are incremental.

llvm-svn: 206454
2014-04-17 09:40:13 +00:00
Chandler Carruth 4c1b05f822 Make the User::value_op_iterator a random access iterator. I had written
this code ages ago and lost track of it. Seems worth doing though --
this thing can get called from places that would benefit from knowing
that std::distance is O(1). Also add a very fledgeling unittest for
Users and make sure various aspects of this seem to work reasonably.

llvm-svn: 206453
2014-04-17 09:07:50 +00:00
Chandler Carruth b60cb315bc [LCG] Just move the allocator (now that we can) when moving a call
graph. This simplifies the custom move constructor operation to one of
walking the graph and updating the 'up' pointers to point to the new
location of the graph. Switch the nodes from a reference to a pointer
for the 'up' edge to facilitate this.

llvm-svn: 206450
2014-04-17 07:25:59 +00:00
Chandler Carruth 81f497d176 [LCG] Remove the Module reference member which we weren't using for
anything and doesn't make sense if assigning.

llvm-svn: 206449
2014-04-17 07:22:19 +00:00
Chandler Carruth 75f2ca4787 [Allocator] Make SpecificBumpPtrAllocator also movable and move
assignable.

llvm-svn: 206448
2014-04-17 07:08:56 +00:00
Craig Topper 0a9bf4c0c5 [X86] Add disassembler support for the 0x0f 0x7f form of movq %mm, %mm.
llvm-svn: 206447
2014-04-17 06:33:45 +00:00
Saleem Abdulrasool 98938f1e0a objdump: identify WoA WinCOFF/ARM correctly
Since LLVM currently only supports WinCOFF, assume that the input is WinCOFF
rather than another type of COFF file (ECOFF/XCOFF).  If the architecture is
detected as thumb (e.g. the file has an IMAGE_FILE_MACHINE_ARMNT magic) then use
a triple of thumbv7-windows.

This allows for objdump to properly handle WoA object files without having to
specify the target triple manually.

llvm-svn: 206446
2014-04-17 06:17:23 +00:00
Saleem Abdulrasool 1614b26886 MC: rework static_assert to be MSVC compatible
Visual Studio does not permit referencing a structure member as a static field
for sizeof calculations.  Resort to a pointer cast which is compatible across
Visual Studio and other compilers.
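
A generic illustration of the workaround; the struct and field are
hypothetical, not the actual MC code:

struct Fixup { unsigned Kind; };

// C++11 allows naming a non-static member inside an unevaluated sizeof,
// but MSVC 2012 rejects it:
//   static_assert(sizeof(Fixup::Kind) <= 8, "...");
// Going through a (never dereferenced) pointer cast compiles everywhere:
static_assert(sizeof(((Fixup *)nullptr)->Kind) <= 8,
              "fixup kind field must stay small");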

llvm-svn: 206445
2014-04-17 06:17:20 +00:00