Commit Graph

75340 Commits

Author SHA1 Message Date
Duncan P. N. Exon Smith 6e6428d981 Revert "Use the integrated assembler by default on 32-bit PowerPC and SPARC"
This reverts commit r225213.  It's failing on multiple buildbots [1][2].

[1]: http://lab.llvm.org:8011/builders/clang-x86_64-debian-fast/builds/22032
[2]: http://lab.llvm.org:8080/green/view/Clang/job/clang-stage1-cmake-RA-incremental_check/2357/

llvm-svn: 225222
2015-01-05 23:31:51 +00:00
Hal Finkel 6837077fcf [PowerPC] Remove old README.txt entry
We no longer generate horrible code for the stated function:

void f(signed char *a, _Bool b, _Bool c) {
  signed char t = 0;
  if (b)  t = *a;
  if (c)  *a = t;
}

for which we now generate:

.L.f:
        andi. 5, 5, 1
        cmpldi 1, 4, 0
        li 5, 0
        beq 1, .LBB0_2
        lbz 5, 0(3)
.LBB0_2:                                # %if.end
        bclr 4, 1, 0
        stb 5, 0(3)
        blr

so we don't need the README.txt entry.

llvm-svn: 225217
2015-01-05 22:20:22 +00:00
Simon Pilgrim 4c55af6850 [X86][SSE] lowerVectorShuffleAsByteShift tidyup
Removed local isSequential predicate and use standard helper isSequentialOrUndefInRange instead.

llvm-svn: 225216
2015-01-05 22:08:48 +00:00
Hal Finkel 9187711f08 [PowerPC] Convert a README.txt entry into a better test
We now produce the desired code as noted in the README.txt file (no spurious
'or' instruction). Remove the README entry and improve the regression test.

llvm-svn: 225214
2015-01-05 21:53:52 +00:00
Brad Smith 9cf1e26201 Use the integrated assembler by default on 32-bit PowerPC and SPARC
llvm-svn: 225213
2015-01-05 21:48:16 +00:00
Hal Finkel f4044b02a5 [PowerPC] Remove README.txt entry
This entry has been rendered irrelevant now that we have proper CR bit
tracking.

llvm-svn: 225211
2015-01-05 21:41:26 +00:00
Colin LeMahieu dacf057bdc [Hexagon] Adding add/sub with carry, logical shift left by immediate and memop instructions. Removing old defs without bits and updating references.
llvm-svn: 225210
2015-01-05 21:36:38 +00:00
Hal Finkel c7d35bb5b1 [PowerPC] Add a test for truncating a shifted load
We now produce the desired code as noted in the README.txt file. Remove the
README entry and add a regression test.

llvm-svn: 225209
2015-01-05 21:33:14 +00:00
Frederic Riss e541e0b327 Make DIE.h a public CodeGen header.
dsymutil would like to use all the AsmPrinter/MCStreamer infrastructure
to stream out the DWARF. In order to do so, it will reuse the DIE object
and so this header needs to be public.

The interface exposed here has some corners that cannot be used without a
DwarfDebug object, but clients that want to stream Dwarf can just avoid
these.

Differential Revision: http://reviews.llvm.org/D6695

llvm-svn: 225208
2015-01-05 21:29:41 +00:00
Hal Finkel a4750dec99 [PowerPC] Add another test for load/store with update
We now produce the desired code as noted in the README.txt file. Remove the
README entry and add a regression test.

llvm-svn: 225205
2015-01-05 21:22:42 +00:00
Hal Finkel 200d2ad188 [PowerPC] Fold i1 extensions with other ops
Consider this function from our README.txt file:

  int foo(int a, int b) { return (a < b) << 4; }

We now explicitly track CR bits by default, so the comment in the README.txt
about not really having a SETCC is no longer accurate, but we did generate this
somewhat silly code:

        cmpw 0, 3, 4
        li 3, 0
        li 12, 1
        isel 3, 12, 3, 0
        sldi 3, 3, 4
        blr

which generates the zext as a select between 0 and 1, and then shifts the
result by a constant amount. Here we preprocess the DAG in order to fold the
results of operations on an extension of an i1 value into the SELECT_I[48]
pseudo instruction when the resulting constant can be materialized using one
instruction (just like the 0 and 1). This was not implemented as a DAGCombine
because the resulting code would have been anti-canonical and would depend on
replacing chained user nodes, which does not fit well into the lowering
paradigm. Now we generate:

        cmpw 0, 3, 4
        li 3, 0
        li 12, 16
        isel 3, 12, 3, 0
        blr

which is less silly.

llvm-svn: 225203
2015-01-05 21:10:24 +00:00
Simon Pilgrim 71b96b35e1 [X86][SSE] Fixed description for isSequentialOrUndefInRange. NFC.
llvm-svn: 225202
2015-01-05 21:09:48 +00:00
Colin LeMahieu 28bb02a8c7 [Hexagon] Adding rounding reg/reg variants, accumulating multiplies, and accumulating shifts.
llvm-svn: 225201
2015-01-05 20:56:41 +00:00
Duncan P. N. Exon Smith 1c00c9f7fc IR: Prune arguments to ValueAsMetadata::ValueAsMetadata()
`LLVMContext` isn't actually used.

llvm-svn: 225200
2015-01-05 20:41:25 +00:00
Colin LeMahieu abdf2b37d8 [Hexagon] Adding V4 bit manipulating instructions, removing ALU defs without encoding bits.
llvm-svn: 225199
2015-01-05 20:35:54 +00:00
Colin LeMahieu 3acfddd6b5 [Hexagon] Adding V4 logic-logic instructions and tests.
llvm-svn: 225198
2015-01-05 20:14:58 +00:00
Colin LeMahieu ff10c8c95c [Hexagon] Adding orand, bitsplit reg/reg, and modwrap instructions.
llvm-svn: 225197
2015-01-05 20:04:40 +00:00
Hal Finkel 49557f1b42 [PowerPC] Remove zexts after i32 ctlz
The 64-bit semantics of cntlzw are not special: the 32-bit leading-zero count is
stored as a 64-bit value in the range [0,32]. As a result, it is always zero
extended, and cntlzw can be added to the PPCISelDAGToDAG peephole optimization as a
frontier instruction for the removal of unnecessary zero extensions.

llvm-svn: 225192
2015-01-05 18:52:29 +00:00
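As a rough source-level sketch of the pattern this affects (the function name is made up and this is not the patch itself): the 32-bit count is always in [0,32], so the widening below needs no separate zero-extension instruction once the peephole recognizes cntlzw.

#include <cstdint>

uint64_t clz32_widened(uint32_t X) {
  // __builtin_clz is undefined for a zero input, so guard it explicitly.
  uint32_t Count = X ? __builtin_clz(X) : 32;
  return Count; // this implicit zero-extension is what the peephole can now drop
}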
Hal Finkel 4e2c78228a [PowerPC] Remove zexts after byte-swapping loads
lhbrx and lwbrx not only load their data with byte swapping, but also clear the
upper 32 bits (at least). As a result, they can be added to the PPCISelDAGToDAG
peephole optimization as frontier instructions for the removal of unnecessary
zero extensions.

llvm-svn: 225189
2015-01-05 18:09:06 +00:00
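A minimal source-level sketch of the affected pattern (assuming, as is typical on PPC64, that the load-plus-bswap pair is matched to lwbrx; the function name is illustrative):

#include <cstdint>

uint64_t load_swapped(const uint32_t *P) {
  uint32_t Swapped = __builtin_bswap32(*P); // load + byte swap -> lwbrx
  return Swapped; // upper 32 bits are already clear, so no extra zext is needed
}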
Colin LeMahieu 5e079577e1 [Hexagon] Adding round reg/imm and bitsplit instructions.
llvm-svn: 225188
2015-01-05 18:08:21 +00:00
Saleem Abdulrasool 150a1dc5c2 SymbolRewriter: use iplist::splice
The swap implementation for iplist is currently unsupported.  Simply splice the
old list into place, which achieves the same purpose.  This is needed in order
to thread the -frewrite-map-file frontend option correctly.  NFC.

llvm-svn: 225186
2015-01-05 17:56:32 +00:00
Saleem Abdulrasool d37ce30888 SymbolRewriter: 80-column
Wrap a couple of lines.  NFC.

llvm-svn: 225185
2015-01-05 17:56:29 +00:00
Ahmed Bougacha d54c448d34 [AArch64] Improve codegen of store lane instructions by avoiding GPR usage.
We used to generate code similar to:

  umov.b        w8, v0[2]
  strb  w8, [x0, x1]

because the STR*ro* patterns were preferred to ST1*.
Instead, we can avoid going through GPRs, and generate:

  add   x8, x0, x1
  st1.b { v0 }[2], [x8]

This patch increases the ST1* AddedComplexity to achieve that.

rdar://16372710
Differential Revision: http://reviews.llvm.org/D6202

llvm-svn: 225183
2015-01-05 17:10:26 +00:00
Ahmed Bougacha f964df3640 [AArch64] Improve codegen of store lane 0 instructions by directly storing the subregister.
For 0-lane stores, we used to generate code similar to:

  fmov w8, s0
  str w8, [x0, x1, lsl #2]

instead of:

  str s0, [x0, x1, lsl #2]

To correct that: for store lane 0 patterns, directly match to STR <subreg>0.

Byte-sized instructions don't have the special case for a 0 index,
because FPR8s are defined to have untyped content.

rdar://16372710
Differential Revision: http://reviews.llvm.org/D6772

llvm-svn: 225181
2015-01-05 17:02:28 +00:00
Karthik Bhat 93f27ce886 Select lower fsub,fabs pattern to fabd on AArch64
This patch lowers patterns such as:
  fsub   v0.4s, v0.4s, v1.4s
  fabs   v0.4s, v0.4s
to
  fabd  v0.4s, v0.4s, v1.4s
on AArch64.

Review: http://reviews.llvm.org/D6791
llvm-svn: 225169
2015-01-05 13:57:59 +00:00
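For illustration, source along these lines (assuming the loop is auto-vectorized to v4f32 operations; the function name is made up) is where the fsub+fabs pair arises and can now be selected as fabd:

#include <cmath>

void absdiff(float *Out, const float *A, const float *B, int N) {
  for (int I = 0; I < N; ++I)
    Out[I] = std::fabs(A[I] - B[I]); // per vector lane: fsub + fabs -> fabd
}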
Charlie Turner 6632d1f67e Parse Tag_compatibility correctly.
Tag_compatibility takes two arguments, but before this patch it would
erroneously accept just one; it now produces an error in that case.

Change-Id: I530f918587620d0d5dfebf639944d6083871ef7d
llvm-svn: 225167
2015-01-05 13:26:37 +00:00
Charlie Turner 8b2caa458f Emit the build attribute Tag_conformance.
Claim conformance to version 2.09 of the ARM ABI.

This build attribute must be emitted first amongst the build attributes when
written to an object file. This is to simplify conformance detection by
consumers.

Change-Id: If9eddcfc416bc9ad6e5cc8cdcb05d0031af7657e
llvm-svn: 225166
2015-01-05 13:12:17 +00:00
Karthik Bhat 8ec742c2f9 Select lower sub,abs pattern to sabd on AArch64
This patch lowers patterns such as:
  sub	v0.4s, v0.4s, v1.4s
  abs	v0.4s, v0.4s
to
  sabd	v0.4s, v0.4s, v1.4s
on AArch64.

Review: http://reviews.llvm.org/D6781
llvm-svn: 225165
2015-01-05 13:11:07 +00:00
Chandler Carruth 539dc4b9d5 [PM] Don't run the machinery of invalidating all the analysis passes
when all are being preserved.

We want to short-circuit this for a couple of reasons. One, I don't
really want passes to grow a dependency on actually receiving their
invalidate call when they've been preserved. I'm thinking about removing
this entirely. But more importantly, preserving everything is likely to
be the common case in a lot of scenarios, and it would be really good to
bypass all of the invalidation and preservation machinery there.
Avoiding calling N opaque functions to try to invalidate things that are
by definition still valid seems important. =]

This wasn't really inspired by much other than seeing the spam in the
logging for analyses, but it seems better to get it checked in rather
than forgetting about it.

llvm-svn: 225163
2015-01-05 12:32:11 +00:00
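A simplified sketch of the short-circuit's shape (stand-in types, not the actual new pass manager internals):

#include <functional>
#include <vector>

struct PreservedAnalyses {
  bool All = false;
  bool areAllPreserved() const { return All; }
};

struct AnalysisManagerSketch {
  std::vector<std::function<void()>> Invalidators; // one per cached analysis

  void invalidate(const PreservedAnalyses &PA) {
    if (PA.areAllPreserved())
      return; // everything is still valid; skip the whole invalidation walk
    for (auto &Invalidate : Invalidators)
      Invalidate();
  }
};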
Chandler Carruth e5e8fb3bf6 [PM] Add names and debug logging for analysis passes to the new pass
manager.

This starts to allow us to test analyses more easily, but it's really
only the beginning. Some of the code here is still untestable without
manual changes to create analysis passes, but I wanted to factor it into
chunks that are as small as possible.

Next up, in order to be able to test things, are (in no particular order):
- No-op analysis passes so we don't have to use real ones to exercise
  the pass manager itself.
- Automatic way of generating dummy passes that require an analysis be
  run, including a variant that calls a 'print' method on a pass to make
  it even easier to print out the results of an analysis.
- Dummy passes that invalidate all analyses for their IR unit so we can
  test invalidation and re-runs.
- Automatic way to print each analysis pass as it is re-run.
- Automatic but optional verification of analysis passes everywhere
  possible.

I'm not claiming I'll get to all of these immediately, but that's what
is in the pipeline at some stage. I'm fleshing out exactly what I need
and what to prioritize by working on converting analyses and then trying
to test the conversion. =]

llvm-svn: 225162
2015-01-05 12:21:44 +00:00
Craig Topper d3c02f177a Replace several 'assert(false' with 'llvm_unreachable' or fold a condition into the assert.
llvm-svn: 225160
2015-01-05 10:15:49 +00:00
Jiangning Liu 40c1b35292 Fixed a bug in the memory dependence checking module of loop vectorization. The following loop should not be vectorized with the current algorithm.
{code}
// loop body
   ... = a[i]          (1)
   ... = a[i+1]        (2)
   .......
   a[i+1] = ....       (3)
   a[i] = ...          (4)
{code}

The algorithm tries to collect memory access candidates from AliasSetTracker, and then check memory dependences among them. The memory accesses are unique in AliasSetTracker, and a single memory access in AliasSetTracker may map to multiple entries in AccessAnalysis, which could cover both 'read' and 'write'. Originally the algorithm only checked the 'write' entry in Accesses if only a 'write' exists. This is incorrect; the consequence is that it ignored all read accesses, so some RAW and WAR dependences were missed.

For the case given above, if we ignore the two reads, the dependence between (1) and (3) cannot be captured, and this loop will be incorrectly vectorized.

The fix simply inserts a new loop to find all entries in Accesses. Since it skips most other memory accesses by checking the Value pointer at the very beginning of the loop, it should not increase compile time visibly.

llvm-svn: 225159
2015-01-05 10:08:58 +00:00
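A heavily simplified model of the new collection loop (hypothetical types; the real code lives in the vectorizer's AccessAnalysis, and these names are illustrative only):

#include <vector>

struct MemAccess {
  const void *Ptr; // stands in for the underlying Value*
  bool IsWrite;
};

// Collect every entry for this pointer -- the read and the write alike --
// so RAW/WAR dependences over the same location are not dropped.
std::vector<MemAccess> collectEntries(const std::vector<MemAccess> &Accesses,
                                      const void *Ptr) {
  std::vector<MemAccess> Result;
  for (const MemAccess &A : Accesses) {
    if (A.Ptr != Ptr)
      continue; // cheap early-out on pointer identity
    Result.push_back(A);
  }
  return Result;
}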
Craig Topper dc2fc8035d [X86] Remove the predicates from the register forms of the 2-byte inc and dec instructions. Remove the 32-bit mode only versions that existed for the disassembler. Move the patterns out of the instructions so they can still be qualified with predicates.
llvm-svn: 225157
2015-01-05 08:19:12 +00:00
Craig Topper 3dcdde2e92 [X86] Simplify code a little by just summing flags instead of conditionally incrementing. NFC
llvm-svn: 225156
2015-01-05 08:19:10 +00:00
Craig Topper 859677edef [X86] Remove unnecessary redeclaration of a variable with the same assignment as the beginning of the function. NFC.
llvm-svn: 225155
2015-01-05 08:19:07 +00:00
Craig Topper 9cf67c6f57 [X86] Remove a strange fixme referring to a hack that doesn't seem to exist since the code is in a comment. Can't figure out what the body of the 'if' was supposed to be anyway.
llvm-svn: 225154
2015-01-05 08:19:05 +00:00
Craig Topper e9e081cd29 [x86] Reduce text duplication for similar operand class declarations in tablegen instruction info. No functional change intended.
llvm-svn: 225153
2015-01-05 08:19:03 +00:00
Craig Topper 21f984966d [X86] Fix the immediate size to match the address size in the operand types for the move to/from absolute memory instructions.
llvm-svn: 225152
2015-01-05 08:18:59 +00:00
Hal Finkel 9bb61de1be [PowerPC] Enable speculation of cttz/ctlz
PPC has an instruction for ctlz with defined zero behavior, and our lowering of
cttz (provided by DAGCombine) is also efficient and branchless, so speculating
these makes sense.

llvm-svn: 225150
2015-01-05 05:24:42 +00:00
Chandler Carruth 73b0164fe5 [SROA] Apply a somewhat heavy and unpleasant hammer to fix PR22093, an
assert out of the new pre-splitting in SROA.

This fix makes the code do what was originally intended -- when we have
a store of a load both dealing in the same alloca, we force them to both
be pre-split with identical offsets. This is really quite hard to do
because we can keep discovering problems as we go along. We have to
track every load over the current alloca which for any reason becomes
invalid for pre-splitting, and go back to remove all stores of those
loads. I've included a couple of test cases derived from PR22093 that
cover the different ways this can happen. While that PR only really
triggered the first of these two, it's the same fundamental issue.

The other challenge here is documented in a FIXME now. We end up being
quite a bit more aggressive for pre-splitting when loads and stores
don't refer to the same alloca. This aggressiveness comes at the cost of
introducing potentially redundant loads. It isn't clear that this is the
right balance. It might be considerably better to require that we only
do pre-splitting when we can presplit every load and store involved in
the entire operation. That would give more consistent if conservative
results. Unfortunately, it requires a non-trivial change to the actual
pre-splitting operation in order to correctly handle cases where we end
up pre-splitting stores out-of-order. And it isn't 100% clear that this
is the right direction, although I'm starting to suspect that it is.

llvm-svn: 225149
2015-01-05 04:17:53 +00:00
Hal Finkel 2f61879ff4 [PowerPC] Materialize i64 constants using rotation with masking
r225135 added the ability to materialize i64 constants using rotations in order
to reduce the instruction count. Sometimes we can use a rotation only with some
extra masking, so that we take advantage of the fact that generating a bunch of
extra higher-order 1 bits is easy using li/lis.

llvm-svn: 225147
2015-01-05 03:41:38 +00:00
Chandler Carruth d174ce4ad1 [PM] Switch the new pass manager to use a reference-based API for IR
units.

This was debated back and forth a bunch, but using references is now
clearly cleaner. Of all the code written using pointers thus far, in
only one place did it really make more sense to have a pointer. In most
cases, this just removes immediate dereferencing from the code. I think
it is much better to get errors on null IR units earlier, potentially
at compile time, than to delay it.

Most notably, the legacy pass manager uses references for its routines
and so as more and more code works with both, the use of pointers was
likely to become really annoying. I noticed this when I ported the
domtree analysis over and wrote the entire thing with references only to
have it fail to compile. =/ It seemed better to switch now than to
delay. We can, of course, revisit this if we learn that references are
really problematic in the API.

llvm-svn: 225145
2015-01-05 02:47:05 +00:00
Chandler Carruth 75c11b88af [PM] Cleanup a const_cast and other machinery left over in this code
from before I removed the non-const use of the function.

The unused variable that held the const_cast was already kindly removed
by Michael.

llvm-svn: 225143
2015-01-04 23:13:57 +00:00
Hal Finkel 241ba79f95 [PowerPC] Materialize i64 constants using rotation
Materializing full 64-bit constants on PPC64 can be expensive, requiring up to
5 instructions depending on the locations of the non-zero bits. Sometimes
materializing a rotated constant, and then applying the inverse rotation, requires
fewer instructions than the direct method. If so, do that instead.

In r225132, I added support for forming constants using bit inversion. In
effect, this reverts that commit and replaces it with rotation support. The bit
inversion is useful for turning constants that are mostly ones into ones that
are mostly zeros (thus enabling a more-efficient shift-based materialization),
but the same effect can be obtained by using negative constants and a rotate,
and that is at least as efficient, if not more.

llvm-svn: 225135
2015-01-04 15:43:55 +00:00
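A hypothetical sketch of the idea behind the rotation commit above (the cost model and names are illustrative, not the actual PPCISelDAGToDAG logic): if some rotation of the constant has a cheaper halfword pattern, build the rotated value and undo the rotation with one extra instruction at the end.

#include <cstdint>

static unsigned halfwordCost(uint64_t Imm) {
  // Rough stand-in for the number of li/lis/ori/oris steps needed directly.
  unsigned Cost = 0;
  for (unsigned Shift = 0; Shift < 64; Shift += 16)
    if ((Imm >> Shift) & 0xFFFF)
      ++Cost;
  return Cost ? Cost : 1;
}

static uint64_t rotl64(uint64_t V, unsigned R) {
  return R ? (V << R) | (V >> (64 - R)) : V;
}

// Returns the left-rotation whose materialization (rotated constant plus one
// rotate-back instruction) beats the direct method, or 0 if none does.
unsigned bestRotation(uint64_t Imm) {
  unsigned Best = 0, BestCost = halfwordCost(Imm);
  for (unsigned R = 1; R < 64; ++R) {
    unsigned Cost = halfwordCost(rotl64(Imm, R)) + 1; // +1 for the rotate-back
    if (Cost < BestCost) {
      BestCost = Cost;
      Best = R;
    }
  }
  return Best;
}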
Michael Kuperstein 58c3f6cc31 Fix unused variable warning for non-asserts builds. NFC.
llvm-svn: 225133
2015-01-04 13:35:44 +00:00
Hal Finkel ca6375fb75 [PowerPC] Materialize i64 constants using bit inversion
Materializing full 64-bit constants on PPC64 can be expensive, requiring up to
5 instructions depending on the locations of the non-zero bits. Sometimes
materializing the bit-reversed constant, and then flipping the bits, requires
fewer instructions than the direct method. If so, do that instead.

llvm-svn: 225132
2015-01-04 12:35:03 +00:00
Chandler Carruth 66b3130cda [PM] Split the AssumptionTracker immutable pass into two separate APIs:
a cache of assumptions for a single function, and an immutable pass that
manages those caches.

The motivation for this change is two fold. Immutable analyses are
really hacks around the current pass manager design and don't exist in
the new design. This is usually OK, but it requires that the core logic
of an immutable pass be reasonably partitioned off from the pass logic.
This change does precisely that. As a consequence it also paves the way
for the *many* utility functions that deal in the assumptions to live in
both pass manager worlds by creating a separate non-pass object with
its own independent API that they all rely on. Now, the only bits of the
system that deal with the actual pass mechanics are those that actually
need to deal with the pass mechanics.

Once this separation is made, several simplifications become pretty
obvious in the assumption cache itself. Rather than using a set and
callback value handles, it can just be a vector of weak value handles.
The callers can easily skip the handles that are null, and eventually we
can wrap all of this up behind a filter iterator.

For now, this adds boilerplate to the various passes, but this kind of
boilerplate will end up making it possible to port these passes to the
new pass manager, and so it will end up factored away pretty reasonably.

llvm-svn: 225131
2015-01-04 12:03:27 +00:00
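A simplified sketch of the cache's new shape (illustrative only, not the exact AssumptionCache interface): a plain vector of weak handles per function, with callers skipping entries that have gone null.

#include "llvm/IR/ValueHandle.h"
#include <vector>

struct AssumptionCacheSketch {
  // Weak handles to the @llvm.assume calls of a single function; handles for
  // deleted calls simply decay to null, so no callback plumbing is needed.
  std::vector<llvm::WeakVH> Assumptions;

  template <typename Fn> void forEachLiveAssumption(Fn Callback) {
    for (llvm::WeakVH &VH : Assumptions)
      if (llvm::Value *V = VH) // skip handles that have gone null
        Callback(V);
  }
};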
David Majnemer 087dc8b831 InstCombine: match can find ConstantExprs, don't assume we have a Value
We assumed the output of a match was a Value; this would cause us to
assert because we would fail a cast<>.  Instead, use a helper in the
Operator family to hide the distinction between Value and Constant.

This fixes PR22087.

llvm-svn: 225127
2015-01-04 07:36:02 +00:00
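A hedged sketch of the approach (assumed shape only; isMulOp is a made-up helper): go through the Operator class, which covers both Instructions and ConstantExprs, instead of casting the matched value to an Instruction.

#include "llvm/IR/Operator.h"

// Works whether V is an Instruction or a ConstantExpr, so a match that binds
// to a constant expression no longer trips a failed cast<>.
bool isMulOp(const llvm::Value *V) {
  if (const auto *Op = llvm::dyn_cast<llvm::Operator>(V))
    return Op->getOpcode() == llvm::Instruction::Mul;
  return false;
}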
David Majnemer 6ee8d17bc6 ValueTracking: ComputeNumSignBits should tolerate misshapen phi nodes
PHI nodes can have zero operands in the middle of a transform.  It is
expected that utilities in Analysis don't freak out when this happens.

Note that it is considered invalid to allow these misshapen phi nodes to
make it to another pass.

This fixes PR22086.

llvm-svn: 225126
2015-01-04 07:06:53 +00:00
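A hedged sketch of the defensive check (assumed shape, not the exact ValueTracking diff; numSignBitsOfPHI and its callback parameter are illustrative):

#include <algorithm>
#include "llvm/IR/Instructions.h"

// Tolerate a PHI that currently has no incoming values: answer conservatively
// rather than touching operands that are not there.
unsigned numSignBitsOfPHI(const llvm::PHINode *PN, unsigned BitWidth,
                          unsigned (*ComputeForValue)(const llvm::Value *)) {
  if (PN->getNumIncomingValues() == 0)
    return 1; // at least one sign bit is always known; claim nothing more
  unsigned Result = BitWidth;
  for (unsigned I = 0, E = PN->getNumIncomingValues(); I != E; ++I)
    Result = std::min(Result, ComputeForValue(PN->getIncomingValue(I)));
  return Result;
}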
Lang Hames 12b12e800b [APFloat][ADT] Fix sign handling logic for FMA results that truncate to zero.
This patch adds a check for underflow when truncating results back to lower
precision at the end of an FMA. The additional sign handling logic in
APFloat::fusedMultiplyAdd should only be performed when the result of the
addition step of the FMA (in full precision) is exactly zero, not when the
result underflows to zero.

Unit tests for this case and related signed zero FMA results are included.

Fixes <rdar://problem/18925551>.

llvm-svn: 225123
2015-01-04 01:20:55 +00:00