Commit Graph

522 Commits

Author SHA1 Message Date
Arnold Schwaighofer 833a82ecde BasicAA: Use reachability instead of dominance for checking value equality in phi
cycles

This allows the value equality check to work even if we don't have a dominator
tree. Also add some more comments.

I was worried about compile time impacts and did not implement reachability but
used the dominance check in the initial patch. The trade-off was that the
dominator tree was required.
The llvm utility function isPotentiallyReachable cuts off the recursive search
after 32 visits. Testing did not show any compile time regressions, suggesting
my worries were unjustified.

No compile time or performance regressions at O3 -flto -mavx on test-suite +
externals.

Addresses review comments from r198290.

llvm-svn: 198400
2014-01-03 05:47:03 +00:00
Arnold Schwaighofer 0d10a9d579 BasicAA: Fix value equality and phi cycles
When there are cycles in the value graph we have to be careful interpreting
"Value*" identity as "value" equivalence. We interpret the value of a phi node
as the value of its operands.
When we check for value equivalence now we make sure that the "Value*" dominates
all cycles (phis).

%0 = phi [%noaliasval, %addr2]
%l = load %ptr
%addr1 = gep @a, 0, %l
%addr2 = gep @a, 0, (%l + 1)
store %ptr ...

Before this patch we would return NoAlias for (%0, %addr1) which is wrong
because the value of the load is from different iterations of the loop.
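
A sketch of the dominance-based guard, with illustrative names (not the
actual BasicAA code):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/IR/Dominators.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  // Treat two uses of the same Value* as the same runtime value only if
  // it dominates every phi on the cycles we walked through, so it cannot
  // take different values on different trips around a cycle.
  bool valueEqualAcrossPhiCycles(const Value *V,
                                 ArrayRef<const PHINode *> VisitedPhis,
                                 const DominatorTree &DT) {
    const Instruction *I = dyn_cast<Instruction>(V);
    if (!I)
      return true;  // constants and arguments have one value everywhere
    for (const PHINode *PN : VisitedPhis)
      if (!DT.dominates(I, PN))
        return false;
    return true;
  }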

Tested on x86_64 -mavx at O3 and O3 -flto with no performance or compile time
regressions.

PR18068
radar://15653794

llvm-svn: 198290
2014-01-02 03:31:36 +00:00
Matt Arsenault a8fe22baba Use correct size for address space in BasicAA.
The tests just hit this with a different-sized
address space since I haven't figured out how
to use this to break it.

I thought I committed this a long time ago,
and I'm not sure why missing this hasn't caused
any problems.

llvm-svn: 194903
2013-11-16 00:36:43 +00:00
Sebastian Pop a1cc34b981 improve dependence analysis testcases
Print the name of the function on which the dependence analysis is performed
so that changes to the testcase are easier to review.

llvm-svn: 194528
2013-11-12 22:47:30 +00:00
Sebastian Pop c62c679c1b delinearization of arrays
llvm-svn: 194527
2013-11-12 22:47:20 +00:00
Andrew Trick 34e2f0c4ea Rewrite SCEV's backedge taken count computation.
Patch by Michele Scandale!

Rewrite of the functions used to compute the backedge taken count of a
loop on LT and GT comparisons.

I decided to split the handling of the LT and GT cases because the trick
"a > b == -a < -b" in some cases prevents the trip count computation
due to the multiplication by -1 on the two operands of the
comparison. This issue comes from the conservative computation of the
value range of SCEVs: taking the negative SCEV of an expression that
has a small positive range (e.g. [0,31]), we would get a SCEV with the
full set as its value range.

In the rewritten functions I also tried to better handle the
maximum backedge taken count computation when MAX/MIN expressions are
used to handle the cases where no entry guard is found.

Some tests have been modified in order to check the new values correctly
(I checked them manually and, reasoning about possible overflow, the new
values seem correct).

I finally added a new test case related to the multiplication by -1
issue on GT comparisons.

llvm-svn: 194116
2013-11-06 02:08:26 +00:00
Hal Finkel 4d94930bcb Consider (x == -1) unlikely in BranchProbabilityInfo
This adds another heuristic to BPI, similar to the existing heuristic that
considers (x == 0) unlikely to be true. As suggested in the PACT'98 paper by
Deitrich, Cheng, and Hwu, -1 is often used to indicate an invalid index, and
equality comparisons with -1 are also unlikely to succeed. Local
experimentation supports this hypothesis: This yields a 1-2% speedup in the
test-suite sqlite benchmark on the PPC A2 core, with no significant
regressions.
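
A source-level illustration of the pattern the heuristic targets (the
functions are invented for the example):

  // -1 is the conventional "invalid index" sentinel, so the (Idx == -1)
  // branch is now predicted not-taken.
  int findIndex(const int *A, int N, int Key) {
    for (int I = 0; I < N; ++I)
      if (A[I] == Key)
        return I;
    return -1;
  }

  bool contains(const int *A, int N, int Key) {
    int Idx = findIndex(A, N, Key);
    if (Idx == -1)  // treated as unlikely by the new heuristic
      return false;
    return true;
  }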

llvm-svn: 193855
2013-11-01 10:58:22 +00:00
Benjamin Kramer 6094f30da2 SCEV: Make the final add of an inbounds GEP nuw if we know that the index is positive.
We can't do this in the general case, as claiming that a GEP has no unsigned
wrap isn't valid when the index is negative:
  %gep = getelementptr inbounds i32* %p, i64 -1

But an inbounds GEP cannot run past the end of address space. So we check for
the very common case of a positive index and make GEPs derived from that NUW.
Together with Andy's recent non-unit stride work this lets us analyze loops
like

  void foo3(int *a, int *b) {
    for (; a < b; a++) {}
  }

PR12375, PR12376.

Differential Revision: http://llvm-reviews.chandlerc.com/D2033

llvm-svn: 193514
2013-10-28 07:30:06 +00:00
Shuxin Yang 2e1890e18b Revert r193251 : Use address-taken to disambiguate global variable and indirect memops.
llvm-svn: 193489
2013-10-27 03:08:44 +00:00
Benjamin Kramer 0ccab2d66c X86: Custom lower sext v16i8 to v16i16, and the corresponding truncate.
Also update the cost model.

llvm-svn: 193270
2013-10-23 21:06:07 +00:00
Shuxin Yang e4fb375995 Use address-taken to disambiguate global variable and indirect memops.
Major steps include:
 1). Introduces a not-addr-taken bit-field in GlobalVariable.
 2). GlobalOpt pass sets "not-address-taken" if it proves a global variable
    doesn't have its address taken.
 3). AA uses this info for disambiguation.

llvm-svn: 193251
2013-10-23 17:28:19 +00:00
Manman Ren a1ce9891ac Simplify testing case (Thanks Rafael for the testing case).
llvm-svn: 193177
2013-10-22 18:15:50 +00:00
Manman Ren f3850c0807 TBAA: fix PR17620.
We can have a struct type with a single field where the field does not start
at offset 0. In that case, we should correctly update the offset.

llvm-svn: 193137
2013-10-22 01:40:25 +00:00
Matt Arsenault be18b8a3ca Fix creating bitcasts between address spaces in SCEV.
The test before wasn't successfully testing this
since it was missing the datalayout piece to change
the size of the second address space.

llvm-svn: 193102
2013-10-21 18:41:10 +00:00
Andrew Trick 768b917dc8 SCEV should use NSW to get trip count for positive nonunit stride loops.
SCEV currently fails to compute loop counts for nonunit stride
loops. This comes up frequently. It prevents loop optimization and
forces vectorization to insert extra loop checks.

For example:
void foo(int n, int *x) {
 for (int i = 0; i < n; i += 3) {
   x[i] = i;
   x[i+1] = i+1;
   x[i+2] = i+2;
 }
}

We need to properly handle the case in which limit > INT_MAX-stride. In
the above case: n > INT_MAX-3. In this case the loop counter will step
beyond the limit and overflow at the same time. However, knowing that
signed integer overlow in undefined, we can assume the loop test
behavior is arbitrary after overflow. This obeys both C undefined
behavior rules, and the more strict LLVM poison value rules.
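
As a sketch (the helper and the closed form are illustrative, not part of
the patch), the backedge-taken count this enables for "for (int i = 0;
i < n; i += s)" with a positive constant stride s is:

  #include <cstdint>

  // With nsw ruling out the wrap-around case n > INT_MAX - s, the
  // backedge runs floor((n - 1) / s) times for n > 0 and never otherwise.
  int64_t backedgeTakenCount(int64_t N, int64_t S) {
    return N > 0 ? (N - 1) / S : 0;
  }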

I'm finally fixing this in response to Hal Finkel's persistence.
The most probable reason that we never optimized this before is that
we were being careful to handle the case where the developer expected a
side-effect free infinite loop relying on overflow:

for (int i = 0; i < n; i += s) {
  ++j;
}
return j;

If INT_MAX+1 is a multiple of s and n > INT_MAX-s, then we might
expect an infinite loop. However there are plenty of ways to achieve
this effect without relying on undefined behavior of signed overflow.

llvm-svn: 193015
2013-10-18 23:43:53 +00:00
Chandler Carruth ea56494625 Remove the very substantial, largely unmaintained legacy PGO
infrastructure.

This was essentially work toward PGO based on a design that had several
flaws, partially dating from a time when LLVM had a different
architecture, and with an effort to modernize it abandoned without being
completed. Since then, it has bitrotted for several years further. The
result is nearly unusable, and isn't helping any of the modern PGO
efforts. Instead, it is getting in the way, adding confusion about PGO
in LLVM and distracting everyone with maintenance on essentially dead
code. Removing it paves the way for modern efforts around PGO.

Among other effects, this removes the last of the runtime libraries from
LLVM. Those are being developed in the separate 'compiler-rt' project
now, with somewhat different licensing specifically more appropriate for
runtimes.

llvm-svn: 191835
2013-10-02 15:42:23 +00:00
Matt Arsenault 21981a1a0d Use CHECK-LABEL
llvm-svn: 191713
2013-09-30 23:31:55 +00:00
Manman Ren 0ed04fc9ab TBAA: handle scalar TBAA format and struct-path aware TBAA format.
Remove the command line argument "struct-path-tbaa" since we should not depend
on a command line argument to decide which format the IR file is using. Instead,
we check the first operand of the tbaa tag node: if it is an MDNode, we treat
it as the struct-path aware TBAA format; otherwise, we treat it as the scalar
TBAA format.
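
A sketch of that dispatch (the helper name is illustrative):

  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  // A struct-path tag's first operand is itself an MDNode (the base type
  // node); a scalar-format tag starts with a string instead.
  bool isStructPathTBAATag(const MDNode *Tag) {
    return Tag->getNumOperands() > 0 && isa<MDNode>(Tag->getOperand(0));
  }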

When clang starts to use the struct-path aware TBAA format unconditionally (no
matter whether struct-path-tbaa is on), and we can auto-upgrade existing bc
files, the support for the scalar TBAA format can be dropped.

Existing testing cases are updated to use the struct-path aware TBAA format.

llvm-svn: 191538
2013-09-27 18:34:27 +00:00
Yi Jiang 5c343de8d3 X86 horizontal vector reduction cost model
llvm-svn: 191021
2013-09-19 17:48:48 +00:00
Arnold Schwaighofer cae8735a54 Costmodel: Add support for horizontal vector reductions
Upcoming SLP vectorization improvements will want to be able to estimate costs
of horizontal reductions. Add infrastructure to support this.

We model reductions as a series of (shufflevector,add) tuples ultimately
followed by an extractelement. For example, for an add-reduction of <4 x float>
we could generate the following sequence:

 (v0, v1, v2, v3)
   \   \  /  /
     \  \  /
       +  +

 (v0+v2, v1+v3, undef, undef)
    \      /
 ((v0+v2) + (v1+v3), undef, undef)

 %rdx.shuf = shufflevector <4 x float> %rdx, <4 x float> undef,
                           <4 x i32> <i32 2, i32 3, i32 undef, i32 undef>
 %bin.rdx = fadd <4 x float> %rdx, %rdx.shuf
 %rdx.shuf7 = shufflevector <4 x float> %bin.rdx, <4 x float> undef,
                          <4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
 %bin.rdx8 = fadd <4 x float> %bin.rdx, %rdx.shuf7
 %r = extractelement <4 x float> %bin.rdx8, i32 0

This commit adds a cost model interface "getReductionCost(Opcode, Ty, Pairwise)"
that will allow clients to ask for the cost of such a reduction (as backends
might generate more efficient code than the cost of the individual instructions
summed up). This interface is exercised by the CostModel analysis pass which
looks for reduction patterns like the one above - starting at extractelements -
and if it sees a matching sequence will call the cost model interface.

We will also support a second form of pairwise reduction that is well supported
on common architectures (haddps, vpadd, faddp).

 (v0, v1, v2, v3)
  \   /    \  /
 (v0+v1, v2+v3, undef, undef)
    \     /
 ((v0+v1)+(v2+v3), undef, undef, undef)

  %rdx.shuf.0.0 = shufflevector <4 x float> %rdx, <4 x float> undef,
        <4 x i32> <i32 0, i32 2 , i32 undef, i32 undef>
  %rdx.shuf.0.1 = shufflevector <4 x float> %rdx, <4 x float> undef,
        <4 x i32> <i32 1, i32 3, i32 undef, i32 undef>
  %bin.rdx.0 = fadd <4 x float> %rdx.shuf.0.0, %rdx.shuf.0.1
  %rdx.shuf.1.0 = shufflevector <4 x float> %bin.rdx.0, <4 x float> undef,
        <4 x i32> <i32 0, i32 undef, i32 undef, i32 undef>
  %rdx.shuf.1.1 = shufflevector <4 x float> %bin.rdx.0, <4 x float> undef,
        <4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
  %bin.rdx.1 = fadd <4 x float> %rdx.shuf.1.0, %rdx.shuf.1.1
  %r = extractelement <4 x float> %bin.rdx.1, i32 0
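
A usage sketch of that interface (the wrapper name is illustrative, and the
exact parameter types are assumptions patterned on other TTI hooks of this
period):

  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  // Ask the target how expensive an fadd reduction of VecTy is, in either
  // the shuffle-halves form or the pairwise form shown above.
  unsigned fAddReductionCost(const TargetTransformInfo &TTI, Type *VecTy,
                             bool Pairwise) {
    return TTI.getReductionCost(Instruction::FAdd, VecTy, Pairwise);
  }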

llvm-svn: 190876
2013-09-17 18:06:50 +00:00
Matt Arsenault a90a18e0ea Teach ScalarEvolution about pointer address spaces
llvm-svn: 190425
2013-09-10 19:55:24 +00:00
Matt Arsenault 5faa669b66 Fix lint assert on integer vector division
llvm-svn: 189290
2013-08-26 23:29:33 +00:00
Bill Wendling a08bb49c61 FileCheck-ize tests.
llvm-svn: 188971
2013-08-22 00:51:19 +00:00
Daniel Dunbar 9efbedfd35 [tests] Cleanup initialization of test suffixes.
- Instead of setting the suffixes in a bunch of places, just set one master
   list in the top-level config. We now only modify the suffix list in a few
   suites that have one particular unique suffix (.ml, .mc, .yaml, .td, .py).

 - Aside from removing the need for a bunch of lit.local.cfg files, this enables
   4 tests that were inadvertently being skipped (one in
   Transforms/BranchFolding, a .s file each in DebugInfo/AArch64 and
   CodeGen/PowerPC, and one in CodeGen/SI which is now failing and has been
   XFAILED).

 - This commit also fixes a bunch of config files to use config.root instead of
   older copy-pasted code.

llvm-svn: 188513
2013-08-16 00:37:11 +00:00
Bill Wendling dc17270968 FileCheckize some of the testcases.
llvm-svn: 187756
2013-08-05 23:43:18 +00:00
Renato Golin 0178a25fc5 Fixes ARM LNT bot from SLP change in O3
This patch fixes the multiple breakages on the ARM test-suite after the SLP
vectorizer was enabled by default at O3. The problem was an illegal vector
type, <3 x i1>, reaching ARMTTI::getCmpSelInstrCost(); that type is not simple.

The guard protects this code from breaking (the cause of the problems) but
doesn't fix the issue that is generating the odd vector in the first
place, which also needs to be investigated.

llvm-svn: 187658
2013-08-02 17:10:04 +00:00
Stephen Lin 6dd347b39f Add newlines at end of test files, no functionality change
llvm-svn: 186263
2013-07-13 22:00:58 +00:00
Hal Finkel ec474f28e3 Add the nearbyint -> FNEARBYINT mapping to BasicTargetTransformInfo
This fixes an oversight that Intrinsic::nearbyint was not being mapped to
ISD::FNEARBYINT (thus fixing the over-optimistic cost we were assigning to
nearbyint calls for some targets).

llvm-svn: 185783
2013-07-08 03:24:07 +00:00
Nick Lewycky c2ec0725ce Extend 'readonly' and 'readnone' to work on function arguments as well as
functions. Make the function attributes pass add them to known library functions
and wherever it can deduce them.

llvm-svn: 185735
2013-07-06 00:29:58 +00:00
Jakob Stoklund Olesen 0b075103cd Minimize precision loss when computing cyclic probabilities.
Allow block frequencies to exceed 32 bits by using the new
BlockFrequency division function.

llvm-svn: 185236
2013-06-28 22:40:43 +00:00
Preston Briggs 6c286b6029 (no commit message)
llvm-svn: 185187
2013-06-28 18:44:48 +00:00
Nadav Rotem f9ecbcb835 CostModel: improve the cost model for load/store of non power-of-two types such as <3 x float>, which are popular in graphics.
llvm-svn: 185085
2013-06-27 17:52:04 +00:00
Jakob Stoklund Olesen 6e630d46d2 Print block frequencies in decimal form.
This is easier to read than the internal fixed-point representation.

If anybody knows the correct algorithm for converting fixed-point
numbers to base 10, feel free to fix it.
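
One standard approach, sketched here under the assumption of a plain binary
fixed-point layout with FracBits fractional bits (not necessarily
BlockFrequency's actual representation):

  #include <cstdint>
  #include <cstdio>

  // Print the integer part, then generate decimal digits by repeatedly
  // multiplying the fractional part by 10. Assumes FracBits <= 60 so
  // Frac * 10 cannot overflow 64 bits.
  void printFixedPoint(uint64_t Fixed, unsigned FracBits, unsigned Digits) {
    uint64_t Mask = (1ULL << FracBits) - 1;
    printf("%llu.", (unsigned long long)(Fixed >> FracBits));
    uint64_t Frac = Fixed & Mask;
    for (unsigned I = 0; I < Digits; ++I) {
      Frac *= 10;
      printf("%llu", (unsigned long long)(Frac >> FracBits));
      Frac &= Mask;
    }
    printf("\n");
  }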

llvm-svn: 184881
2013-06-25 21:57:38 +00:00
Arnold Schwaighofer a04b9ef1e8 X86 cost model: Vectorizing integer division is a bad idea
radar://14057959

llvm-svn: 184872
2013-06-25 19:14:09 +00:00
Benjamin Kramer 866793109e BlockFrequency: Bump up the entry frequency a bit.
This is a band-aid to fix the most severe regressions we're seeing from basing
spill decisions on block frequencies, until we have a better solution.

llvm-svn: 184835
2013-06-25 13:34:40 +00:00
Benjamin Kramer bfb84d0bd6 Revert "BlockFrequency: Saturate at 1 instead of 0 when multiplying a frequency with a branch probability."
This reverts commit r184584. Breaks PPC selfhost.

llvm-svn: 184590
2013-06-21 20:20:27 +00:00
Benjamin Kramer bd0f107929 BlockFrequency: Saturate at 1 instead of 0 when multiplying a frequency with a branch probability.
Zero is used by BlockFrequencyInfo as a special "don't know" value. It is also
a sink for frequencies, as you can't ever get off a zero frequency with
more multiplies.
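
A sketch of the saturation idea (names and the overflow-naive arithmetic are
simplifications, not the real BlockFrequency code):

  #include <cstdint>

  // Scale Freq by Num/Den, but never let a nonzero frequency collapse to
  // 0, which is both the "don't know" value and an absorbing state.
  uint64_t scaleFreq(uint64_t Freq, uint32_t Num, uint32_t Den) {
    if (Freq == 0)
      return 0;                          // a genuine unknown stays 0
    uint64_t Scaled = Freq * Num / Den;  // naive; real code widens first
    return Scaled ? Scaled : 1;          // saturate at 1 instead of 0
  }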

This recovers a 10% regression on MultiSource/Benchmarks/7zip. A zero frequency
was propagated into an inner loop causing excessive spilling.

PR16402.

llvm-svn: 184584
2013-06-21 19:30:05 +00:00
Andrew Trick 3c944ba2f0 Unit test for SCEV fix r182989, PR16130.
llvm-svn: 183017
2013-05-31 16:42:41 +00:00
Michael Kuperstein f3e663af39 Make BasicAliasAnalysis recognize the fact that a noalias argument cannot alias another argument, even if the other argument is not itself marked noalias.
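
A sketch of the rule (illustrative helper, not the actual BasicAliasAnalysis
code; it ignores the case where one pointer is based on the other):

  #include "llvm/IR/Argument.h"
  using namespace llvm;

  // Two distinct arguments cannot alias if either carries the noalias
  // attribute; only when neither does must we stay conservative.
  bool distinctArgsMayAlias(const Argument *A, const Argument *B) {
    if (A == B)
      return true;
    if (A->hasNoAliasAttr() || B->hasNoAliasAttr())
      return false;  // NoAlias even though the other lacks noalias
    return true;     // conservatively MayAlias
  }
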
llvm-svn: 182755
2013-05-28 08:17:48 +00:00
Diego Novillo c63995394d Add a new function attribute 'cold' to functions.
Other than recognizing the attribute, the patch does little else.
It changes the branch probability analyzer so that edges into
blocks postdominated by a cold function are given low weight.
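
A source-level illustration of the effect (function names invented; the
attribute spelling is the GCC/Clang one that lowers to LLVM's 'cold'):

  __attribute__((cold)) void reportFailure(const char *Msg);

  int process(int X) {
    if (X < 0) {                        // edge into a block that is
      reportFailure("negative input");  // postdominated by a cold call,
      return -1;                        // so it now gets a low weight
    }
    return X * 2;
  }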

Added analysis and code generation tests.  Added documentation for the
new attribute.

llvm-svn: 182638
2013-05-24 12:26:52 +00:00
Tim Northover 7dbbc28f72 AArch64: use MCJIT by default and enable related tests.
This just enables some testing I'd missed after implementing MCJIT
support.

llvm-svn: 181215
2013-05-06 16:51:08 +00:00
Matt Arsenault c23753a53e Fix unchecked uses of DominatorTree in MemoryDependenceAnalysis.
Use unknown results in places where they would be needed

llvm-svn: 181176
2013-05-06 02:07:24 +00:00
Tobias Grosser a7ddc98206 RegionInfo: Do not crash if unreachable block is found
llvm-svn: 181025
2013-05-03 15:48:34 +00:00
Manman Ren 662ece49de TBAA: remove !tbaa from testing cases if not used.
This will make it easier to turn on struct-path aware TBAA since the metadata
format will change.

llvm-svn: 180743
2013-04-29 22:42:01 +00:00
Manman Ren 5c37106d65 Struct-path aware TBAA: change the format of TBAAStructType node.
We switch the order of offset and field type to make TBAAStructType node
(name, parent node, offset) similar to scalar TBAA node (name, parent node).
TypeIsImmutable is added to TBAAStructTag node.

llvm-svn: 180654
2013-04-27 00:26:11 +00:00
Arnold Schwaighofer 9881dcf2f2 ARM cost model: Integer div and rem is lowered to a function call
Reflect this in the cost model. I observed this in MiBench/consumer-lame.

radar://13354716

llvm-svn: 180576
2013-04-25 21:16:18 +00:00
Jim Grosbach 563983c8a3 Legalize vector truncates by parts rather than just splitting.
Rather than just splitting the input type and hoping for the best, apply
a bit more cleverness. Just splitting the types until the source is
legal often leads to an illegal result type, which is then widened and a
scalarization step is introduced which leads to truly horrible code
generation. With the loop vectorizer, these sorts of operations are much
more common, and so it's worth extra effort to do them well.

Add a legalization hook for the operands of a TRUNCATE node, which will
be encountered after the result type has been legalized, but if the
operand type is still illegal. If simple splitting of both types
ends up with the result type of each half still being legal, just
do that (v16i16 -> v16i8 on ARM, for example). If, however, that would
result in an illegal result type (v8i32 -> v8i8 on ARM, for example),
we can get more clever with power-two vectors. Specifically,
split the input type, but also widen the result element size, then
concatenate the halves and truncate again. For example, on ARM, to
perform a "%res = v8i8 trunc v8i32 %in" we transform to:
  %inlo = v4i32 extract_subvector %in, 0
  %inhi = v4i32 extract_subvector %in, 4
  %lo16 = v4i16 trunc v4i32 %inlo
  %hi16 = v4i16 trunc v4i32 %inhi
  %in16 = v8i16 concat_vectors v4i16 %lo16, v4i16 %hi16
  %res = v8i8 trunc v8i16 %in16

This allows instruction selection to generate three VMOVN instructions
instead of a sequences of moves, stores and loads.

Update the ARMTargetTransformInfo to take this improved legalization
into account.

Consider the simplified IR:

define <16 x i8> @test1(<16 x i32>* %ap) {
  %a = load <16 x i32>* %ap
  %tmp = trunc <16 x i32> %a to <16 x i8>
  ret <16 x i8> %tmp
}

define <8 x i8> @test2(<8 x i32>* %ap) {
  %a = load <8 x i32>* %ap
  %tmp = trunc <8 x i32> %a to <8 x i8>
  ret <8 x i8> %tmp
}

Previously, we would generate the truly hideous:
	.syntax unified
	.section	__TEXT,__text,regular,pure_instructions
	.globl	_test1
	.align	2
_test1:                                 @ @test1
@ BB#0:
	push	{r7}
	mov	r7, sp
	sub	sp, sp, #20
	bic	sp, sp, #7
	add	r1, r0, #48
	add	r2, r0, #32
	vld1.64	{d24, d25}, [r0:128]
	vld1.64	{d16, d17}, [r1:128]
	vld1.64	{d18, d19}, [r2:128]
	add	r1, r0, #16
	vmovn.i32	d22, q8
	vld1.64	{d16, d17}, [r1:128]
	vmovn.i32	d20, q9
	vmovn.i32	d18, q12
	vmov.u16	r0, d22[3]
	strb	r0, [sp, #15]
	vmov.u16	r0, d22[2]
	strb	r0, [sp, #14]
	vmov.u16	r0, d22[1]
	strb	r0, [sp, #13]
	vmov.u16	r0, d22[0]
	vmovn.i32	d16, q8
	strb	r0, [sp, #12]
	vmov.u16	r0, d20[3]
	strb	r0, [sp, #11]
	vmov.u16	r0, d20[2]
	strb	r0, [sp, #10]
	vmov.u16	r0, d20[1]
	strb	r0, [sp, #9]
	vmov.u16	r0, d20[0]
	strb	r0, [sp, #8]
	vmov.u16	r0, d18[3]
	strb	r0, [sp, #3]
	vmov.u16	r0, d18[2]
	strb	r0, [sp, #2]
	vmov.u16	r0, d18[1]
	strb	r0, [sp, #1]
	vmov.u16	r0, d18[0]
	strb	r0, [sp]
	vmov.u16	r0, d16[3]
	strb	r0, [sp, #7]
	vmov.u16	r0, d16[2]
	strb	r0, [sp, #6]
	vmov.u16	r0, d16[1]
	strb	r0, [sp, #5]
	vmov.u16	r0, d16[0]
	strb	r0, [sp, #4]
	vldmia	sp, {d16, d17}
	vmov	r0, r1, d16
	vmov	r2, r3, d17
	mov	sp, r7
	pop	{r7}
	bx	lr

	.globl	_test2
	.align	2
_test2:                                 @ @test2
@ BB#0:
	push	{r7}
	mov	r7, sp
	sub	sp, sp, #12
	bic	sp, sp, #7
	vld1.64	{d16, d17}, [r0:128]
	add	r0, r0, #16
	vld1.64	{d20, d21}, [r0:128]
	vmovn.i32	d18, q8
	vmov.u16	r0, d18[3]
	vmovn.i32	d16, q10
	strb	r0, [sp, #3]
	vmov.u16	r0, d18[2]
	strb	r0, [sp, #2]
	vmov.u16	r0, d18[1]
	strb	r0, [sp, #1]
	vmov.u16	r0, d18[0]
	strb	r0, [sp]
	vmov.u16	r0, d16[3]
	strb	r0, [sp, #7]
	vmov.u16	r0, d16[2]
	strb	r0, [sp, #6]
	vmov.u16	r0, d16[1]
	strb	r0, [sp, #5]
	vmov.u16	r0, d16[0]
	strb	r0, [sp, #4]
	ldm	sp, {r0, r1}
	mov	sp, r7
	pop	{r7}
	bx	lr

Now, however, we generate the much more straightforward:
	.syntax unified
	.section	__TEXT,__text,regular,pure_instructions
	.globl	_test1
	.align	2
_test1:                                 @ @test1
@ BB#0:
	add	r1, r0, #48
	add	r2, r0, #32
	vld1.64	{d20, d21}, [r0:128]
	vld1.64	{d16, d17}, [r1:128]
	add	r1, r0, #16
	vld1.64	{d18, d19}, [r2:128]
	vld1.64	{d22, d23}, [r1:128]
	vmovn.i32	d17, q8
	vmovn.i32	d16, q9
	vmovn.i32	d18, q10
	vmovn.i32	d19, q11
	vmovn.i16	d17, q8
	vmovn.i16	d16, q9
	vmov	r0, r1, d16
	vmov	r2, r3, d17
	bx	lr

	.globl	_test2
	.align	2
_test2:                                 @ @test2
@ BB#0:
	vld1.64	{d16, d17}, [r0:128]
	add	r0, r0, #16
	vld1.64	{d18, d19}, [r0:128]
	vmovn.i32	d16, q8
	vmovn.i32	d17, q9
	vmovn.i16	d16, q8
	vmov	r0, r1, d16
	bx	lr

llvm-svn: 179989
2013-04-21 23:47:41 +00:00
Arnold Schwaighofer c0c7ff4ac0 X86 cost model: Exit before calling getSimpleVT on non-simple VTs
getSimpleVT can only handle simple value types.
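
A sketch of the guard's shape (names and the fallback are illustrative):

  #include "llvm/CodeGen/ValueTypes.h"
  using namespace llvm;

  // getSimpleVT() asserts on extended types such as <3 x i1>, so test
  // isSimple() first and bail out with a conservative answer.
  unsigned costOrDefault(EVT VT, unsigned DefaultCost) {
    if (!VT.isSimple())
      return DefaultCost;        // exit before getSimpleVT()
    MVT SVT = VT.getSimpleVT();  // safe: VT has a simple MVT here
    (void)SVT;                   // the real code looks up a cost table
    return DefaultCost;
  }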

radar://13676022

llvm-svn: 179714
2013-04-17 20:04:53 +00:00
Nadav Rotem 87a0af6e1b CostModel: increase the default cost of supported floating point operations from 1 to 2. Fixed a few tests that changed because now the cost of one insert + a vector operation on two doubles is lower than two scalar operations on doubles.
llvm-svn: 179413
2013-04-12 21:15:03 +00:00
Manman Ren 06a9d50a35 Aliasing rules for struct-path aware TBAA.
Added PathAliases to check if two struct-path tags can alias.
Added command line option -struct-path-tbaa.

llvm-svn: 179337
2013-04-11 23:24:18 +00:00